Video tag processing method and device

Document No. 7779, published 2021-09-17

1. A method for processing video tags, comprising the following steps:

obtaining a video tag sequence of a target social video, wherein the video tag sequence comprises a plurality of video tags;

selecting a theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in a social platform;

generating a word vector corresponding to the video tag based on the video tag, and generating a word vector corresponding to the theme video tag based on the theme video tag;

calculating the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags;

based on the similarity, determining a topic drift result between the video tags contained in the video tag sequence and the target social video.

2. The method for processing video tags according to claim 1, wherein the selecting the theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform comprises:

determining the number of social videos containing the video tag in the social platform;

deleting, from the video tag sequence, the video tags for which the number of social videos is less than a preset video-number threshold, and generating an updated video tag sequence;

and selecting the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.

3. The method for processing video tags according to claim 1, wherein the selecting the theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform comprises:

matching the video tags against video tags in a predetermined video tag library, wherein the predetermined video tag library is a video tag library containing meaningless video tags;

deleting, from the video tag sequence, the video tags that match video tags in the predetermined video tag library, to generate an updated video tag sequence;

and selecting the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.

4. The method for processing video tags according to claim 1, wherein the selecting the theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform comprises:

determining the number of social videos containing the video tag in the social platform, and determining the total number of social videos contained in the social platform;

calculating the ratio of the total number of social videos contained in the social platform to the number of social videos containing the video tag in the social platform;

determining the importance degree between the video tags contained in the video tag sequence and the target social video based on the ratio, wherein the importance degree and the ratio are in a negative correlation relationship;

deleting, from the video tag sequence, the video tags whose importance degree is smaller than a preset importance-degree threshold, to generate an updated video tag sequence;

and selecting the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.

5. The method for processing video tags according to claim 1, wherein the selecting the theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform comprises:

determining a topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform, wherein the topic matching degree is in a negative correlation with the sequence number corresponding to the position, and in a positive correlation with the sum of the playing times corresponding to the social videos containing the video tag in the social platform;

and selecting a theme video tag from the video tag sequence based on the topic matching degree between the video tag and the target social video.

6. The method for processing the video tag according to claim 5, wherein the determining the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform comprises:

determining the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social video containing the video tag in the social platform and the sum of the number of comments of the social video containing the video tag in the social platform, wherein the topic matching degree and the sum of the number of comments are in positive correlation.

7. The method for processing the video tag according to claim 6, wherein the determining the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social videos containing the video tag in the social platform and the sum of the number of comments of the social videos containing the video tag in the social platform comprises:

calculating a weighted sum of the playing times corresponding to the social videos containing the video tag in the social platform and the comment numbers of the social videos containing the video tag in the social platform;

determining a topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the weighted sum.

8. The method for processing video tags according to claim 1, wherein the determining the topic drift result between the video tags contained in the video tag sequence and the target social video based on the similarity comprises:

if the similarity is lower than a preset similarity threshold, determining that topic drift exists between the video tags contained in the video tag sequence and the target social video;

and if the similarity is higher than or equal to the preset similarity threshold, determining that no topic drift exists between the video tags contained in the video tag sequence and the target social video.

9. The method for processing video tags according to claim 8, further comprising:

and if the similarity is lower than a preset similarity threshold, adding the target social video to a search blacklist so that it cannot be searched by social users.

10. A video tag processing apparatus, comprising:

an acquisition unit, configured to acquire a video tag sequence of a target social video, wherein the video tag sequence comprises a plurality of video tags;

a selecting unit, configured to select a theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform;

a generating unit, configured to generate a word vector corresponding to the video tag based on the video tag, and generate a word vector corresponding to the theme video tag based on the theme video tag;

a calculating unit, configured to calculate the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags;

and an execution unit, configured to determine a topic drift result between the video tags contained in the video tag sequence and the target social video based on the similarity.

Background

With the development of internet technology, users in a social platform can publish social videos in the social platform for other users in the social platform to search and browse.

When a user uploads a social video to a social platform, video tags related to video content may be added to the social video, for example, one or more video tags may be added in the form of hashtags, and these video tags enable the user to search for the social video in the social platform as needed.

The video tags that users add to social videos are entirely user-defined. As a result, the video tags added by some users can be seriously inconsistent with the mainstream theme of the social video. When users then search for social videos in the social platform by video tag, topic drift may exist between the mainstream theme reflected by a retrieved social video and the video tags of that social video. How to quickly detect such topic drift between the theme reflected by a social video and the video tags of the social video has therefore become an urgent technical problem.

Disclosure of Invention

The embodiment of the application provides a method and a device for processing video tags, which can solve the technical problem that topic drift exists between searched social videos and video tags of the social videos in the related technology.

Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.

According to an aspect of an embodiment of the present application, there is provided a method for processing a video tag, including: obtaining a video tag sequence of a target social video, wherein the video tag sequence comprises a plurality of video tags; selecting a theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in a social platform; generating a word vector corresponding to the video tag based on the video tag, and generating a word vector corresponding to the theme video tag based on the theme video tag; calculating the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags; and based on the similarity, determining a topic drift result between the video tags contained in the video tag sequence and the target social video.
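As a rough illustration of the similarity step above, the following sketch averages the word vectors of all video tags and of the selected theme video tags, then compares the two means by cosine similarity. The toy vectors, the `topic_drift` helper name, and the 0.5 threshold are illustrative assumptions, not part of the claimed method.

```python
import math

def mean_vector(vectors):
    # Component-wise average of a list of equal-length word vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def topic_drift(tag_vectors, theme_vectors, threshold=0.5):
    # Similarity below the preset threshold indicates topic drift.
    sim = cosine_similarity(mean_vector(tag_vectors), mean_vector(theme_vectors))
    return sim, sim < threshold
```

With real word vectors (e.g. from a trained word-embedding model), a video whose average tag vector points away from its theme tags would fall below the threshold and be flagged as drifting.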

According to an aspect of an embodiment of the present application, there is provided a video tag processing apparatus, including: an acquisition unit, configured to acquire a video tag sequence of a target social video, wherein the video tag sequence comprises a plurality of video tags; a selecting unit, configured to select a theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform; a generating unit, configured to generate a word vector corresponding to the video tag based on the video tag, and generate a word vector corresponding to the theme video tag based on the theme video tag; a calculating unit, configured to calculate the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags; and an execution unit, configured to determine a topic drift result between the video tags contained in the video tag sequence and the target social video based on the similarity.

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: determine the number of social videos containing the video tag in the social platform; delete, from the video tag sequence, the video tags for which the number of social videos is less than a preset video-number threshold, and generate an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: match the video tags against video tags in a predetermined video tag library, wherein the predetermined video tag library is a video tag library containing meaningless video tags; delete, from the video tag sequence, the video tags that match video tags in the predetermined video tag library, to generate an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.
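A minimal sketch of this filtering step, in which a hypothetical stop-tag set stands in for the predetermined library of meaningless video tags:

```python
# Hypothetical stand-in for the predetermined library of meaningless tags.
MEANINGLESS_TAGS = {"#fyp", "#viral", "#followme"}

def remove_meaningless(tag_sequence, library=MEANINGLESS_TAGS):
    # Drop tags found in the library while preserving the original order,
    # so that position-based theme-tag selection still applies afterwards.
    return [tag for tag in tag_sequence if tag not in library]
```

Preserving the original ordering matters here because the subsequent theme-tag selection scores tags by their position in the sequence.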

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: determine the number of social videos containing the video tag in the social platform, and determine the total number of social videos contained in the social platform; calculate the ratio of the total number of social videos contained in the social platform to the number of social videos containing the video tag in the social platform; determine the importance degree between the video tags contained in the video tag sequence and the target social video based on the ratio, wherein the importance degree and the ratio are in a negative correlation relationship; delete, from the video tag sequence, the video tags whose importance degree is smaller than a preset importance-degree threshold, to generate an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.
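One way to realize the stated negative correlation between the importance degree and the ratio is to take the ratio's reciprocal, i.e. the fraction of platform videos that carry the tag. The function names and the threshold below are illustrative assumptions, not the claimed formula.

```python
def tag_importance(total_videos, videos_with_tag):
    # The ratio in the text is total / containing; its reciprocal falls as
    # the ratio rises (one possible choice of negative correlation).
    return videos_with_tag / total_videos

def filter_by_importance(tag_sequence, tag_counts, total_videos, threshold):
    # Keep only tags whose importance degree reaches the preset threshold,
    # preserving the original sequence order for later position-based scoring.
    return [t for t in tag_sequence
            if tag_importance(total_videos, tag_counts[t]) >= threshold]
```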

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: determine a topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform, wherein the topic matching degree is in a negative correlation with the sequence number corresponding to the position, and in a positive correlation with the sum of the playing times corresponding to the social videos containing the video tag in the social platform; and select a theme video tag from the video tag sequence based on the topic matching degree between the video tag and the target social video.

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: determining the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social video containing the video tag in the social platform and the sum of the number of comments of the social video containing the video tag in the social platform, wherein the topic matching degree and the sum of the number of comments are in positive correlation.

In some embodiments of the present application, based on the foregoing solution, the selecting unit is configured to: calculate a weighted sum of the playing times corresponding to the social videos containing the video tag in the social platform and the comment numbers of the social videos containing the video tag in the social platform; and determine a topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the weighted sum.
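The weighted combination described here might be sketched as follows; the specific weights and the use of the 1-based sequence number as a divisor are illustrative assumptions, chosen only to satisfy the stated correlations (positive with plays and comments, negative with position).

```python
def topic_matching_degree(sequence_number, play_sum, comment_sum,
                          w_play=0.7, w_comment=0.3):
    # Weighted sum of plays and comments (both positively correlated with the
    # matching degree), divided by the 1-based sequence number so that tags
    # added earlier score higher (negative correlation with position).
    weighted = w_play * play_sum + w_comment * comment_sum
    return weighted / sequence_number
```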

In some embodiments of the present application, based on the foregoing solution, the execution unit is configured to: if the similarity is lower than a preset similarity threshold, determine that topic drift exists between the video tags contained in the video tag sequence and the target social video; and if the similarity is higher than or equal to the preset similarity threshold, determine that no topic drift exists between the video tags contained in the video tag sequence and the target social video.

In some embodiments of the present application, based on the foregoing solution, the video tag processing apparatus further includes: an adding unit, configured to add the target social video to a search blacklist, so that it cannot be searched by social users, if the similarity is lower than a preset similarity threshold.

According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the processing method of the video tag as described in the above embodiments.

According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of processing a video tag as described in the above embodiments.

According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the processing method of the video tag provided in the above-mentioned various optional embodiments.

In the technical solutions provided in some embodiments of the present application, theme video tags that conform to the theme reflected by a social video are selected from the video tag sequence based on the position of each video tag in the sequence and the sum of the playing times, in the social platform, of the social videos containing that tag. The similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags is then calculated, and the match between the video tags contained in the sequence and the theme reflected by the social video is determined from that similarity, so that topic drift between the theme reflected by a social video and its video tags is detected accurately.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:

fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.

Fig. 2 schematically shows a flow chart of a method of processing a video tag according to an embodiment of the application.

Fig. 3 schematically shows a detailed flowchart of step S220 in an embodiment according to the present application.

Fig. 4 schematically shows a flow chart of a method of processing a video tag according to an embodiment of the application.

Fig. 5 schematically shows a detailed flowchart of step S220 in an embodiment according to the present application.

Fig. 6 schematically shows a detailed flowchart of step S220 in an embodiment according to the present application.

Fig. 7 schematically shows a detailed flowchart of step S220 in an embodiment according to the present application.

Fig. 8 shows a block diagram of a processing device of a video tag according to an embodiment of the application.

FIG. 9 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.

The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

Machine Learning (ML) is a multi-disciplinary field that draws on probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.

The solution provided in the embodiments of the present application relates to techniques such as machine learning: word vectors are generated for the video tags of a social video, and the similarity between the average word vector of the video tags and the average word vector of the selected theme video tags is used to measure how well the tags match the theme reflected by the video, so that topic drift between the theme reflected by a social video and its video tags can be detected.

Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.

As shown in fig. 1, the system architecture may include a client 101, a network 102, and a server 103. The client 101 and the server 103 are connected via a network 102, and perform data interaction based on the network 102, which may include various connection types, such as wired communication links, wireless communication links, and so on.

It should be understood that the number of clients 101, networks 102, and servers 103 in fig. 1 is merely illustrative. There may be any number of clients 101, networks 102, and servers 103, as desired for implementation.

The server 103 acquires a video tag sequence of a target social video, wherein the video tag sequence comprises a plurality of video tags; selects a theme video tag from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform; generates a word vector corresponding to the video tag based on the video tag, and generates a word vector corresponding to the theme video tag based on the theme video tag; calculates the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags; and based on the similarity, determines a topic drift result between the video tags contained in the video tag sequence and the target social video.

As can be seen from the above, theme video tags that conform to the theme reflected by the social video are selected from the video tag sequence based on the position of each video tag in the sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform. By calculating the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags, the match between the video tags contained in the video tag sequence and the theme reflected by the social video is determined according to the similarity, thereby accurately detecting topic drift between the theme reflected by the social video and the video tags of the social video.

It should be noted that the processing method for the video tag provided in the embodiment of the present application is generally executed by the server 103, and accordingly, the processing device for the video tag is generally disposed in the server 103. However, in other embodiments of the present application, the client 101 may also have a similar function as the server 103, so as to execute the scheme of the processing method for the video tag provided in the embodiments of the present application. The details of implementation of the technical solution of the embodiments of the present application are set forth in the following.

Fig. 2 schematically shows a flow chart of a method of processing a video tag according to an embodiment of the present application, which may be performed by a server, which may be the server shown in fig. 1. Referring to fig. 2, the processing method of the video tag at least includes steps S210 to S250, which are described in detail as follows.

In step S210, a video tag sequence of the target social video is obtained, where the video tag sequence includes a plurality of video tags.

In an embodiment of the application, the target social video is a social video published in the social platform by a user of the social platform, and after the target social video is published in the social platform, any user in the social platform can browse the social video.

The user may add one or more video tags to the target social video. A video tag here refers to a custom tag added by the user, typically a hash-sign tag (hashtag) such as "#travel" or "#autumn". The video tags that the user adds to the target social video can be used as keywords by the social video search engine in the social platform, and they can also serve as labels of the social video, displayed together with it.

The video tag sequence is the tag sequence formed by the video tags in the order in which the user added them when publishing the target social video. The order of addition corresponds to the position of each video tag in the video tag sequence, i.e., the first video tag added occupies the first position of the sequence, and so on.

In step S220, a theme video tag is selected from the video tag sequence based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social video including the video tag in the social platform.

In one embodiment of the present application, a theme video tag refers to a tag with a high matching degree with the theme reflected by the target social video. When a user adds video tags to a target social video, tags matching the theme reflected by the video are generally added according to that theme. In general, among the video tags the user adds to the target social video, the video tags added first are the ones with a high matching degree with the theme reflected by the target social video.

Based on this, the position of the video tag included in the video tag sequence may be a factor in determining whether the video tag is a subject video tag.

When a target social video is uploaded to a social platform, if a video tag added by the user is a theme video tag, other users are relatively likely to play the target social video after finding it by searching for that tag, so the playing times of the target social video are relatively high. Conversely, if a video tag added by the user is not a theme video tag, other users are relatively unlikely to play the target social video after searching for it by that tag, so the playing times of the target social video are relatively low.

Based on this, the sum of the playing times corresponding to all social videos containing a certain video tag in the social platform may be used as a factor for determining whether that video tag is a theme video tag of the target social video.

In an embodiment of the application, for all video tags included in a video tag sequence, whether a video tag is a qualified theme video tag or not may be determined based on parameter values corresponding to two factors, namely, a position of the video tag in the video tag sequence and a sum of playing times corresponding to social videos including the video tag in a social platform.

In an embodiment of the present application, as shown in fig. 3, a specific flowchart of step S220 in an embodiment of the present application is schematically shown, and step S220 may specifically include the following steps S310 to S320, which are described in detail below.

In step S310, a topic matching degree between the video tag and the target social video is determined based on the position of the video tag in the video tag sequence and the sum of the playing times corresponding to the social videos containing the video tag in the social platform. The topic matching degree is negatively correlated with the sequence number corresponding to the position, and positively correlated with the sum of the playing times.

In an embodiment of the application, the topic matching degree between the video tag and the target social video is used as an evaluation value for the matching degree between the video tag and the topic reflected by the target social video, and the larger the evaluation value is, the more likely the video tag is the topic video tag.

The earlier a video tag appears in the video tag sequence, the smaller its corresponding sequence number and the higher the probability that it is a theme video tag; the topic matching degree is therefore negatively correlated with the sequence number corresponding to the tag's position.

The greater the sum of the playing times corresponding to the social videos containing the video tag in the social platform, the higher the possibility that the video tag is a theme video tag; the topic matching degree is therefore positively correlated with that sum.

In one embodiment of the present application, the topic matching degree may be calculated by a formula of the form S1 = B1 / A1, where A1 is the sequence number corresponding to the position of the video tag in the video tag sequence, B1 is the sum of the playing times corresponding to the social videos containing the video tag in the social platform, and S1 is the topic matching degree between the video tag and the target social video.

In one embodiment of the present application, the topic matching degree may also be calculated by a formula of the form S2 = C / A2 + D * B2, where C and D are positive constants, A2 is the sequence number corresponding to the position of the video tag in the video tag sequence, B2 is the sum of the playing times corresponding to the social videos containing the video tag in the social platform, and S2 is the topic matching degree between the video tag and the target social video.
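For concreteness, the two calculations above can be sketched as follows. The specific functional forms S1 = B1 / A1 and S2 = C / A2 + D * B2 are illustrative assumptions chosen only to satisfy the stated correlations (negative with the sequence number, positive with the play-count sum), not necessarily the exact formulas of the method:

```python
def topic_match_simple(position_seq_no: int, play_count_sum: int) -> float:
    """Topic matching degree under the assumed form S = B / A:
    falls as the tag's sequence number A grows, rises with total plays B."""
    return play_count_sum / position_seq_no


def topic_match_affine(position_seq_no: int, play_count_sum: int,
                       c: float = 1.0, d: float = 1.0) -> float:
    """Alternative assumed form S = C / A + D * B with positive constants C, D."""
    return c / position_seq_no + d * play_count_sum


# A tag added first (sequence number 1) outscores a later tag
# with the same total play count.
assert topic_match_simple(1, 1000) > topic_match_simple(3, 1000)
```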

In an embodiment of the present application, step S310 may specifically include: determining the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social video containing the video tag in the social platform and the sum of the number of the comments of the social video containing the video tag in the social platform, wherein the topic matching degree and the sum of the number of the comments are in positive correlation.

In this embodiment, when a target social video is uploaded to a social platform, if a video tag added to the target social video by a user is a theme video tag, the possibility that other users play the target social video after searching the target social video through the theme video tag is relatively high, the number of playing times of the target social video is relatively large, and meanwhile, the possibility that comments are initiated to the social video is relatively large, so that the number of comments of the target social video is relatively large; on the contrary, if the video tag added to the target social video by the user is not the theme video tag, the possibility that other users play the target social video after searching the target social video through the theme video tag is relatively low, the playing times of the target social video are relatively low, the possibility that comments are initiated to the social video is relatively low, and the number of the comments of the target social video is relatively small.

Based on this, the sum of the number of comments corresponding to all social videos including a certain video tag in the social platform can be used as a factor for calculating the topic matching degree between the video tag and the target social video. It can be understood that the sum of the topic matching degree and the number of the comments corresponding to the social video containing the video tag in the social platform is in a positive correlation.

For each video tag contained in the video tag sequence, the topic matching degree between the video tag and the target social video can be calculated based on parameter values corresponding to three factors, namely the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social videos containing the video tag in the social platform and the sum of the number of comments corresponding to all social videos containing a certain video tag in the social platform.

In one embodiment of the application, when the topic matching degree is calculated from the three factors above (the position of the video tag in the video tag sequence, the sum of the playing times corresponding to the social videos containing the video tag in the social platform, and the sum of the number of comments corresponding to those social videos), the sum of the parameter values of the latter two factors may first be computed, and the topic matching degree between the video tag and the target social video may then be calculated from that sum together with the position of the video tag in the video tag sequence.

Alternatively, the topic matching degree may be calculated by a formula of the form S1 = (B1 + E3) / A1, where A1 is the sequence number corresponding to the position of the video tag in the video tag sequence, E3 is the sum of the number of comments corresponding to the social videos containing the video tag in the social platform, B1 is the sum of the playing times corresponding to those social videos, and S1 is the topic matching degree between the video tag and the target social video.

Referring to fig. 4, fig. 4 schematically shows a flowchart of a processing method of a video tag according to an embodiment of the present application, where in this embodiment, the step of determining the topic matching degree between the video tag and the target social video may specifically include steps S410 to S420 based on a position of the video tag in a video tag sequence, a sum of playing times corresponding to social videos containing the video tag in the social platform, and a sum of number of comments of the social videos containing the video tag in the social platform, and is described in detail as follows.

In step S410, a weighted sum of the number of plays corresponding to the social video containing the video tag in the social platform and the sum of the number of comments of the social video containing the video tag in the social platform is calculated.

In step S420, a topic matching degree between the video tag and the target social video is determined based on the position of the video tag in the video tag sequence and the weighted sum.

In one embodiment of the application, when the topic matching degree is calculated from the same three factors, a weighted sum of the parameter values of the latter two factors (the sum of the playing times and the sum of the number of comments corresponding to the social videos containing the video tag in the social platform) may first be computed, and the topic matching degree between the video tag and the target social video may then be calculated from that weighted sum together with the position of the video tag in the video tag sequence.

Alternatively, the topic matching degree may be calculated by a formula of the form S4 = (γ·B4 + δ·E4) / A4, where A4 is the sequence number corresponding to the position of the video tag in the video tag sequence, B4 is the sum of the playing times corresponding to the social videos containing the video tag in the social platform, E4 is the sum of the number of comments corresponding to those social videos, γ is the weight assigned to the sum of the playing times, δ is the weight assigned to the sum of the number of comments, and S4 is the topic matching degree between the video tag and the target social video.
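A minimal sketch of the weighted-sum variant follows; the functional form S4 = (γ·B4 + δ·E4) / A4 and the default weights are illustrative assumptions that satisfy the stated correlations:

```python
def topic_match_weighted(position_seq_no: int, play_count_sum: int,
                         comment_count_sum: int,
                         gamma: float = 0.7, delta: float = 0.3) -> float:
    """Assumed form S = (gamma * B + delta * E) / A: a weighted sum of
    total plays B and total comments E, divided by the tag's sequence
    number A, so the score falls for later-added tags and rises with
    plays and comments."""
    return (gamma * play_count_sum + delta * comment_count_sum) / position_seq_no
```

Weighting the two factors separately lets the platform emphasize whichever signal is more indicative, which is the advantage the embodiment of fig. 4 claims over the unweighted sum.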

Compared with the embodiment that uses a simple unweighted sum of the two parameter values, the embodiment shown in fig. 4 fully considers the different roles that the playing times and the comment counts play in determining the topic matching degree, gives prominence to the more influential factor, and makes the determined topic matching degree better reflect objective reality.

Still referring to fig. 3, in step S320, a topic video tag is selected from the video tag sequence based on the topic matching degree between the video tag and the target social video.

In one embodiment of the present application, there may be multiple ways to select a topic video tag from video tags included in a video tag sequence based on a topic matching degree between the video tag and a target social video.

Optionally, among the video tags included in the video tag sequence, a predetermined number of video tags with the highest topic matching degree may be selected as theme video tags. For example, if the video tag sequence contains 4 video tags and the predetermined number is set to 2, the 2 video tags with the highest topic matching degree are selected as theme video tags.

Optionally, among the video tags included in the video tag sequence, a predetermined percentage of video tags with the highest topic matching degree may be selected as theme video tags. For example, if the video tag sequence contains 5 video tags and the predetermined percentage is set to 0.6, the 3 video tags with the highest topic matching degree are selected as theme video tags.
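The two selection strategies above can be sketched as follows (function and variable names are illustrative):

```python
import math


def select_topic_tags_by_count(tags_with_scores, k):
    """Pick the k tags with the highest topic matching degree.
    `tags_with_scores` is a list of (tag, score) pairs."""
    ranked = sorted(tags_with_scores, key=lambda pair: pair[1], reverse=True)
    return [tag for tag, _ in ranked[:k]]


def select_topic_tags_by_ratio(tags_with_scores, ratio):
    """Pick the top `ratio` fraction of tags, e.g. 0.6 of 5 tags -> 3 tags."""
    k = math.floor(len(tags_with_scores) * ratio)
    return select_topic_tags_by_count(tags_with_scores, k)
```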

In an embodiment of the present application, as shown in fig. 5, a specific flowchart of step S220 in an embodiment of the present application is schematically shown, and step S220 may specifically include the following steps S510 to S530, which are described in detail below.

In step S510, the number of video pieces of the social video containing the video tag in the social platform is determined.

In one embodiment of the application, because the video tags a user adds to the target social video are user-defined, there may be tags that are obviously unrelated to the theme the video reflects, such as randomly typed tags. Since such tags are not known to most users, only a small number of social videos uploaded to the social platform will contain them; video tags unrelated to the theme of the target social video can therefore be found in its video tag sequence according to the number of videos in the social platform that contain each tag.

When determining the number of videos in the social platform that contain a certain video tag, specifically, the video tag sequence of each social video in the platform is obtained; each tag in that sequence is treated as a character string and matched against the character string corresponding to the video tag in question, and only when the two strings are exactly identical is the social video considered to contain the video tag.

In step S520, in the video tag sequence, the video tags with the number of video pieces smaller than the predetermined threshold number of video pieces are deleted, and an updated video tag sequence is generated.

In an embodiment of the application, for each video tag in a video tag sequence, the number of video tags of a social video including the video tag in a social platform is respectively determined, and for the video tags whose number of video tags is less than a predetermined threshold value of the number of video tags, the video tags are deleted from the video tag sequence, so that an updated video tag sequence is generated. The predetermined video number threshold is a value defined according to the actual situation, and specifically, may be set to a positive integer, such as 200.
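Steps S510 to S520 can be sketched as follows, assuming each social video's tags are available as a list of strings and using the exact string matching described above:

```python
def count_videos_with_tag(tag, platform_tag_sequences):
    """Number of videos whose tag sequence contains `tag` as an exact match."""
    return sum(1 for seq in platform_tag_sequences if tag in seq)


def drop_rare_tags(tag_sequence, platform_tag_sequences, min_videos=200):
    """Remove tags appearing in fewer than `min_videos` videos platform-wide,
    preserving the order of the remaining tags (the updated sequence)."""
    return [t for t in tag_sequence
            if count_videos_with_tag(t, platform_tag_sequences) >= min_videos]
```

The default threshold of 200 mirrors the example value in the text; in practice it would be tuned to the platform.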

In step S530, based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos in the social platform that include the remaining video tags, a theme video tag is selected from the updated video tag sequence.

In an embodiment of the application, after the updated video tag sequence is obtained, the theme video tag may be selected from it based on the two factors of the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.

In the technical solution of the embodiment shown in fig. 5, for all video tags corresponding to the target social video, video tags smaller than the threshold of the predetermined number of video pieces are deleted from the video tag sequence of the target social video, so that video tags that are obviously not theme video tags in the video tag sequence are removed, and compared with a method of directly finding out theme video tags from all video tags included in the video tag sequence, unnecessary data processing is reduced, and the efficiency of finding out theme video tags from the video tag sequence of the target social video can be improved.

In an embodiment of the present application, as shown in fig. 6, a specific flowchart of step S220 in an embodiment of the present application is schematically shown, and step S220 may specifically include the following steps S610 to S630, which are described in detail below.

In step S610, the video tags are matched with video tags in a predetermined video tag library, which is a video tag library containing meaningless video tags.

In step S620, the video tags matching the video tags in the predetermined video tag library are deleted from the video tag sequence, and an updated video tag sequence is generated.

In step S630, based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos in the social platform that include the remaining video tags, a theme video tag is selected from the updated video tag sequence.

In an embodiment of the application, the meaningless video tags are video tags that cannot significantly express the subject reflected by the social video, such as video tags of "video number", "short video", and "hot", and the meaningless video tags may be manually counted, a predetermined video tag library is generated according to the counted meaningless video tags, and each video tag included in the video tag sequence is matched with the meaningless video tags in the predetermined video tag library one by one to determine the meaningless video tags in the video tag sequence.

Specifically, each video tag contained in the video tag sequence is treated as a whole as a character string and matched against the character strings corresponding to the meaningless video tags in the predetermined video tag library; only when the two character strings are exactly identical is the video tag considered to match a meaningless video tag in the predetermined video tag library.

The video tags matching those in the predetermined video tag library are deleted from the video tag sequence to generate the updated video tag sequence. After the updated video tag sequence is obtained, the theme video tag can be selected from it based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.
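A minimal sketch of the library-based filter; the example entries come from the text, and the library itself would be manually curated as described:

```python
# Manually curated library of meaningless tags (example entries from the text).
MEANINGLESS_TAGS = {"video number", "short video", "hot"}


def drop_meaningless_tags(tag_sequence, tag_library=MEANINGLESS_TAGS):
    """Delete tags that exactly match an entry of the meaningless-tag
    library, keeping the remaining tags in their original order."""
    return [t for t in tag_sequence if t not in tag_library]
```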

In the technical solution of the embodiment shown in fig. 6, for all video tags corresponding to the target social video, meaningless video tags that cannot significantly express the subject reflected by the social video are deleted from the video tag sequence, so that compared with a method of directly finding out the subject video tags from all video tags included in the video tag sequence, unnecessary data processing is reduced, and the efficiency of finding out the subject video tags from the video tag sequence of the target social video can be improved.

In an embodiment of the present application, as shown in fig. 7, a specific flowchart of step S220 in an embodiment of the present application is schematically shown, and step S220 may specifically include the following steps S710 to S750, which are described in detail below.

In step S710, the number of videos of the social videos in the social platform including the video tag is determined, and the total number of videos of the social videos included in the social platform is determined.

In step S720, a ratio between the total number of videos of the social videos included in the social platform and the number of videos of the social videos determined to include the video tag in the social platform is calculated.

In step S730, based on the ratio, the importance between the video tag included in the video tag sequence and the target social video is determined, where the importance and the ratio are in a negative correlation relationship.

In step S740, the video tags having the importance degree smaller than the predetermined importance degree threshold are deleted from the video tag sequence, and an updated video tag sequence is generated.

In step S750, based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform, a theme video tag is selected from the updated video tag sequence.

In one embodiment of the present application, among all video tags included in a video tag sequence, there may be video tags that also occur widely in other social videos, and then such video tags are of lower importance relative to the target social video, and are also less suitable as subject video tags, so that they may also be deleted from the video tag sequence.

Specifically, the number of video pieces of the social videos containing the video tags in the social platform and the total number of video pieces of the social videos contained in the social platform may be determined, a ratio between the total number of video pieces of the social videos contained in the social platform and the number of video pieces of the social videos containing the video tags in the social platform is calculated, and based on the ratio, the importance degree between the video tags contained in the video tag sequence and the target social video is determined.

Optionally, the ratio may be directly used as the importance between the video tag included in the video tag sequence and the target social video, or the logarithm of the ratio may be used as the importance between the video tag included in the video tag sequence and the target social video.

The importance degree is an evaluation value of how important a video tag is relative to the target social video: the greater the value, the more important the tag. The larger the above ratio is, the lower the proportion of social videos in the social platform that contain the video tag; the importance between the video tag contained in the video tag sequence and the target social video is accordingly lower, that is, the importance is in a negative correlation with the ratio.

The video tags whose importance degree is smaller than the predetermined importance threshold are deleted from the video tag sequence, generating the updated video tag sequence. After the updated video tag sequence is obtained, the theme video tag can be selected from it based on the positions of the remaining video tags in the updated video tag sequence and the sum of the playing times corresponding to the social videos containing the remaining video tags in the social platform.
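A sketch of the importance filter, following the logarithm-of-the-ratio option mentioned above as the importance score; the choice of mapping and the threshold value are illustrative assumptions:

```python
import math


def tag_importance(videos_with_tag: int, total_videos: int) -> float:
    """Assumed importance score: log of (total videos / videos containing
    the tag), analogous to inverse document frequency."""
    return math.log(total_videos / videos_with_tag)


def drop_unimportant_tags(tag_counts, total_videos, min_importance):
    """Keep only tags whose importance reaches the threshold.
    `tag_counts` maps each tag to the number of videos containing it."""
    return [t for t, n in tag_counts.items()
            if tag_importance(n, total_videos) >= min_importance]
```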

In the technical solution of the embodiment shown in fig. 7, for all video tags corresponding to the target social video, the video tags with a lower importance degree relative to the target social video are deleted from the video tag sequence, so that compared with a method of directly finding out the theme video tags from all the video tags included in the video tag sequence, unnecessary data processing is reduced, and the efficiency of finding out the theme video tags from the video tag sequence of the target social video can be improved.

Still referring to fig. 2, in step S230, a word vector corresponding to the video tag is generated based on the video tag, and a word vector corresponding to the topic video tag is generated based on the topic video tag.

In one embodiment, after determining the video tag sequence of the target social video, a word vector corresponding to each video tag may be generated based on the video tag, and a word vector corresponding to each theme video tag may be generated based on the theme video tag. The word vector corresponding to a video tag and the word vector corresponding to a theme video tag are vectors of the same dimension.

Optionally, when generating the Word vector corresponding to the video tag and the Word vector corresponding to the theme video tag, the generation may be specifically implemented by using a pre-trained machine learning model, where the machine learning model may be a Word2vec Word vector calculation model, or a GloVe Word vector model, and the like, and is not limited herein.
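Because a trained Word2vec or GloVe model is assumed rather than shown, the sketch below substitutes a deterministic toy embedding derived from the tag string; the hashing scheme is purely illustrative and stands in for querying a pre-trained model:

```python
import hashlib


def toy_word_vector(tag: str, dim: int = 8) -> list:
    """Deterministic stand-in for a trained embedding model: hash the tag
    and map the first `dim` digest bytes to floats in [0, 1]. In practice
    a pre-trained Word2vec or GloVe model would supply the vector."""
    digest = hashlib.sha256(tag.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]
```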

In step S240, a similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the subject video tags is calculated.

In an embodiment of the application, since a plurality of word vectors corresponding to the video tags and a plurality of word vectors corresponding to the theme video tags are obtained, the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags can be computed, and the similarity between the two average vectors can then be calculated.

The similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the theme video tags can be calculated in various ways, for example using the cosine similarity formula or the Euclidean distance formula.
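The averaging and cosine-similarity computations of step S240 might be sketched as follows, in plain Python with no external libraries:

```python
import math


def mean_vector(vectors):
    """Element-wise average of a list of equal-length word vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


def cosine_similarity(a, b):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```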

In step S250, based on the similarity, a topic drift result between the video tag included in the video tag sequence and the target social video is determined.

In an embodiment of the application, the higher the similarity between the average value of the word vectors corresponding to the video tags and the average value of the word vectors corresponding to the topic video tags, the higher the matching degree between the video tags included in the video tag sequence of the target social video and the topics reflected by the target social video is, and it may be determined that there is no topic drift between the video tags included in the video tag sequence and the target social video.

In an embodiment of the present application, step S250 may specifically include: if the similarity is lower than a preset similarity threshold, determining that theme drift is generated between the video tags contained in the video tag sequence and the target social video; and if the similarity is higher than or equal to a preset similarity threshold, determining that no theme shift is generated between the video tags contained in the video tag sequence and the target social video.

As can be seen from the above, theme video tags that accord with the theme reflected by the social video are selected from the video tag sequence based on the position of each video tag in the sequence and the sum of the playing times corresponding to the social videos containing the tag in the social platform; the similarity between the average of the word vectors corresponding to the video tags and the average of the word vectors corresponding to the theme video tags is then calculated, and the matching between the video tags in the sequence and the theme reflected by the social video is judged from that similarity, thereby achieving accurate detection of topic drift between the theme reflected by a social video and its video tags.

In an embodiment of the present application, the method for processing a video tag in the present embodiment may further include: if the similarity is lower than a preset similarity threshold, the target social video is added to a search blacklist which cannot be searched by the social users.

In this embodiment, if the similarity is lower than the predetermined similarity threshold, it is determined that significant topic drift occurs between the video tag included in the video tag sequence corresponding to the target social video and the reflected topic of the target social video, and the target social video may be added to a search blacklist that cannot be searched by the social user, so that the user does not find the target social video when searching through the video tag included in the target social video.
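The threshold check and blacklist update described above can be sketched as follows; the in-memory set standing in for the platform's blacklist store is an assumption:

```python
# Illustrative stand-in for the platform's search-blacklist store.
SEARCH_BLACKLIST = set()


def check_topic_drift(similarity: float, threshold: float, video_id: str) -> bool:
    """Return True (topic drift) when similarity falls below the threshold,
    and in that case blacklist the video so tag search no longer surfaces it."""
    if similarity < threshold:
        SEARCH_BLACKLIST.add(video_id)
        return True
    return False
```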

By adding the social videos with the video tags and the topics which are reflected by the social videos and are seriously unmatched to the search blacklist which cannot be searched by the social users, the probability that the social videos with the video tags matched with the topics reflected by the social videos are fed back to the users when the users search through the video tags is improved.

The following describes embodiments of the apparatus of the present application, which may be used to perform the method for processing the video tag in the above embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method for processing the video tag described above in the present application.

Fig. 8 shows a block diagram of a processing device of a video tag according to an embodiment of the application.

Referring to fig. 8, a device 800 for processing a video tag according to an embodiment of the present application includes: an obtaining unit 810, a selecting unit 820, a generating unit 830, a calculating unit 840 and an executing unit 850. The obtaining unit 810 is configured to obtain a video tag sequence of a target social video, where the video tag sequence includes a plurality of video tags; a selecting unit 820, configured to select a theme video tag from the video tag sequence based on a position of the video tag in the video tag sequence and a sum of playing times corresponding to a social video including the video tag in a social platform; a generating unit 830, configured to generate a word vector corresponding to the video tag based on the video tag, and generate a word vector corresponding to the theme video tag based on the theme video tag; a calculating unit 840, configured to calculate a similarity between an average value of the word vectors corresponding to the video tags and an average value of the word vectors corresponding to the subject video tags; an executing unit 850, configured to determine a topic shift result between the video tag included in the video tag sequence and the target social video based on the similarity.

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: determine the number of social videos in the social platform that contain each video tag; delete from the video tag sequence the video tags whose video count is less than a preset video-count threshold, generating an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated sequence and the sum of the playing times, in the social platform, of the social videos containing those remaining tags.
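The count-based filtering step above can be sketched in a few lines. This is an assumed minimal realization: the function name, the `video_counts` lookup table, and the threshold value are all illustrative, and order is preserved because position in the sequence matters for the later theme-tag selection.

```python
def filter_rare_tags(tag_sequence, video_counts, min_count):
    """Remove tags attached to fewer than `min_count` videos on the
    platform, preserving the original order of the remaining tags.
    Tags absent from `video_counts` are treated as appearing 0 times."""
    return [tag for tag in tag_sequence
            if video_counts.get(tag, 0) >= min_count]
```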

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: match the video tags against video tags in a predetermined video tag library, where the predetermined video tag library contains meaningless video tags; delete from the video tag sequence the video tags that match tags in the predetermined video tag library, generating an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated sequence and the sum of the playing times, in the social platform, of the social videos containing those remaining tags.

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: determine the number of social videos in the social platform that contain the video tag, and determine the total number of social videos contained in the social platform; calculate the ratio of the total number of social videos in the platform to the number of social videos containing the video tag; determine, based on the ratio, the importance degree between each video tag in the video tag sequence and the target social video, where the importance degree is negatively correlated with the ratio; delete from the video tag sequence the video tags whose importance degree is less than a preset importance threshold, generating an updated video tag sequence; and select the theme video tag from the updated video tag sequence based on the positions of the remaining video tags in the updated sequence and the sum of the playing times, in the social platform, of the social videos containing those remaining tags.
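The patent only constrains the importance degree to be negatively correlated with the ratio; one simple formula satisfying that constraint is the ratio's reciprocal, i.e. the fraction of platform videos carrying the tag. The sketch below uses that assumed formula, and all names and the threshold are illustrative.

```python
def tag_importance(total_videos, videos_with_tag):
    """One possible importance score: the reciprocal of
    ratio = total_videos / videos_with_tag, which is negatively
    correlated with the ratio as the embodiment requires. Equals
    videos_with_tag / total_videos; 0.0 for unseen tags."""
    if videos_with_tag == 0:
        return 0.0
    ratio = total_videos / videos_with_tag
    return 1.0 / ratio

def filter_by_importance(tag_sequence, tag_video_counts, total_videos,
                         min_importance):
    """Drop tags whose importance degree falls below a preset threshold,
    keeping the original sequence order."""
    return [t for t in tag_sequence
            if tag_importance(total_videos,
                              tag_video_counts.get(t, 0)) >= min_importance]
```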

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: determine a topic matching degree between each video tag and the target social video based on the position of the video tag in the video tag sequence and the sum of the playing times, in the social platform, of the social videos containing the video tag, where the topic matching degree is negatively correlated with the sequence number of the position and positively correlated with the sum of the playing times; and select the theme video tag from the video tag sequence based on the topic matching degree between each video tag and the target social video.

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: determine the topic matching degree between each video tag and the target social video based on the position of the video tag in the video tag sequence, the sum of the playing times, in the social platform, of the social videos containing the video tag, and the sum of the comment counts, in the social platform, of the social videos containing the video tag, where the topic matching degree is positively correlated with the sum of the comment counts.

In some embodiments of the present application, based on the foregoing scheme, the selecting unit 820 is configured to: calculate a weighted sum of the playing times, in the social platform, of the social videos containing the video tag and the comment counts, in the social platform, of the social videos containing the video tag; and determine the topic matching degree between the video tag and the target social video based on the position of the video tag in the video tag sequence and the weighted sum.
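The matching-degree embodiments above constrain only the direction of each correlation; one formula consistent with all of them divides the weighted play/comment sum by the tag's 1-based position. The sketch below uses that assumed formula, and the weight values, function names, and `top_k` cutoff are all illustrative, not taken from the patent.

```python
def topic_matching_degree(position, play_count_sum, comment_count_sum,
                          play_weight=0.7, comment_weight=0.3):
    """One possible topic matching degree: negatively correlated with the
    tag's 1-based position in the sequence, positively correlated with
    the weighted sum of play and comment counts. Weights are illustrative."""
    weighted = play_weight * play_count_sum + comment_weight * comment_count_sum
    return weighted / position

def select_theme_tags(tags, stats, top_k=3):
    """Rank tags by matching degree and keep the top_k as theme tags.
    `stats` maps tag -> (position, play_count_sum, comment_count_sum)."""
    ranked = sorted(tags, key=lambda t: topic_matching_degree(*stats[t]),
                    reverse=True)
    return ranked[:top_k]
```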

In some embodiments of the present application, based on the foregoing scheme, the executing unit 850 is configured to: determine that topic drift occurs between the video tags contained in the video tag sequence and the target social video if the similarity is lower than a preset similarity threshold; and determine that no topic drift occurs between the video tags contained in the video tag sequence and the target social video if the similarity is higher than or equal to the preset similarity threshold.

In some embodiments of the present application, based on the foregoing scheme, the video tag processing device further includes an adding unit, configured to add the target social video to a search blacklist that social users cannot search if the similarity is lower than the preset similarity threshold.
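The threshold decision and the blacklisting step above can be combined into one small routine. This is a hedged sketch: the threshold value, function name, and the use of an in-memory set for the blacklist are all assumptions for illustration.

```python
PRESET_SIMILARITY_THRESHOLD = 0.5  # illustrative value, not from the patent

def judge_and_blacklist(similarity, video_id, search_blacklist,
                        threshold=PRESET_SIMILARITY_THRESHOLD):
    """Declare topic drift when similarity is below the preset threshold,
    and add the drifting video to the non-searchable blacklist.
    Returns True if drift was detected."""
    drifted = similarity < threshold
    if drifted:
        search_blacklist.add(video_id)
    return drifted
```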

FIG. 9 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.

It should be noted that the computer system 900 of the electronic device shown in fig. 9 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present application.

As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for system operation are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An Input/Output (I/O) interface 905 is also connected to bus 904.

The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage portion 908 including a hard disk and the like; and a communication portion 909 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication portion 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as necessary, so that a computer program read out therefrom is installed into the storage portion 908 as necessary.

In particular, according to embodiments of the present application, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 909, and/or installed from the removable medium 911. When executed by the Central Processing Unit (CPU) 901, the computer program performs the various functions defined in the system of the present application.

It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of the units do not, in any case, limit the units themselves.

As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.

It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present application.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
