Video classification method and apparatus, electronic device, and computer-readable storage medium
1. A video classification method, characterized in that the video classification method comprises:
acquiring text information of a target video;
determining the type of the target video according to the text information,
the text information comprises first text information and second text information, wherein the first text information is obtained based on a cover frame of the target video, and the second text information is obtained by converting audio data of the target video.
2. The method for video classification according to claim 1, wherein said determining the type of the target video according to the text information comprises:
determining the type of the target video according to the first text information;
when the type of the target video cannot be determined according to the first text information, determining the type of the target video according to the first text information and the second text information or determining the type of the target video according to the second text information.
3. The method for video classification according to claim 1, wherein said determining the type of the target video according to the text information comprises:
determining the type of the target video according to the second text information;
and when the type of the target video cannot be determined according to the second text information, determining the type of the target video according to the first text information and the second text information or determining the type of the target video according to the first text information.
4. The method for video classification according to claim 2 or 3, wherein said determining the type of the target video according to the first text information comprises:
if the first text information contains target keywords and the number of the target keywords contained in the first text information is greater than or equal to a set amount, determining that the type of the target video is a content expansion class, wherein the target keywords are used for indicating videos of the content expansion class.
5. The method for video classification according to claim 4, wherein said determining the type of the target video according to the second text information comprises:
extracting feature data of the second text information, wherein the feature data comprises a personal pronoun feature and a speech rate feature;
and determining whether the type of the target video is a content presentation class according to the feature data.
6. The video classification method according to claim 5, wherein the determining whether the type of the target video is a content presentation class according to the feature data comprises:
if the personal pronoun feature and the speech rate feature both satisfy a dialogue condition, determining that the type of the target video is the content presentation class;
and if the personal pronoun feature and the speech rate feature do not satisfy the dialogue condition, determining that the type of the target video is a content expansion class or a third-party commentary class.
7. A video classification apparatus, characterized in that the video classification apparatus comprises:
an acquisition unit configured to acquire text information of a target video;
a determination unit configured to determine the type of the target video according to the text information,
the text information comprises first text information and second text information, wherein the first text information is obtained based on a cover frame of the target video, and the second text information is obtained by converting audio data of the target video.
8. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video classification method of any of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform the video classification method of any of claims 1 to 6.
10. A computer program product comprising computer instructions, wherein the computer instructions, when executed by at least one processor, implement the video classification method of any of claims 1 to 6.
Background
Short videos, with their short and fast content consumption model, are rapidly becoming popular worldwide. By the end of 2020, nearly half of the world's Internet users had downloaded or used a short video platform. Among the many kinds of short video content, film and television (collection) class short videos meet the entertainment needs of the public and are deeply favored by users; these short videos in turn cover many different kinds of content, so they need to be reasonably classified if accurate user recommendation is to be achieved.
In the related art, many classification methods for short videos exist, but they generally adopt image analysis based on deep learning: frames are extracted from the short video, and the extracted single-frame or multi-frame images are then analyzed to obtain a classification result. However, the picture data obtained by frame extraction cannot fully express the content of the video, and the disparity between picture domain information and video domain information reduces classification accuracy. Meanwhile, image analysis based on deep learning consumes substantial computing resources, which is inconvenient for popularization and application.
Disclosure of Invention
The present disclosure provides a video classification method, apparatus, electronic device, and computer-readable storage medium, so as to solve at least the problems of low classification accuracy and inconvenience of popularization and application in the related art; however, the present disclosure need not solve any of the problems described above.
According to a first aspect of the present disclosure, there is provided a video classification method, including: acquiring text information of a target video; and determining the type of the target video according to the text information, wherein the text information comprises first text information and second text information, the first text information is obtained based on a cover frame of the target video, and the second text information is obtained by converting audio data of the target video.
Optionally, the determining the type of the target video according to the text information includes: determining the type of the target video according to the first text information; when the type of the target video cannot be determined according to the first text information, determining the type of the target video according to the first text information and the second text information or determining the type of the target video according to the second text information.
Optionally, the determining the type of the target video according to the text information includes: determining the type of the target video according to the second text information; and when the type of the target video cannot be determined according to the second text information, determining the type of the target video according to the first text information and the second text information or determining the type of the target video according to the first text information.
Optionally, the determining the type of the target video according to the first text information includes: if the first text information contains target keywords and the number of the target keywords contained in the first text information is greater than or equal to a set amount, determining that the type of the target video is a content expansion class, wherein the target keywords are used for indicating videos of the content expansion class.
Optionally, the determining the type of the target video according to the second text information includes: extracting feature data of the second text information, wherein the feature data comprises a personal pronoun feature and a speech rate feature; and determining whether the type of the target video is a content presentation class according to the feature data.
Optionally, the determining, according to the feature data, whether the type of the target video is a content presentation class includes: if the personal pronoun feature and the speech rate feature both satisfy a dialogue condition, determining that the type of the target video is the content presentation class; and if the personal pronoun feature and the speech rate feature do not satisfy the dialogue condition, determining that the type of the target video is a content expansion class or a third-party commentary class.
Optionally, when the target video includes at least two videos, determining that the type of the target video is the content presentation class if the personal pronoun feature and the speech rate feature both satisfy a dialogue condition includes: determining that the type of the target video is the content presentation class if the personal pronoun features and the speech rate features of the at least two videos all satisfy the dialogue condition.
Optionally, the determining, according to the feature data, whether the type of the target video is a content presentation class includes: determining a dialogue value of the target video according to the feature data, a threshold of the feature data, and a weight of the feature data; and determining whether the type of the target video is the content presentation class according to the relationship between the dialogue value and a dialogue threshold.
Optionally, the target video includes at least one video, wherein, when the target video includes a plurality of videos, the dialogue value of the target video is a statistical value of the dialogue values of the plurality of videos.
Optionally, the extracting feature data of the second text information, wherein the feature data includes a personal pronoun feature and a speech rate feature, includes: counting the number of first and second person pronouns appearing within a specified period of the second text information as the personal pronoun feature; and counting, as the speech rate feature, the duration for which text appears continuously within a specified period of the second text information, and/or the ratio of the number of words of the text to that duration.
According to a second aspect of the present disclosure, there is provided a video classification apparatus comprising: an acquisition unit configured to acquire text information of a target video; and a determination unit configured to determine the type of the target video according to the text information, wherein the text information comprises first text information and second text information, the first text information is obtained based on a cover frame of the target video, and the second text information is obtained by converting audio data of the target video.
Optionally, the determination unit is further configured to: determine the type of the target video according to the first text information; and when the type of the target video cannot be determined according to the first text information, determine the type of the target video according to the first text information and the second text information, or determine the type of the target video according to the second text information.
Optionally, the determination unit is further configured to: determine the type of the target video according to the second text information; and when the type of the target video cannot be determined according to the second text information, determine the type of the target video according to the first text information and the second text information, or determine the type of the target video according to the first text information.
Optionally, the determination unit is further configured to: if the first text information contains target keywords and the number of the target keywords contained in the first text information is greater than or equal to a set amount, determine that the type of the target video is a content expansion class, wherein the target keywords are used for indicating videos of the content expansion class.
Optionally, the determination unit is further configured to: extract feature data of the second text information, wherein the feature data comprises a personal pronoun feature and a speech rate feature; and determine whether the type of the target video is a content presentation class according to the feature data.
Optionally, the determination unit is further configured to: if the personal pronoun feature and the speech rate feature both satisfy a dialogue condition, determine that the type of the target video is a content presentation class; and if the personal pronoun feature and the speech rate feature do not satisfy the dialogue condition, determine that the type of the target video is a content expansion class or a third-party commentary class.
Optionally, when the target video includes at least two videos, the determination unit is further configured to: if the personal pronoun features and the speech rate features of the at least two videos all satisfy the dialogue condition, determine that the type of the target video is a content presentation class.
Optionally, the determination unit is further configured to: determine a dialogue value of the target video according to the feature data, a threshold of the feature data, and a weight of the feature data; and determine whether the type of the target video is the content presentation class according to the relationship between the dialogue value and a dialogue threshold.
Optionally, the target video includes at least one video, wherein, when the target video includes a plurality of videos, the dialogue value of the target video is a statistical value of the dialogue values of the plurality of videos.
Optionally, the determination unit is further configured to: count the number of first and second person pronouns appearing within a specified period of the second text information as the personal pronoun feature; and count, as the speech rate feature, the duration for which text appears continuously within a specified period of the second text information, and/or the ratio of the number of words of the text to that duration.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a video classification method as described above.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform the video classification method as described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement the video classification method as described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
According to the video classification method and the video classification apparatus of the present disclosure, text data rather than image data is used as the input for video classification. Text data efficiently reflects the information content of the video domain, which overcomes the problem of inconsistency between picture domain information and video domain information and improves the accuracy of the classification result. Moreover, the data volume of text information is small and convenient to analyze, which sufficiently reduces the data processing amount, reduces memory occupation, and increases processing speed, thereby facilitating the popularization of the application scenarios of the video classification method and apparatus.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a video classification method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flow diagram illustrating a video classification method according to a specific embodiment of the present disclosure.
Fig. 3 is a block diagram illustrating a video classification apparatus according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram of an electronic device according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" in the present disclosure covers three parallel cases: "any one of the items", "a combination of any plurality of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; (3) including A and B. For another example, "performing at least one of step one and step two" covers the following three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
Film and television (collection) class short videos meet the entertainment needs of the public and are popular with users. The concept of a collection is mainly used on short video platforms and refers to a set of episodes into which the same content is divided because of limitations on video length.
Many classification methods for short videos exist in the related art, benefiting from the development of deep learning, especially in the field of computer vision; these methods generally employ convolutional neural networks as classifiers. In a concrete implementation, frames are extracted from the short video, and the extracted single-frame or multi-frame pictures are used as the input data of the classifier to obtain a classification result.
Such methods have the following major disadvantages:
1. Poor representation consistency. The visual modality data input to the classifier in existing schemes is a picture, while the content carrier of a short video is the video itself. Compared with the complete video, the information contained in picture data obtained by frame extraction is limited and cannot express the content completely. This disparity between picture domain information and video domain information severely impacts the performance of the classifier.
2. High model complexity. Lower model complexity and faster inference speed allow a method to be applied in more scenarios. Although deep learning algorithms have strong fitting capability and can greatly improve accuracy on classification problems, their consumption of computing resources cannot be ignored. For example, the multi-head attention networks adopted in the related art have severe computational demands, which greatly increases the cost of model deployment.
3. Large data annotation workload. Deep learning is typically data-driven, and the amount of data determines the upper limit of the model. To obtain better performance, enterprises generally need to spend considerable manpower and financial resources on cleaning and annotating data.
The video classification method provided by the present disclosure is mainly applied to dividing film and television collections by resource type in a search scenario. Unlike existing short video classification methods, which often use taxonomies of tens or hundreds of classes, the usage scenario of the present disclosure only requires dividing film and television short videos into three classes: commentary, clip, and blooper. When a user inputs a keyword of a certain film or television IP, the platform can preferentially display commentary, clip, or blooper videos according to the user profile, so as to meet the user's fine-grained needs in a search and recommendation scenario: commentary videos are preferentially shown or pushed to users who prefer commentary, and after such a user has watched the commentary of an IP, collections of the other resource types can be pushed.
These three classes are defined as follows:
The commentary class is the short video producer's analysis and explanation of a film or television drama, which helps the user quickly understand the plot and the ideas the director wants to convey.
The clip class, as the name implies, clips highlight segments of the film or drama and presents them directly to the user. No producer pictures or sounds appear, apart from added background music.
The blooper class mainly includes bloopers produced during filming, little-known facts ("cold knowledge") about the film or drama, and Easter eggs.
A short video usually contains data of multiple modalities, such as images, text, and speech. How to mine effective data from all of these is the focus of the present disclosure in dividing film and television (collection) class short videos by resource type. A core observation of the video classification method provided by the present disclosure is that pictures, as visual data, cannot well separate the resource types of film and television (collection) videos, because the commentary class and the clip class are visually almost indistinguishable. The fundamental difference between them is that the commentary class and the blooper class introduce the author's own voice into the video. Commentary and blooper videos are in turn similar in that both contain the author's voice; the main difference is that the cover of a blooper video generally includes keywords such as "bloopers" and "behind-the-scenes cold knowledge". Therefore, the text information of the video cover and the voice data are selected as the input data.
Next, a video classification method and apparatus according to exemplary embodiments of the present disclosure will be described in detail with reference to Figs. 1 to 4, taking the division of film and television (collection) short videos into the three classes of commentary, clip, and blooper as an example. It is understood that, in practical applications, videos with similar characteristics other than film and television (collection) short videos may also be classified according to the video classification method provided by the present disclosure; accordingly, such videos may be divided into a third-party commentary class (representing an author's commentary on specific content, such as the aforementioned commentary class), a content presentation class (representing the direct presentation of specific content in a certain form, such as the aforementioned clip class presenting a film in clipped form), and a content expansion class (representing the presentation of content related to the specific content, such as the aforementioned blooper class).
Fig. 1 is a flowchart illustrating a video classification method according to an exemplary embodiment of the present disclosure.
Referring to fig. 1, in step 101, text information of a target video is acquired. Specifically, the text information includes first text information and second text information.
The first text information is obtained based on the cover frame of the target video; for example, Optical Character Recognition (OCR) may be used to extract the first text information, and the specific implementation of OCR is not discussed in the embodiments of the present disclosure. The cover frame usually contains the title of the target video and therefore reflects the content of the target video relatively accurately, so it can be used to identify blooper class videos. By acquiring the first text information from the cover frame, only the cover frame picture, which has reference value, needs to be processed, which greatly reduces the data processing amount; moreover, only the text information in the picture needs to be extracted, and picture information of little reference value is not analyzed, which further reduces the data processing amount and improves data processing efficiency.
The second text information is obtained by converting the audio data of the target video. The audio data reflects whether the author's voice is introduced: if it is, the type of the target video may be the blooper class or the commentary class; if it is not, the type of the target video may be the clip class. The audio data can therefore be used to identify the clip class. In film and television (collection) short videos, the speech signal mainly consists of the author's voice, in-drama character dialogue, in-drama scene dubbing, background music added by the author, and similar sounds, among which the in-drama sound effects and the background music are noise signals that seriously impair the performance of the classifier. Converting the audio data into the second text information reduces the redundancy of the input signal while suppressing noise, and allows the meaning of the text to be analyzed without analyzing the speaker's tone, thereby reducing both the analysis difficulty and the data processing amount. The second text information may be obtained by, for example, Automatic Speech Recognition (ASR); the specific implementation of ASR is not discussed in the embodiments of the present disclosure.
Furthermore, subtitle information in the target video can be extracted to supplement the second text information. Specifically, the audio data may be converted into text information, the subtitle information in the target video may be extracted, and the superposition of the converted text information and the subtitle information may be used as the second text information. It is understood that if no subtitle information is extracted, the text information converted from the audio data is directly used as the second text information.
Both the first text information and the second text information efficiently reflect the information content of the video domain, which overcomes the problem in the related art that the inconsistency between picture domain information and video domain information seriously affects classifier performance, and improves the accuracy of the classification result. Moreover, text information is small in data volume and convenient to analyze, which sufficiently reduces the data processing amount, reduces memory occupation, and increases processing speed, thereby broadening the application scenarios of the video classification method of the present disclosure.
In step 102, the type of the target video is determined according to the text information. Specifically, whether the type of the target video is the blooper class may be determined according to the first text information alone; in that case only the first text information needs to be acquired in step 101, which reduces the data processing amount. Alternatively, whether the type of the target video is the clip class may be determined according to the second text information alone; in that case only the second text information needs to be acquired in step 101, likewise reducing the data processing amount. Of course, the specific type of the target video can also be determined by combining the first text information and the second text information. It can be understood that in the first two cases only a first judgment is made, using either the first or the second text information, whereas in the third case a first judgment may be made using one of the two, after which whether and how to make a second judgment is decided from the result of the first judgment; the specific method of judging from the first or the second text information can be the same in either role. These are all implementations of the present disclosure and fall within its protection scope.
For the scheme using both the first text information and the second text information, in some embodiments, step 102 optionally first includes: determining the type of the target video according to the first text information. That is, the first judgment is made by means of the first text information. Since the data size of the first text information is small, it can be processed preferentially. If the type of the target video can be determined from the first text information, the second text information need not be processed at all, which reduces the computation load and improves data processing efficiency.
Further, step 101 may then be split accordingly: the first text information is acquired first, and when the second text information does not need to be processed, it need not be acquired at all, that is, the audio data need not be acquired, which sufficiently reduces the data processing amount. Of course, the first text information and the audio data may also be acquired in advance, and whether the audio data needs to be converted into the second text information is then decided as needed; alternatively, the first text information and the second text information may both be obtained in advance. These are all implementations of the present disclosure and fall within its protection scope.
Specifically, a plurality of target keywords for indicating content expansion class videos may first be selected, and the first text information is compared against the plurality of preset target keywords. If the first text information contains target keywords and the number of target keywords it contains is greater than or equal to a set amount, the type of the target video is determined to be the content expansion class; otherwise, it is determined that the specific type of the target video cannot yet be decided and further judgment is required. For the case where the content expansion class is the blooper class, the target keywords may include, for example, "bloopers", "cold knowledge", "behind the scenes", "pre-release photos", and "Easter eggs".
The set amount may be set specifically to adjust the strictness of the classification; for example, it may be set to 1, so that the type of the target video is considered to be the blooper class as long as the first text information contains any target keyword. Specifically, the target video may include at least one video (e.g., a short video), and accordingly the first text information is obtained based on the cover frame of the at least one video. In other words, the target video may be a single video or a collection of multiple videos. When the target video is a collection, the first text information is the collection of the text information obtained from the cover frames of all short videos in the collection. For example, if the cover text of a short video in a certain collection reads "'JiuPing sesame officer': cold knowledge behind the scenes" (the first text information), then both "cold knowledge" and "behind the scenes" match successfully, the number of target keywords contained in the first text information is 2, which is greater than the set amount of 1, and the collection is predicted to be the blooper class.
Alternatively, the set amount may be a fixed value, which simplifies the scheme, reduces the amount of calculation, and reduces missed detections of blooper class short videos. Still taking a set amount equal to 1 as an example, as long as one video in the collection contains a target keyword, the type of the collection is determined to be the blooper class. The set amount may also be adjusted according to the number of videos contained in the target video, for example set in tiers: the more videos the target video contains, the larger the set amount, so as to reduce misjudgments. In this case, by setting the set amount appropriately, a collection containing only a very small number of blooper videos can be determined not to be of the blooper class.
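As an illustration, the following is a minimal sketch of this first-stage keyword judgment in Python. The keyword list mirrors the examples given above, the default set amount of 1 follows the example in this section, and all names and the matching policy are illustrative assumptions rather than an implementation prescribed by this disclosure.

```python
# Minimal sketch of the first-stage keyword judgment; keywords mirror the
# examples above, and the names and policy are illustrative assumptions.
TARGET_KEYWORDS = ["bloopers", "cold knowledge", "behind the scenes", "easter egg"]

def count_target_keywords(cover_text: str) -> int:
    """Count target-keyword occurrences in the OCR text of one cover frame."""
    text = cover_text.lower()
    return sum(text.count(keyword) for keyword in TARGET_KEYWORDS)

def is_content_expansion(cover_texts: list[str], set_amount: int = 1) -> bool:
    """First judgment: True when the covers contain enough target keywords.

    cover_texts holds the first text information of every video in the
    target video; a single video is passed as a one-element list.
    """
    total = sum(count_target_keywords(text) for text in cover_texts)
    return total >= set_amount

# A cover reading "cold knowledge behind the scenes" matches two keywords,
# so with set_amount = 1 the collection is judged to be the blooper class.
print(is_content_expansion(["cold knowledge behind the scenes"]))  # True
```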
Alternatively, when the type of the target video cannot be determined in the first judgment, two situations may arise: it is determined that the type of the target video is not the blooper class, or no conclusion can yet be drawn as to whether it is the blooper class. In the first situation, step 102 further includes: determining the type of the target video according to the first text information and the second text information. That is, it may be determined from the first text information that the type of the target video is not the blooper class but the clip class or the commentary class, and it is then determined from the second text information whether the type is the clip class or the commentary class. In the second situation, for example when the set amount is greater than or equal to 2 and the first text information contains one target keyword, it cannot be concluded that the type of the target video is the blooper class; but since a target keyword is present, the blooper class remains possible, and this situation may be handled in the same way as the first, with the second text information analyzed further. In that further analysis, depending on the actual situation, the type of the target video may be determined to be the clip class directly from the second text information, in which case step 102 further includes: determining the type of the target video according to the second text information. Or it may be determined that the type of the target video is not the clip class but the commentary class or the blooper class, in which case this result must be combined with the fact that the first text information contains a target keyword to determine that the type is the blooper class; that is, step 102 further includes: determining the type of the target video according to the first text information and the second text information.
In some embodiments, optionally, determining the type of the target video according to the second text information includes: extracting feature data of the second text information, wherein the feature data includes a personal pronoun feature and a speech rate feature; and determining from the feature data whether the type of the target video is the content presentation class (e.g., the clip class). Compared with the character dialogue in a clip class video, the author of a commentary class video speaks quickly in order to explain a film plot within a few minutes, and a blooper class video likewise introduces the author's voice at a normal to fast speech rate. In addition, commentary and bloopers are generally delivered from a third-person perspective and rarely use first and second person pronouns, whereas most character speech in clip class videos appears as dialogue, in which first and second person pronouns are used frequently. Based on this, determining from the feature data whether the type of the target video is the content presentation class may specifically include: if the personal pronoun feature and the speech rate feature both satisfy a dialogue condition, determining the type of the target video to be the content presentation class; and if the personal pronoun feature and the speech rate feature do not satisfy the dialogue condition, determining the type of the target video to be the content expansion class or the third-party commentary class. Here, the dialogue condition means that a certain number of first and second person pronouns are present in the second text information and that the speech rate corresponding to the second text information is slow, that is, below a set speech rate. For the case where the personal pronoun feature and the speech rate feature only partially satisfy the dialogue condition, they may be treated as not satisfying it, or as satisfying it, or the analysis may be continued in combination with the first text information. As for how to judge whether the feature data satisfies the dialogue condition, the personal pronoun feature and the speech rate feature may be quantized, and satisfaction decided from the relationship between the quantized dialogue value and a dialogue threshold. During quantization, thresholds may be set separately for the personal pronoun feature and the speech rate feature (for example, the number of first and second person pronouns must reach a set threshold, and the speech rate must reach the set speech rate), so that the two features are analyzed separately; alternatively, the two features may be aggregated after quantization into a single dialogue value, and satisfaction of the dialogue condition judged by analyzing that dialogue value.
Taking quantized aggregation as an example: if a single dialogue threshold is used, then when the dialogue value is greater than or equal to the dialogue threshold, the personal pronoun feature and the speech rate feature are considered to both satisfy the dialogue condition and the target video is determined to be a clip class video; otherwise, the two features are considered not to satisfy the dialogue condition and the target video is determined not to be a clip class video but a commentary class or blooper class video. If the dialogue threshold includes a first dialogue threshold and a second dialogue threshold, with the first dialogue threshold larger than the second, then when the dialogue value is greater than or equal to the larger first dialogue threshold, the two features are considered to both satisfy the dialogue condition and the target video is determined to be a clip class video; when the dialogue value is smaller than the second dialogue threshold, the two features are considered not to satisfy the dialogue condition and the target video is determined not to be a clip class video but a commentary class or blooper class video; and when the dialogue value is smaller than the first dialogue threshold but greater than or equal to the second dialogue threshold, the two features are considered to partially satisfy the dialogue condition, and the analysis can be continued in combination with the first text information. These are implementations of the present exemplary embodiment and are not limiting. By extracting the personal pronoun feature and the speech rate feature from the second text information as feature data, clip class videos can be reliably distinguished from commentary class and blooper class videos.
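The two-threshold judgment just described can be sketched as follows; the numeric thresholds are illustrative assumptions only, and the returned strings simply name the three outcomes discussed above.

```python
# Sketch of the double-threshold judgment on the quantized dialogue value.
# Threshold values are assumptions for illustration, not fixed by this
# disclosure.
FIRST_DIALOGUE_THRESHOLD = 0.7   # the larger threshold
SECOND_DIALOGUE_THRESHOLD = 0.4  # the smaller threshold

def classify_by_dialogue_value(dialogue_value: float) -> str:
    if dialogue_value >= FIRST_DIALOGUE_THRESHOLD:
        # Both features satisfy the dialogue condition.
        return "content presentation (clip)"
    if dialogue_value < SECOND_DIALOGUE_THRESHOLD:
        # The dialogue condition is not satisfied.
        return "commentary or blooper"
    # Partially satisfied: continue the analysis with the first text info.
    return "undecided: analyze the first text information"
```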
Further, when the target video includes at least two videos, determining the type of the target video to be the content presentation class if the personal pronoun feature and the speech rate feature both satisfy the dialogue condition includes: determining the type of the target video to be the content presentation class if the personal pronoun features and the speech rate features of the at least two videos all satisfy the dialogue condition. Comprehensively analyzing all of the at least two videos contained in the target video improves classification accuracy. Specifically, when the target video includes at least two videos, the dialogue condition may be adjusted appropriately: each video may be required to satisfy the single-video dialogue condition individually, or a certain proportion of the videos may be required to satisfy it, or the personal pronoun feature and the speech rate feature may be quantized and a statistical value of the quantization results of the at least two videos determined, with the type of the target video decided from that statistical value by reference to the quantitative analysis method for a single video described above.
To summarize, when the type of the target video cannot be determined from the first text information, there are four cases. First, it is determined from the first text information that the target video is not the blooper class, that is, its type is the clip class or the commentary class, and it is then determined from the second text information which of the two it specifically is. Second, no conclusion can be drawn from the first text information, but the target video is determined to be the clip class from the second text information. Third, no conclusion can be drawn from the first text information, but the target video turns out not to be the blooper class, and the case is handled as the first case. Fourth, no conclusion can be drawn from the first text information, it is determined from the second text information that the target video is not the clip class, and the target video is then determined to be the blooper class by combining this with the fact that the first text information left open the possibility that the target video is a blooper class video.
For the scheme using both the first text information and the second text information, in other embodiments, step 102 optionally includes: determining the type of the target video according to the second text information; and, when the type of the target video cannot be determined according to the second text information, determining the type of the target video according to the first text information and the second text information, or determining the type of the target video according to the first text information. Specifically, whether the type of the target video is the clip class may be determined from the second text information; if so, the judgment is complete, and if not, further analysis is required, covering the following four cases. First, it is determined from the second text information that the target video is not the clip class, that is, its type is the commentary class or the blooper class, and it is then determined from the first text information which of the two it specifically is. Second, no conclusion can be drawn from the second text information (for example, many first and second person pronouns are present in the second text information but the speech rate is fast, or only a few such pronouns are present but the speech rate is normal, so the dialogue condition is only partially satisfied and the target video may still be a clip class video); this is nevertheless treated as the first case and handled accordingly. Third, no conclusion can be drawn from the second text information and the target video may still be a clip class video, but the target video is determined from the first text information to be a blooper class video. Fourth, no conclusion can be drawn from the second text information and the target video may still be a clip class video, it is determined from the first text information that the target video is not a blooper class video, and the target video is then determined to be a clip class video by combining these two results.
Fig. 2 is a flow diagram illustrating a video classification method according to a specific embodiment of the present disclosure.
Referring to fig. 2, the implementation flow of the video classification method of the present disclosure mainly includes three parts: data acquisition and processing, type judgment, and result output. First, keyword matching is performed on the first text information acquired by OCR of the cover frame, thereby determining whether the type of the target video is the blooper class. If the matching fails, that is, the target video is not the blooper class, feature extraction is performed on the second text information converted from the audio data by ASR to obtain feature data including the personal pronoun feature and the speech rate feature; the feature data is input into a classifier, and the classifier outputs the judgment result of the type of the target video.
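Tying the two stages together, the following sketch mirrors the flow of Fig. 2. It reuses is_content_expansion() and classify_by_dialogue_value() from the sketches above and assumes the dialogue value has already been computed from the extracted feature data.

```python
# Sketch of the overall flow of Fig. 2: cover keyword matching first, then
# the feature-based judgment. Reuses the helpers sketched earlier.
def classify_target_video(cover_texts: list[str], dialogue_value: float) -> str:
    if is_content_expansion(cover_texts):       # OCR keyword matching
        return "blooper (content expansion)"
    return classify_by_dialogue_value(dialogue_value)  # ASR feature judgment
```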
Specifically, during feature extraction, the number of first and second person pronouns appearing within a specified period of the second text information may be counted as the personal pronoun feature. The more first and second person pronouns there are, the less likely the target video is the commentary class and the more likely it is the clip class. Table 1 gives examples of first and second person pronouns.
TABLE 1 Examples of first and second person pronouns
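As one possible computation of the personal pronoun feature, the sketch below counts first and second person pronouns whose start time falls within the specified period of word-level ASR output. The English pronoun set is an illustrative stand-in for the examples of Table 1, whose content is not reproduced here.

```python
# Sketch of the personal pronoun feature. The pronoun set is an illustrative
# stand-in for the examples of Table 1.
FIRST_SECOND_PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your"}

def pronoun_feature(asr_words: list[tuple[str, float]], period_end: float) -> int:
    """asr_words: (word, start_time_in_seconds) pairs from the ASR result."""
    return sum(1 for word, start in asr_words
               if start < period_end and word.lower() in FIRST_SECOND_PRONOUNS)

# "You" at 0.1 s and "I" at 0.8 s fall within the first second; "my" at
# 2.0 s does not, so the feature for a 1-second period is 2.
print(pronoun_feature([("You", 0.1), ("I", 0.8), ("my", 2.0)], 1.0))  # 2
```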
The speech rate feature includes at least one of a text duration and a text density.
During feature extraction, the duration for which text appears continuously within a specified period of the second text information can be counted as the text duration. The text duration reflects whether speech occurs in the target video over a long stretch; the larger the text duration, the more likely the target video is the commentary class. For example, for a short video with a total duration of 30 seconds in which only the first 10 seconds contain speech (dialogue or monologue), only the first 10 seconds of audio yield a text result in ASR, corresponding to a text duration dt@30 = 10. When a pause occurs in the speech, the speech can be divided into separate time segments at the pause, and the total length of the segments counted. For example, in "I have a friend", a pause occurs between "have" and "a", and the actual ASR result consists of two speech segments, [0, 0.3] and [1.2, 1.7] seconds:
The total text duration is therefore (0.3 - 0) + (1.7 - 1.2) = 0.8 seconds; for the first 1 second, dt@1 = 0.3, and for the first 1.5 seconds, dt@1.5 = 0.3 + (1.5 - 1.2) = 0.6.
During feature extraction, the ratio of the number of words of the text to the duration for which the text appears continuously within the specified period can also be counted, as the text density. The text density helps to fully reflect the speech rate of the audio: the faster the speech rate, the more likely the target video is the commentary class.
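Both speech rate features can be computed from the ASR speech segments. The sketch below reproduces the worked example above, with segments [0, 0.3] and [1.2, 1.7] seconds; the segment representation is an assumption about the ASR output format.

```python
# Sketch of the speech rate features. Each segment is a (start, end) interval
# in seconds during which text appears continuously in the ASR result.
def text_duration(segments: list[tuple[float, float]], period_end: float) -> float:
    """dt@period_end: total time text appears within the first period_end seconds."""
    return sum(min(end, period_end) - start
               for start, end in segments if start < period_end)

def text_density(word_count: int, segments: list[tuple[float, float]],
                 period_end: float) -> float:
    """Words per second over the time text actually appears in the period."""
    duration = text_duration(segments, period_end)
    return word_count / duration if duration > 0 else 0.0

# Reproducing the example above: dt@30 = 0.8, dt@1 = 0.3, dt@1.5 = 0.6.
segments = [(0.0, 0.3), (1.2, 1.7)]
for period_end in (30.0, 1.0, 1.5):
    print(round(text_duration(segments, period_end), 6))  # 0.8, 0.3, 0.6
```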
In some embodiments, specifically, determining whether the type of the target video is the third-party commentary class or the content presentation class according to the feature data includes: calculating a score of the target video according to the feature data, the threshold of the feature data, and the weight of the feature data; and determining whether the type of the target video is the third-party commentary class or the content presentation class according to the score. Scoring directly from the feature data provides a clear criterion for judging the video type; specifically, the type of the target video can be determined by comparing the score with a classification threshold. The video classification method provided by the present disclosure can classify target videos based on flowchart-style logical judgment, abandoning the existing deep learning-based classification methods. On the one hand, this greatly reduces the dependence on computing power, and fast prediction can be achieved without a GPU; on the other hand, apart from the labeling needed for evaluation data, no data labeling is required, which greatly reduces cost.
Specifically, the feature data is input into a two-class classifier, and the two-class classifier completes the scoring and the type judgment. For example, the two-class classifier can be expressed by the following formula:

score = Σ_{i,j} w_ij · sign(x_ij, t_ij)

where x_ij, t_ij, and w_ij respectively represent the feature data, the threshold of the feature data, and the weight of the feature data, and the sign function is:

sign(x, t) = 1 if x ≥ t, and 0 otherwise.

That is, the value of the sign function is 1 when the feature data is greater than or equal to its threshold, and 0 otherwise; the values of the sign function are then weighted and summed to give the score. It is understood that the sign function can also be generalized to other functions, for example the ratio (x_ij - t_ij) / t_ij of the difference between the feature data and its threshold to the threshold, as long as the relationship between the feature data and its threshold is embodied.
In addition, it can be understood that a commentary class video has more second text information and a faster speech rate. Therefore, the larger the text duration or the text density, the more likely the target video is the commentary class and the less likely it is the clip class, while the opposite holds for the number of first and second person pronouns. Accordingly, the signs of the weights of the text duration and the text density can be made opposite to the sign of the weight of the first and second person pronoun count: when the weights of the text duration and the text density are positive, the weight of the pronoun count is made negative (the score is then the opposite of the dialogue value), and when the weights of the text duration and the text density are negative, the weight of the pronoun count is made positive (the score is then the dialogue value). In this way, the score of the target video properly reflects the type of the target video.
Further, some commentators play a highlight clip at the beginning of the video (affecting the speech rate features) or introduce themselves (affecting the personal pronoun feature), which increases the difficulty of distinguishing the video types. To address this, the specified periods in the feature data can be set to different periods of the target video, and different thresholds and weights can be set for different periods, so that the feature data of different periods is counted and analyzed in a targeted way, improving the accuracy of the judgment result. For example, for a short video with a total duration of 30 seconds, the feature data of the first 5 seconds, the first 10 seconds, the first 20 seconds, and the full duration can be counted separately; Table 2 gives an example of the threshold and weight of each feature datum for the different specified periods.
In Table 2, the weights of the first and second person pronoun counts are negative values, and the weights of the text duration and the text density are positive values, so the higher the score, the more likely the target video belongs to the commentary class.
TABLE 2 Feature thresholds and weight parameters
In some embodiments, for the scheme using both the first text information and the second text information, optionally, step 102 includes: determining the type of the target video according to the second text information; and, when the type of the target video cannot be determined according to the second text information, determining the type of the target video according to the first text information. These embodiments likewise determine the type of the target video by combining the first text information and the second text information: for example, whether the type of the target video is the content presentation class (e.g., the clip class) is determined from the second text information, and if it is not the content presentation class, whether the type is the content expansion class (e.g., the blooper class) or the third-party commentary class (e.g., the commentary class) is determined from the first text information. For the specific judgment method, reference may be made to the foregoing embodiments, which are not repeated here.
In some embodiments, optionally, the target video includes at least one video, wherein, when the target video includes a plurality of videos, the score of the target video is a statistical value of the scores of the plurality of videos. As mentioned above, the target video may include at least one video (e.g., a short video); in other words, the target video may be a single video or a collection of multiple videos. When the target video is a collection including a plurality of videos, calculating a statistical value of the scores of those videos reduces the influence of the number of videos on the score and improves classification accuracy. Specifically, the statistical value is, for example, a mean, a median, or a mode, as long as it reflects the overall score level of the videos in the collection.
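For a collection, the per-video scores can be aggregated as described; the sketch below uses the median, one of the statistics mentioned above.

```python
# Sketch of collection-level aggregation using the median as the statistic.
from statistics import median

def collection_score(video_scores: list[float]) -> float:
    return median(video_scores)

print(collection_score([2.0, 1.8, 0.5]))  # 1.8
```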
Fig. 3 is a block diagram illustrating a video classification apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, a video classification apparatus 300 according to an exemplary embodiment of the present disclosure may include an acquisition unit 301 and a determination unit 302.
The acquisition unit 301 may acquire text information of the target video. Specifically, the text information includes first text information and second text information.
The first text information is obtained based on the cover frame of the target video; for example, Optical Character Recognition (OCR) may be used to extract it, and the specific implementation of OCR is not discussed in the embodiments of the present disclosure. The cover frame usually contains the title of the target video and therefore reflects its content relatively accurately, so it can be used to identify videos of the bonus-footage class. By acquiring the first text information from the cover frame, only the one picture with reference value needs to be processed, which greatly reduces the data processing amount; and because only the text in that picture is extracted, picture information of little reference value is not analyzed, which further reduces the data processing amount and improves data processing efficiency.
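A minimal sketch of extracting the first text information from a cover frame, assuming the pytesseract/Pillow toolchain; the disclosure does not prescribe a specific OCR engine, so any equivalent would do:

```python
from PIL import Image
import pytesseract

def first_text_info(cover_frame_path: str) -> str:
    # Only the single cover frame is processed, which keeps the data
    # processing amount small compared with analyzing every frame.
    return pytesseract.image_to_string(Image.open(cover_frame_path))
```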
The second text information is obtained by converting the audio data of the target video. The audio data can reflect whether the creator's own voice is present: if it is, the target video may be a bonus-footage or narration video; if it is not, the target video may be a clip video. The audio data can therefore be used to identify the clip class. In movie (collection) short videos, the audio signal typically consists of the creator's voice, in-drama character dialogue, in-drama scene dubbing, background music added by the creator, and so on, where the in-drama sound effects and background music are noise signals that seriously impair the performance of the classifier. Converting the audio data into the second text information suppresses this noise while reducing the redundancy of the input signal, and the meaning of the text can be analyzed without analyzing the speaker's tone, reducing both the analysis difficulty and the data processing amount. The second text information may be obtained by, for example, Automatic Speech Recognition (ASR), and the specific implementation of ASR is not discussed in the embodiments of the present disclosure.
Furthermore, subtitle information in the target video can be extracted to supplement the second text information. Specifically, the audio data may be converted into text, the subtitle information in the target video may be extracted, and the superimposition of the converted text and the subtitle information may be used as the second text information. It is understood that if no subtitle information is extracted, the text converted from the audio data is used directly as the second text information.
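A minimal sketch of assembling the second text information under these rules. The ASR and subtitle-extraction backends are passed in as callables because the disclosure leaves them unspecified; the parameter names are assumptions for illustration:

```python
from typing import Callable

def second_text_info(audio_path: str, video_path: str,
                     asr: Callable[[str], str],
                     subtitle_extractor: Callable[[str], str]) -> str:
    transcript = asr(audio_path)                 # audio data converted to text
    subtitles = subtitle_extractor(video_path)   # "" when none are found
    # Superimpose subtitles onto the ASR transcript when available;
    # otherwise the transcript alone is the second text information.
    return f"{transcript} {subtitles}".strip() if subtitles else transcript
```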
Whether the first text information or the second text information, both efficiently reflect the information content of the video domain, overcoming the problem in the related art that inconsistency between image-domain and video-domain information seriously degrades classifier performance, and improving the accuracy of the classification result. Moreover, text information has a small data volume and is convenient to analyze, which sufficiently reduces the data processing amount, lowers memory occupation, and increases processing speed, thereby broadening the application scenarios of the video classification method of the present disclosure.
The determination unit 302 may determine the type of the target video according to the text information. Specifically, it may determine only whether the target video is the bonus-footage class from the first text information alone, in which case the acquisition unit 301 may correspondingly acquire only the first text information to reduce the data processing amount; or it may determine only whether the target video is the clip class from the second text information alone, in which case the acquisition unit 301 may acquire only the second text information, again reducing the data processing amount; or, of course, the specific type of the target video may be pinned down by combining the first text information and the second text information. It can be understood that in the first two cases a single judgment is performed using only one kind of text information, while in the third case a first judgment may be made with one of the two, after which the result determines whether and how to make a second judgment. The specific method of judging with the first or second text information can be the same in either role; all of these are implementations of the present disclosure and fall within its protection scope.
In some embodiments, for a scheme using both the first text information and the second text information, the determining unit 302 may optionally first determine the type of the target video according to the first text information, that is, perform the first judgment with the first text information. Since the data volume of the first text information is small, it can be processed preferentially: if the type of the target video can already be determined from it, the second text information need not be processed further, which reduces the computational load and improves data processing efficiency.
Specifically, a plurality of target keywords indicating content expansion videos may be selected in advance, and the first text information compared against them. If the first text information contains target keywords and their number is greater than or equal to a set amount, the type of the target video is determined to be the content expansion class; otherwise, the specific type of the target video cannot yet be determined, and further judgment is required. For the case where the content expansion class is the bonus-footage class, the target keywords may include, for example, "bonus footage", "trivia", "behind the scenes", "pre-shoot", and "Easter egg".
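A minimal sketch of this keyword check; the keyword list is illustrative (English renderings of the examples above), and the function name is an assumption:

```python
TARGET_KEYWORDS = ("bonus footage", "trivia", "behind the scenes",
                   "pre-shoot", "easter egg")

def is_content_expansion(first_text: str, set_amount: int = 1) -> bool:
    text = first_text.lower()
    hits = sum(1 for kw in TARGET_KEYWORDS if kw in text)
    # The set amount tunes strictness: 1 flags any keyword occurrence.
    return hits >= set_amount
```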
The set amount can be tuned to adjust the strictness of the classification; it may be set to 1, for example, so that the target video is considered the bonus-footage class as long as the first text information contains any target keyword. Specifically, the target video may include at least one video (e.g., a short video), and accordingly the first text information is obtained based on the cover frame of that at least one video. In other words, the target video may be a single video or a collection of multiple videos. When the target video is a collection, the first text information is the aggregate of the text information obtained from the cover frames of all short videos in the collection.
Alternatively, the set amount may be a fixed value, which simplifies the scheme, reduces the amount of calculation, and reduces missed detections of bonus-footage short videos. Still taking a set amount of 1 as an example: as long as one video in the collection contains a target keyword, the type of the collection is determined to be the bonus-footage class. The set amount may also be adjusted according to the number of videos the target video contains, for example in a tiered manner where a collection with more videos gets a larger set amount, so as to reduce misjudgments. In this case, with an appropriately chosen set amount, a collection containing only a very small number of bonus-footage videos can be determined not to be the bonus-footage class.
Alternatively, when the first judgment cannot determine the type of the target video, two situations arise: either it can be determined that the type is not the bonus-footage class, or no conclusion can yet be drawn about whether it is. For the first situation, the determining unit 302 may determine the type of the target video according to the first and second text information together: the first text information establishes that the type is not the bonus-footage class but the clip class or the narration class, and the second text information then decides between the clip class and the narration class. In the second situation, for example with a set amount of 2 or more, if the first text information contains one target keyword, it cannot be concluded that the type is the bonus-footage class, yet the presence of the keyword leaves that possibility open; here the handling may be made consistent with the first situation and the second text information analyzed further. In that further analysis, depending on the actual case, the type may be determined to be the clip class directly from the second text information, that is, the determining unit 302 determines the type from the second text information alone; or it may be determined not to be the clip class but the narration class or the bonus-footage class, in which case the fact that the first text information contains a target keyword is combined with this result to conclude that the type is the bonus-footage class, that is, the determining unit 302 determines the type from the first and second text information together.
In some embodiments, optionally, the determining unit 302 may extract feature data of the second text information, the feature data including a personal-pronoun feature and a speech-rate feature, and determine from this feature data whether the type of the target video is a content presentation class (e.g., a clip class). Compared with clip videos, the creator of a narration video speaks fast in order to explain a movie plot in a few minutes, whereas a bonus-footage video contains the creator's voice at a normal speech rate. In addition, narration and bonus commentary are generally delivered from a third-person perspective and rarely use first- or second-person pronouns, while most speech in clip videos appears as in-drama dialogue, which uses first- and second-person pronouns frequently. Based on this, the determining unit 302 may be configured to: determine the type of the target video as the content presentation class if the personal-pronoun feature and the speech-rate feature simultaneously satisfy a dialogue condition; and determine it as the content expansion class or the third-party explanation class if they do not, where the dialogue condition means that a certain number of first- and second-person pronouns are present in the second text information and the speech rate corresponding to the second text information is slow, i.e., below a set speech rate. For the case where the dialogue condition is only partially satisfied, the two features may be treated as not satisfying the condition, or as satisfying it, or the first text information may be brought in for further analysis; the present disclosure is not limited in this respect. As to how to judge whether the feature data satisfies the dialogue condition, the personal-pronoun feature and the speech-rate feature may be quantized and the judgment made from the relation between the quantized dialogue value and a dialogue threshold. During quantization, thresholds may be set separately for the two features (for example, the count of first- and second-person pronouns must reach a set threshold and the speech rate must reach the set speech rate) and the features analyzed individually, or the quantized features may be aggregated into a dialogue value whose analysis decides whether the dialogue condition is satisfied.
Taking the aggregated quantization as an example: with a single dialogue threshold, if the dialogue value is greater than or equal to the threshold, the personal-pronoun feature and the speech-rate feature are considered to satisfy the dialogue condition simultaneously and the target video is determined to be a clip video; otherwise the condition is not satisfied and the target video is determined to be a narration or bonus-footage video instead. If the dialogue threshold comprises a first dialogue threshold and a smaller second dialogue threshold, then a dialogue value at or above the first threshold means the condition is fully satisfied and the target video is a clip video; a dialogue value below the second threshold means the condition is not satisfied and the target video is a narration or bonus-footage video; and a dialogue value between the two thresholds means the condition is partially satisfied, in which case the first text information can be brought in for further analysis. These are implementations of the present exemplary embodiment and are not limiting. By extracting the personal-pronoun feature and the speech-rate feature of the second text information as feature data, clip videos can be reliably distinguished from narration and bonus-footage videos.
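A minimal sketch of the two-threshold decision just described, assuming the dialogue value has already been quantized from the personal-pronoun and speech-rate features; both threshold values are hypothetical placeholders:

```python
FIRST_THRESHOLD = 0.8   # hypothetical first (higher) dialogue threshold
SECOND_THRESHOLD = 0.4  # hypothetical second (lower) dialogue threshold

def classify_by_dialogue_value(dialogue_value: float) -> str:
    if dialogue_value >= FIRST_THRESHOLD:
        return "clip"                   # dialogue condition fully satisfied
    if dialogue_value < SECOND_THRESHOLD:
        return "narration-or-bonus"     # dialogue condition not satisfied
    # Partially satisfied: defer to the first text information.
    return "undetermined"
```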
Further, when the target video includes at least two videos, the determining unit 302 may be configured to determine the type of the target video as the content presentation class if the personal-pronoun features and speech-rate features of the at least two videos simultaneously satisfy the dialogue condition. Comprehensively analyzing all the videos contained in the target video improves classification accuracy. For this case the dialogue condition may be adapted appropriately: each video may be required to individually satisfy the single-video condition, or a certain proportion of the videos may be required to satisfy it, or the quantized personal-pronoun and speech-rate features of the at least two videos may be aggregated into a statistical value from which the type of the target video is determined, referring to the quantization analysis described above for a single video.
To summarize, when the type of the target video cannot be determined from the first text information alone, there are four cases. First, the first text information establishes that the target video is not the bonus-footage class, i.e., that it is the clip class or the narration class, and the second text information then decides which of the two. Second, no conclusion can be drawn from the first text information, but the second text information establishes that the target video is the clip class. Third, no conclusion can be drawn from the first text information, and the case is handled as in the first case, i.e., as if the target video were not the bonus-footage class. Fourth, no conclusion can be drawn from the first text information, the second text information establishes that the target video is not the clip class, and, combining this with the fact that the first text information leaves open the possibility of a bonus-footage video, the target video is determined to be the bonus-footage class.
For the scheme using both the first text information and the second text information, in other embodiments the determining unit 302 may optionally first determine the type of the target video according to the second text information, and, when the type cannot be determined from the second text information, determine the type according to the first and second text information together or according to the first text information alone. Specifically, whether the target video is the clip class may be determined from the second text information; if so, the judgment is complete, and if not, further analysis is required, again in four cases. First, the second text information establishes that the target video is not the clip class, i.e., that it is the narration class or the bonus-footage class, and the first text information then decides which of the two. Second, no conclusion can be drawn from the second text information, for example because many first- and second-person pronouns are present but the speech rate is high, or only a few pronouns are present but the speech rate is normal, so that the dialogue condition is only partially satisfied and the target video may still be a clip video; this is nevertheless handled as in the first case. Third, no conclusion can be drawn from the second text information and the target video may be a clip video, but the first text information establishes that it is a bonus-footage video. Fourth, no conclusion can be drawn from the second text information and the target video may be a clip video, the first text information establishes that it is not a bonus-footage video, and, combining the two, the target video is determined to be a clip video.
Specifically, the determining unit 302 may count the number of first- and second-person pronouns appearing in a specified period of the second text information as the personal-pronoun feature: the more such pronouns, the less likely the target video is a narration video and the more likely it is a clip or bonus-footage video. The determining unit 302 may also count the duration of text appearing continuously within the specified period (the text duration), and/or the ratio of the number of words to that duration (the text density), as the speech-rate feature. The text duration reflects whether speech is present in the target video for a long stretch; the larger it is, the more likely the target video is a narration video. The text density reflects the speech rate of the audio; the faster the speech, the more likely the target video is a narration video.
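A minimal sketch of counting these three features from a timestamped ASR transcript. The English pronoun list is illustrative (the original text operates on Chinese), and the segment format is an assumption:

```python
import re

# English pronouns for illustration; a Chinese pipeline would match 我/你 etc.
PRONOUNS = re.compile(r"\b(i|me|my|we|us|our|you|your|yours)\b", re.IGNORECASE)

def second_text_features(segments, period_end=None):
    """segments: list of (start_sec, end_sec, text) tuples from ASR.
    period_end: count only segments starting before this time;
    None means the full video duration."""
    in_period = [s for s in segments if period_end is None or s[0] < period_end]
    text = " ".join(t for _, _, t in in_period)
    pronoun_count = len(PRONOUNS.findall(text))           # pronoun feature
    text_duration = sum(end - start for start, end, _ in in_period)
    # Text density: words per second of speech, reflecting the speech rate.
    text_density = len(text.split()) / text_duration if text_duration else 0.0
    return pronoun_count, text_duration, text_density
```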
Fig. 4 is a block diagram of an electronic device according to an example embodiment of the present disclosure.
Referring to fig. 4, the electronic device 400 comprises at least one memory 401 and at least one processor 402, the at least one memory 401 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 402, perform a video classification method according to an exemplary embodiment of the present disclosure.
By way of example, the electronic device 400 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above instruction set. The electronic device 400 need not be a single electronic device; it can be any collection of devices or circuits that can execute the above instructions (or instruction sets) individually or jointly. The electronic device 400 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with local or remote systems (e.g., via wireless transmission).
In the electronic device 400, the processor 402 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 402 may execute instructions or code stored in the memory 401, wherein the memory 401 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 401 may be integrated with the processor 402, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 401 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 401 and the processor 402 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., so that the processor 402 can read files stored in the memory.
In addition, the electronic device 400 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of electronic device 400 may be connected to each other via a bus and/or a network.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium, wherein when the instructions in the computer-readable storage medium are executed by at least one processor, the at least one processor is caused to perform the video classification method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a solid-state disk, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server; further, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an embodiment of the present disclosure, there may also be provided a computer program product comprising computer instructions which, when executed by at least one processor, cause the at least one processor to perform a video classification method according to the present disclosure.
According to the video classification method, the video classification apparatus, the electronic device, and the computer-readable storage medium of the present disclosure, text data can be used, instead of image data, as the input parameter for video classification; the text efficiently reflects the information content of the video domain, overcomes the problem of inconsistency between image-domain and video-domain information, and improves the accuracy of the classification result. Moreover, text information has a small data volume and is convenient to analyze, which sufficiently reduces the data processing amount, lowers memory occupation, and increases processing speed, facilitating the popularization of the application scenarios of the video classification method and apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.