Video detection method, video detection device, and computer-readable storage medium
1. A video detection method, comprising:
acquiring a video to be detected;
performing video recall from a video resource library based on the video to be detected, to obtain a video to be compared;
separating audio information corresponding to the video to be detected to obtain audio to be detected, and extracting features of the audio to be detected with respect to an audio characteristic to obtain an audio fingerprint to be detected, wherein the audio characteristic is an auditory property of the audio information;
separating audio information corresponding to the video to be compared to obtain audio to be compared, and extracting features of the audio to be compared with respect to the audio characteristic to obtain an audio fingerprint to be compared; and
comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and determining, based on a comparison result, a video detection result of the video to be detected with respect to the video to be compared, wherein the video detection result indicates whether the video to be detected is a duplicate video with respect to the video to be compared.
2. The method according to claim 1, wherein the extracting features of the audio to be detected with respect to the audio characteristic to obtain the audio fingerprint to be detected comprises:
extracting multiple frames of sub-audio to be detected from the audio to be detected based on a preset frame unit;
extracting features of each frame of sub-audio to be detected, among the multiple frames of sub-audio to be detected, with respect to the audio characteristic, to obtain an initial sub-audio fingerprint to be detected; and
performing dimensionality reduction on the initial sub-audio fingerprint to be detected to obtain a sub-audio fingerprint to be detected, thereby obtaining multiple frames of sub-audio fingerprints to be detected corresponding to the audio to be detected, wherein the audio fingerprint to be detected comprises the multiple frames of sub-audio fingerprints to be detected.
3. The method according to claim 2, wherein before the extracting multiple frames of sub-audio to be detected from the audio to be detected based on the preset frame unit, the method further comprises:
performing pre-emphasis processing on the audio to be detected to obtain audio to be framed; and
the extracting multiple frames of sub-audio to be detected from the audio to be detected based on the preset frame unit comprises:
extracting the multiple frames of sub-audio to be detected from the audio to be framed based on the preset frame unit.
4. The method according to claim 2, wherein the extracting multiple frames of sub-audio to be detected from the audio to be detected based on the preset frame unit comprises:
sampling the audio to be detected based on a preset sampling frequency to obtain a plurality of sampling points; and
starting from the first sampling point, selecting a preset number of sampling points in sequence to form one frame of sub-audio to be detected, and then, starting from a position a preset number of overlapping sampling points before the end of the previous selection, selecting the preset number of sampling points again to form the next frame of sub-audio to be detected, until the plurality of sampling points have all been processed, thereby obtaining the multiple frames of sub-audio to be detected, wherein the preset frame unit is determined based on the preset sampling frequency and the preset number of sampling points.
5. The method according to any one of claims 2 to 4, wherein the extracting features of each frame of sub-audio to be detected, among the multiple frames of sub-audio to be detected, with respect to the audio characteristic to obtain the initial sub-audio fingerprint to be detected comprises:
performing windowing on each frame of sub-audio to be detected among the multiple frames of sub-audio to be detected, to obtain sub-audio to be converted;
converting the sub-audio to be converted into an energy distribution in the frequency domain to obtain a sub-spectrum to be detected, and acquiring a power spectrum of the sub-spectrum to be detected to obtain a sub-power spectrum to be detected;
smoothing the sub-power spectrum to be detected to obtain a sub-smoothed power spectrum;
performing an inverse transform on the logarithmic energy of the sub-smoothed power spectrum, and acquiring audio characteristic parameters of a preset order from the inverse transform result; and
acquiring a differential parameter of the audio characteristic parameters and the frame energy of each frame of sub-audio to be detected, thereby obtaining the initial sub-audio fingerprint to be detected, which comprises one or more of the audio characteristic parameters, the differential parameter, and the frame energy.
6. The method according to claim 4, wherein the performing dimensionality reduction on the initial sub-audio fingerprint to be detected to obtain the sub-audio fingerprint to be detected comprises:
removing the lowest-frequency feature of each initial sampling-point audio fingerprint in the initial sub-audio fingerprint to be detected to obtain S-1 dimensional features, wherein the initial sub-audio fingerprint to be detected comprises initial sampling-point audio fingerprints of the preset number of sampling points, each initial sampling-point audio fingerprint comprises S-dimensional features, and S is a positive integer greater than 1;
performing clustering-based dimensionality reduction on the S-1 dimensional features based on a preset number of categories, to obtain cluster categories of the preset number; and
determining cluster center information of the cluster categories as the sampling-point audio fingerprint of each initial sampling-point audio fingerprint, thereby obtaining the sub-audio fingerprint to be detected corresponding to the initial sub-audio fingerprint to be detected, wherein the sub-audio fingerprint to be detected comprises the sampling-point audio fingerprints of the preset number of sampling points.
7. The method according to any one of claims 1 to 4, wherein the comparing the audio fingerprint to be detected with the audio fingerprint to be compared and determining the video detection result of the video to be detected with respect to the video to be compared based on the comparison result comprises:
comparing each frame of sub-audio fingerprint to be detected in the audio fingerprint to be detected, one by one, with each frame of sub-audio fingerprint to be compared in the audio fingerprint to be compared, to obtain a comparison result corresponding to each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared;
when preset pattern information exists in the comparison result, determining that the video detection result is that the video to be detected is a duplicate video with respect to the video to be compared, wherein the preset pattern information is a similarity trend between each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared; and
when the preset pattern information does not exist in the comparison result, determining that the video detection result is that the video to be detected is a non-duplicate video with respect to the video to be compared.
8. The method according to claim 7, wherein after the obtaining the comparison result corresponding to each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared, the method further comprises:
constructing a similarity matrix by taking each frame of sub-audio fingerprint to be detected as one dimension of the matrix, taking each frame of sub-audio fingerprint to be compared as the other dimension of the matrix, and taking the comparison results as the elements of the matrix;
converting the similarity matrix into a similarity matrix map based on a preset correspondence between similarity values and display colors;
when the color difference between each display color at the diagonal positions of the similarity matrix map and a preset color is smaller than a color difference threshold, determining that the preset pattern information exists in the comparison result; and
when the color difference between each display color at the diagonal positions of the similarity matrix map and the preset color is not smaller than the color difference threshold, determining that the preset pattern information does not exist in the comparison result.
9. The method according to any one of claims 1 to 4, wherein the performing video recall from the video resource library based on the video to be detected to obtain the video to be compared comprises:
acquiring a video recall feature corresponding to the video to be detected, wherein the video recall feature comprises one or more of a content semantic feature, a text semantic feature, a title semantic feature, a body-text semantic feature, a frame-image semantic feature, and a cover-image semantic feature;
acquiring a feature to be recalled corresponding to each video in the video resource library, wherein the features to be recalled correspond in feature type to the video recall feature;
determining, from the features to be recalled, a target feature to be recalled that is similar to the video recall feature, based on recall similarity values between the video recall feature and the respective features to be recalled, wherein the recall similarity values comprise one or more of a Euclidean distance, a vector dot-product value, and a cosine similarity value; and
taking the video in the video resource library corresponding to the target feature to be recalled as a recalled video, thereby obtaining the video to be compared from among the recalled videos.
10. The method according to claim 9, wherein the determining, from the features to be recalled, the target feature to be recalled that is similar to the video recall feature based on the recall similarity values between the video recall feature and the respective features to be recalled comprises:
acquiring a recall feature index corresponding to the video recall feature and to-be-recalled feature indexes corresponding to the features to be recalled, wherein the features to be recalled are in one-to-one correspondence with the to-be-recalled feature indexes; and
taking the degree of matching between the recall feature index and each to-be-recalled feature index as the recall similarity value, and determining, from the features to be recalled, the target feature to be recalled that is similar to the video recall feature based on the recall similarity values.
11. The method according to any one of claims 1 to 4, wherein the acquiring the video to be detected comprises:
receiving a video detection request sent by a task scheduling device, wherein the video detection request is generated by the task scheduling device in response to a video upload request sent by a video production device; and
acquiring the video to be detected from a content storage device in response to the video detection request.
12. The method according to any one of claims 1 to 4, wherein after the comparing the audio fingerprint to be detected with the audio fingerprint to be compared and determining the video detection result of the video to be detected with respect to the video to be compared based on the comparison result, the method further comprises:
when the video detection result is that the video to be detected is a duplicate video with respect to the video to be compared, sending the video detection result to a subsequent detection device, so that the subsequent detection device generates a subsequent detection request for the video detection result and, in response to the subsequent detection request, obtains a target detection result of the video to be detected.
13. The method according to any one of claims 1 to 4, wherein after the comparing the audio fingerprint to be detected with the audio fingerprint to be compared and determining the video detection result of the video to be detected with respect to the video to be compared based on the comparison result, the method further comprises:
when the video detection result is that the video to be detected is a non-duplicate video with respect to the video to be compared, sending the video to be detected to a task scheduling device, so that the task scheduling device pushes the video to be detected to a content consumption device through a content distribution device based on acquired recommendation information, and the content consumption device plays the video to be detected.
14. A video detection device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 13 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, implement the method of any one of claims 1 to 13.
Background
With the rapid development of social networks, video is becoming one of the dominant content modalities of the mobile internet. Because video offers strong user participation and high transmission value, the volume of uploaded video keeps growing; videos therefore need to be audited quickly so that they can be released.
Generally, to review videos, artificial intelligence techniques are used to detect videos in terms of their pictures and titles, so as to determine whether a detected video and an uploaded video constitute duplicates. However, because this detection relies on video pictures and titles, videos such as a "lecture series" — which are similar in picture and title — are often mistakenly determined to be duplicates. The accuracy of such video detection is therefore low.
Disclosure of Invention
Embodiments of the present application provide a video detection method, a video detection device, and a computer-readable storage medium, which can improve the accuracy of video detection.
The technical solutions of the embodiments of the present application are implemented as follows:
the embodiment of the application provides a video detection method, which comprises the following steps:
acquiring a video to be detected;
based on the video to be detected, video recall is carried out from a video resource library to obtain a video to be compared;
separating audio information corresponding to the video to be detected to obtain audio to be detected, and extracting the characteristics of the audio to be detected on audio characteristics to obtain an audio fingerprint to be detected, wherein the audio characteristics are the characteristics of the audio information on hearing;
separating audio information corresponding to the video to be compared to obtain audio to be compared, and extracting the characteristics of the audio to be compared on the audio characteristics to obtain an audio fingerprint to be compared;
and comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and determining a video detection result of the video to be detected aiming at the video to be compared based on a comparison result, wherein the video detection result is a detection result of whether the video to be detected is a repeated video aiming at the video to be compared.
An embodiment of the present application provides a video detection apparatus, comprising:
a video acquisition module for acquiring a video to be detected;
a video recall module for performing video recall from a video resource library based on the video to be detected, to obtain a video to be compared;
a feature acquisition module for separating audio information corresponding to the video to be detected to obtain audio to be detected, and extracting features of the audio to be detected with respect to an audio characteristic to obtain an audio fingerprint to be detected, wherein the audio characteristic is an auditory property of the audio information;
the feature acquisition module being further for separating audio information corresponding to the video to be compared to obtain audio to be compared, and extracting features of the audio to be compared with respect to the audio characteristic to obtain an audio fingerprint to be compared; and
a video detection module for comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and determining, based on a comparison result, a video detection result of the video to be detected with respect to the video to be compared, wherein the video detection result indicates whether the video to be detected is a duplicate video with respect to the video to be compared.
In this embodiment of the application, the feature acquisition module is further configured to extract multiple frames of sub-audio to be detected from the audio to be detected based on a preset frame unit; extract features of each frame of sub-audio to be detected, among the multiple frames of sub-audio to be detected, with respect to the audio characteristic, to obtain an initial sub-audio fingerprint to be detected; and perform dimensionality reduction on the initial sub-audio fingerprint to be detected to obtain a sub-audio fingerprint to be detected, thereby obtaining multiple frames of sub-audio fingerprints to be detected corresponding to the audio to be detected, wherein the audio fingerprint to be detected comprises the multiple frames of sub-audio fingerprints to be detected.
In this embodiment of the application, the feature acquisition module is further configured to perform pre-emphasis processing on the audio to be detected to obtain audio to be framed.
In this embodiment of the application, the feature acquisition module is further configured to extract the multiple frames of sub-audio to be detected from the audio to be framed based on the preset frame unit.
In this embodiment of the application, the feature acquisition module is further configured to sample the audio to be detected based on a preset sampling frequency to obtain a plurality of sampling points; and, starting from the first sampling point, select a preset number of sampling points in sequence to form one frame of sub-audio to be detected, then, starting from a position a preset number of overlapping sampling points before the end of the previous selection, select the preset number of sampling points again to form the next frame of sub-audio to be detected, until the plurality of sampling points have all been processed, thereby obtaining the multiple frames of sub-audio to be detected, wherein the preset frame unit is determined based on the preset sampling frequency and the preset number of sampling points.
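The overlapped framing described above can be illustrated with a minimal sketch; the sampling frequency, frame length, and overlap below are assumed values for illustration only, since the embodiments merely require them to be preset:

```python
import numpy as np

def frame_audio(samples: np.ndarray, frame_len: int = 400, overlap: int = 160) -> np.ndarray:
    """Split sampled audio into overlapping frames of sub-audio.

    samples:   1-D array of sampling points (e.g. from an assumed 16 kHz sampling frequency).
    frame_len: preset number of sampling points per frame (assumed value).
    overlap:   preset number of overlapping sampling points between frames (assumed value).
    """
    hop = frame_len - overlap  # each new frame starts `overlap` points before the previous frame ends
    frames = [samples[start:start + frame_len]
              for start in range(0, len(samples) - frame_len + 1, hop)]
    return np.stack(frames) if frames else np.empty((0, frame_len))
```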
In this embodiment of the application, the feature acquisition module is further configured to perform windowing on each frame of sub-audio to be detected among the multiple frames of sub-audio to be detected to obtain sub-audio to be converted; convert the sub-audio to be converted into an energy distribution in the frequency domain to obtain a sub-spectrum to be detected, and acquire a power spectrum of the sub-spectrum to be detected to obtain a sub-power spectrum to be detected; smooth the sub-power spectrum to be detected to obtain a sub-smoothed power spectrum; perform an inverse transform on the logarithmic energy of the sub-smoothed power spectrum, and acquire audio characteristic parameters of a preset order from the inverse transform result; and acquire a differential parameter of the audio characteristic parameters and the frame energy of each frame of sub-audio to be detected, thereby obtaining the initial sub-audio fingerprint to be detected, which comprises one or more of the audio characteristic parameters, the differential parameter, and the frame energy.
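Read as an MFCC-style pipeline — one plausible interpretation of the windowing, frequency-domain conversion, smoothing, and inverse transform of the logarithmic energy described above — the per-frame extraction can be sketched as follows. The FFT size, the Mel filterbank, and the preset order are assumptions; the differential parameter would then be the frame-to-frame difference of the resulting coefficients (e.g. np.diff along the frame axis):

```python
import numpy as np
from scipy.fftpack import dct

def frame_fingerprint(frame: np.ndarray, mel_fb: np.ndarray, order: int = 13):
    """One frame of sub-audio -> (characteristic parameters, frame energy).

    frame:  one frame of sub-audio to be detected (1-D array of samples).
    mel_fb: precomputed Mel triangular filterbank, shape (n_filters, 257) for a 512-point FFT.
    order:  preset order of audio characteristic parameters to keep (assumed value).
    """
    windowed = frame * np.hamming(len(frame))       # windowing
    spectrum = np.fft.rfft(windowed, n=512)         # energy distribution in the frequency domain
    power = np.abs(spectrum) ** 2 / len(frame)      # power spectrum
    smoothed = mel_fb @ power                       # smoothing with the Mel filterbank
    cepstrum = dct(np.log(smoothed + 1e-10),        # inverse transform of the logarithmic energy
                   type=2, norm='ortho')
    frame_energy = float(np.sum(windowed ** 2))     # frame energy
    return cepstrum[:order], frame_energy           # preset-order characteristic parameters
```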
In this embodiment of the application, the feature acquisition module is further configured to remove the lowest-frequency feature of each initial sampling-point audio fingerprint in the initial sub-audio fingerprint to be detected to obtain S-1 dimensional features, where the initial sub-audio fingerprint to be detected comprises initial sampling-point audio fingerprints of the preset number of sampling points, each initial sampling-point audio fingerprint comprises S-dimensional features, and S is a positive integer greater than 1; perform clustering-based dimensionality reduction on the S-1 dimensional features based on a preset number of categories to obtain cluster categories of the preset number; and determine cluster center information of the cluster categories as the sampling-point audio fingerprint of each initial sampling-point audio fingerprint, thereby obtaining the sub-audio fingerprint to be detected corresponding to the initial sub-audio fingerprint to be detected, where the sub-audio fingerprint to be detected comprises the sampling-point audio fingerprints of the preset number of sampling points.
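A minimal sketch of this clustering-based dimensionality reduction, with scikit-learn's KMeans standing in for the clustering step; the preset number of categories is an assumed value:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_fingerprint(initial_fp: np.ndarray, n_categories: int = 8) -> np.ndarray:
    """initial_fp: (n_points, S) array, one S-dimensional fingerprint per sampling point."""
    trimmed = initial_fp[:, 1:]  # remove the lowest-frequency feature -> S-1 dimensions
    km = KMeans(n_clusters=n_categories, n_init=10).fit(trimmed)
    # replace each sampling point's fingerprint with the information of its cluster center
    return km.cluster_centers_[km.labels_]
```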
In this embodiment of the application, the video detection module is further configured to compare each frame of sub-audio fingerprint to be detected in the audio fingerprint to be detected, one by one, with each frame of sub-audio fingerprint to be compared in the audio fingerprint to be compared, to obtain a comparison result corresponding to each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared; when preset pattern information exists in the comparison result, determine that the video detection result is that the video to be detected is a duplicate video with respect to the video to be compared, where the preset pattern information is a similarity trend between each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared; and when the preset pattern information does not exist in the comparison result, determine that the video detection result is that the video to be detected is a non-duplicate video with respect to the video to be compared.
In this embodiment of the application, the video detection module is further configured to construct a similarity matrix by taking each frame of sub-audio fingerprint to be detected as one dimension of the matrix, taking each frame of sub-audio fingerprint to be compared as the other dimension of the matrix, and taking the comparison results as the elements of the matrix; convert the similarity matrix into a similarity matrix map based on a preset correspondence between similarity values and display colors; when the color difference between each display color at the diagonal positions of the similarity matrix map and a preset color is smaller than a color difference threshold, determine that the preset pattern information exists in the comparison result; and when the color difference is not smaller than the color difference threshold, determine that the preset pattern information does not exist in the comparison result.
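Because mapping similarity values to display colors and then measuring the color difference along the diagonal against a preset color is equivalent to thresholding the diagonal similarity values, the check can be sketched numerically; the threshold is an assumed preset:

```python
import numpy as np

def has_similarity_trend(sim: np.ndarray, threshold: float = 0.8) -> bool:
    """sim[i, j] is the comparison result between detected frame i and compared frame j.

    A diagonal close to the preset color in the similarity matrix map
    corresponds to consistently high similarity values along the diagonal.
    """
    return bool(np.all(np.diagonal(sim) >= threshold))
```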
In this embodiment of the application, the video recall module is further configured to acquire a video recall feature corresponding to the video to be detected, where the video recall feature includes one or more of a content semantic feature, a text semantic feature, a title semantic feature, a body-text semantic feature, a frame-image semantic feature, and a cover-image semantic feature; acquire a feature to be recalled corresponding to each video in the video resource library, where the features to be recalled correspond in feature type to the video recall feature; determine, from the features to be recalled, a target feature to be recalled that is similar to the video recall feature, based on recall similarity values between the video recall feature and the respective features to be recalled, where the recall similarity values include one or more of a Euclidean distance, a vector dot-product value, and a cosine similarity value; and take the video in the video resource library corresponding to the target feature to be recalled as a recalled video, thereby obtaining the video to be compared from among the recalled videos.
In this embodiment of the application, the video recall module is further configured to acquire a recall feature index corresponding to the video recall feature and to-be-recalled feature indexes corresponding to the features to be recalled, where the features to be recalled are in one-to-one correspondence with the to-be-recalled feature indexes; and take the degree of matching between the recall feature index and each to-be-recalled feature index as the recall similarity value, and determine, from the features to be recalled, the target feature to be recalled that is similar to the video recall feature based on the recall similarity values.
In this embodiment of the present application, the video obtaining module is further configured to receive a video detection request sent by a task scheduling device, where the video detection request is generated by the task scheduling device in response to a video upload request sent by a video production end device; and responding to the video detection request, and acquiring the video to be detected from a content storage device.
In this embodiment of the application, the video detection apparatus further includes a result processing module configured to, when the video detection result is that the video to be detected is a duplicate video with respect to the video to be compared, send the video detection result to a subsequent detection device, so that the subsequent detection device generates a subsequent detection request for the video detection result and, in response to the subsequent detection request, obtains a target detection result of the video to be detected.
In this embodiment of the application, the result processing module is further configured to, when the video detection result is that the video to be detected is a non-duplicate video with respect to the video to be compared, send the video to be detected to the task scheduling device, so that the task scheduling device pushes the video to be detected to the content consumption device through the content distribution device based on the acquired recommendation information, and the content consumption device plays the video to be detected.
The embodiment of the application provides a video detection device, including:
a memory for storing executable instructions;
a processor for implementing the video detection method provided by the embodiments of the present application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the video detection method provided by the embodiments of the present application.
Embodiments of the present application have at least the following beneficial effects: when the video to be detected is compared with the recalled video to be compared, whether the video to be detected is a duplicate video is determined by comparing the features of the audio information of the video to be detected with the features of the audio information of the video to be compared, with respect to an audio characteristic. The audio characteristic captures auditory properties of the audio information, such as volume, sound quality, and timbre, in which videos such as a "lecture series" differ even when their pictures and titles are similar. It is therefore possible to accurately identify whether videos with similar pictures or titles, such as a "lecture series", are duplicates, which improves the accuracy of video detection.
Drawings
FIG. 1 is a schematic diagram of an exemplary video detection process;
fig. 2 is a schematic diagram of an alternative architecture of a video detection system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of the server in fig. 2 according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an alternative video detection method provided in the embodiment of the present application;
fig. 5 is a schematic flow chart of another alternative video detection method provided in the embodiment of the present application;
fig. 6 is a schematic diagram illustrating an alternative interaction flow of a video detection method according to an embodiment of the present application;
FIG. 7 is a block diagram illustrating components of an exemplary video detection system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an exemplary process for obtaining an audio fingerprint according to an embodiment of the present application;
FIG. 9 is a diagram illustrating an exemplary linear relationship between Mel frequency and audio frequency provided by an embodiment of the present application;
FIG. 10 is a diagram illustrating the filtering results of an exemplary set of Mel-scale triangular filters provided by embodiments of the present application;
FIG. 11 is a schematic diagram of an exemplary similarity matrix map provided by an embodiment of the present application;
fig. 12 is a schematic diagram of another exemplary similarity matrix map provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms and expressions referred to in the embodiments are explained as follows.
1) Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
2) Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and the like. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills, and how to reorganize existing knowledge structures to keep improving performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied across all fields of artificial intelligence. Machine learning generally includes techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
3) An artificial Neural Network is a mathematical model simulating the structure and function of a biological Neural Network, and exemplary structures of the artificial Neural Network in the embodiment of the present application include Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and the like.
4) "In response to" indicates dependence on a condition or state; when the dependent condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are executed.
5) A feed (also called a message source, web feed, news feed, or syndicated source) is a data format through which a website distributes its latest information to users; a feed stream is thus a continuously updated information stream presented to users. Feed streams are usually arranged along a timeline, which is the most basic display form of a feed stream. A prerequisite for a user to subscribe to a website is that the website provides a message source. Combining feeds is called aggregation, and the software that does so is called an aggregator; aggregators are tools dedicated to subscribing to website information, such as RSS (Really Simple Syndication) readers, feed readers, and news readers.
6) MCN (Multi-Channel Network) is a model that combines PGC content with strong capital support to guarantee continuous content output, ultimately realizing stable monetization. That is, an MCN helps content producers (including the video production device in the embodiments of the present application) concentrate on content creation, while handling platform integration, fan packaging, promotion, and monetization.
7) PGC (Professionally Generated Content, e.g., on video websites; also called PPC, Professionally-Produced Content) refers to content produced by professionals or experts (e.g., on microblogs), characterized by personalized content, diversified viewpoints, democratized participation, and virtualized social relationships.
8) Short video is an internet content distribution format, generally referring to video content of under five minutes distributed on new internet media. It is typically high-frequency pushed content lasting from a few seconds to a few minutes, played on various new-media platforms, and suited to viewing on mobile devices during short periods of leisure. Its content spans skill sharing, humor, fashion trends, social hotspots, street interviews, public education, advertising creativity, business customization, and other topics. Because short videos are short, they can be published individually or as a series of columns. Unlike micro-films and live streaming, short-video production does not require a specific form of expression or team configuration; it has a simple production process, a low production threshold, and strong participation, and has more spreading value than live streaming; the advent of short video has also enriched native advertising in new media. Short video is therefore gradually becoming one of the dominant content forms of the mobile internet, partially replacing the consumption of image-and-text content and playing a leading role on news and social platforms. Short videos are usually displayed as a feed stream for users to refresh quickly; for example, the News Feed on the Facebook front page is a novel aggregator, where a feed consists of the publicly published updates of friends or followed accounts, and having many active connections produces continuously updated content; microblogs and "qq" viewpoint products are similar. In the embodiments of the present application, the video to be detected includes short videos.
9) Videos are content recommended for consumption by users of a content consumption device, and include vertical small videos, horizontal short videos, and the like; they generally come from PGC, MCN, or UGC (User-Generated Content) and are provided in the form of feed streams. The content recommended to users of the content consumption device also includes image-and-text posts, usually short posts actively edited and published by self-media through public accounts, including vertical small posts and horizontal short posts.
10) Clustering is an unsupervised classification approach. K-means clustering, for example, is an iteratively solved cluster analysis algorithm comprising the following steps: to divide the data into K groups, first randomly select K objects as initial cluster centers, then compute the distance between each object and each cluster center and assign each object to the nearest cluster center; a cluster center and the objects assigned to it constitute a cluster. After each assignment pass, the cluster center of each cluster is recomputed from the objects currently in it; this process repeats until a termination condition is met. The termination condition may be that no (or a minimum number of) objects are reassigned to different clusters, that no (or a minimum number of) cluster centers change again, or that the sum of squared errors reaches a local minimum.
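A from-scratch sketch of the iterative procedure just described (random initial centers, nearest-center assignment, center recomputation, termination when the centers stop changing):

```python
import numpy as np

def kmeans(data: np.ndarray, k: int, max_iter: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]  # K random initial cluster centers
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        # assign each object to its nearest cluster center
        labels = np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
        # recompute each cluster center from the objects currently assigned to it
        new_centers = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):  # termination: no cluster center changes again
            break
        centers = new_centers
    return centers, labels
```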
11) FFmpeg is a set of open-source computer programs that can be used to record and convert digital audio and video, and to turn them into streams; in the embodiments of the present application, audio information is separated from video by FFmpeg.
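For illustration, the audio track can be separated from a video by invoking FFmpeg from Python; the mono/16 kHz output settings and file paths are assumptions, not requirements of the embodiments:

```python
import subprocess

def separate_audio(video_path: str, audio_path: str) -> None:
    """Extract the audio information from a video file with FFmpeg."""
    subprocess.run(
        ["ffmpeg", "-i", video_path,  # input video
         "-vn",                       # drop the video stream, keep audio only
         "-ac", "1",                  # downmix to mono (assumed setting)
         "-ar", "16000",              # 16 kHz sampling frequency (assumed setting)
         audio_path],
        check=True)
```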
12) Faiss is an approximate nearest-neighbor search library that provides efficient similarity search and clustering for dense vectors and supports large-scale vector retrieval. Given a query vector, Faiss returns a list of database objects that are closest to it in Euclidean distance, or that have the highest vector dot product or the largest cosine similarity value. Internally, Faiss implements clustering and retrieval using key techniques such as parallel computing (OpenMP), heap sorting, vector quantization (the PQ algorithm), inverted indexing, K-means clustering, and principal component analysis.
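A minimal Faiss usage sketch with an exact Euclidean-distance index; the feature dimensionality and the random data are placeholders:

```python
import numpy as np
import faiss  # e.g. the faiss-cpu package

d = 128                                               # assumed dimensionality of recall features
library = np.random.rand(10000, d).astype('float32')  # stand-in for resource-library features
query = np.random.rand(1, d).astype('float32')        # stand-in for the video recall feature

index = faiss.IndexFlatL2(d)              # exact index ranked by Euclidean distance
index.add(library)                        # index the features to be recalled
distances, ids = index.search(query, 10)  # ten nearest candidates to recall
```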
It should be noted that artificial intelligence is a comprehensive technique in computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions.
In addition, artificial intelligence is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, research and applications have developed in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, drones, robots, smart healthcare, and smart customer service; as the technology develops further, artificial intelligence will be applied in more fields and deliver increasingly important value. The embodiments of the present application concern an application of artificial intelligence in the field of video processing.
It should also be noted that social networks originated from the networked society, whose starting point was email. The internet is essentially a network between computers, and early electronic mail (E-mail) solved the problem of remote mail transmission; it remains one of the most popular applications on the internet and was also the starting point of social networking. The BBS (Bulletin Board System) normalized "mass sending" and "forwarding", in principle enabling information to be published to everyone and topics to be discussed, and became a platform for the spontaneously generated content of the early internet. More recently, with the overall popularization of smartphones, ubiquitous Wi-Fi (wireless fidelity) facilities, and generally lower mobile tariffs, and in the strong context of the mobile internet era, users' demand for receiving information is transitioning from the text era to the video era.
At present, as the internet develops rapidly and the threshold of content production keeps falling, the upload volume of videos (including short videos) grows exponentially. These videos come from various content-creation organizations, such as PGC and UGC content from media and institutions, and from viewpoint services based on "qq" public accounts and browsers. Given the large increase in uploaded video, in order to ensure the security and timeliness of distributed content and the interests of the copyright holders of the video content itself, the auditing of video content must be completed in a short time, including the identification and handling of pornography, gambling, and drug-related content, politically sensitive content, content quality, and content security. In addition, to encourage content creation, video platforms offer subsidies and incentive mechanisms for video content; to increase their income, some video content creators upload large quantities of similar videos (simple edits such as changing the title or watermark, adding advertising intros or outros, or modifying the audio by pitch shifting or adjusting the playback speed), or directly copy videos, modify or replace the cover, or slightly trim the content. Such processing crowds out the distribution of content from legitimate accounts and occupies a large amount of traffic, which harms the healthy development of the whole video content ecosystem. It is therefore necessary to audit small videos and short videos accurately in order to complete their distribution.
Generally, to audit videos, it is determined whether titles are similar and whether pictures are similar (including the cover image and the video content). For example, based on the salient features of video titles, an exact string- and regular-expression-matching algorithm automatically extracts video titles to form video signatures represented by regular expressions; when a new video file arrives, the same matching algorithm judges whether it has appeared before, thereby deduplicating network video. Here, when video titles are similar but the pictures are not, the content is different, such as different stages of an accident filmed by different people; when the titles differ but the picture content is the same, the title has probably been modified for redistribution, and the content is considered duplicated. However, there are also cases where the titles are the same and the pictures are the same but the voices differ — for example, a singer performing different songs at a concert, a teacher training different chapters, a broadcaster reading different weather forecasts, or "white and black calligraphy" videos — and these are different videos. When video detection relies mainly on pictures and titles, videos whose pictures and titles are similar but whose audio differs are often mistakenly determined to be duplicates; that is, recognition is ineffective and inefficient for video content with similar pictures and titles but different audio.
In addition, when auditing a video, besides detecting it in terms of title and picture, the video can also be examined based on audio features, such as the "chromaprint" feature. Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary video detection process; as shown in FIG. 1, the exemplary process includes audio extraction 1-1, audio fingerprint extraction 1-2, and audio similarity calculation 1-3. In audio extraction 1-1, audio 1-12 is extracted from a video 1-11 and stored in a storage service 1-13 (for example, COS (Cloud Object Storage)). In audio fingerprint extraction 1-2, the audio 1-12 is first read from the storage service 1-13 and split into overlapping segments 1-21, the overlapping segments 1-21 are converted into a spectrogram 1-22 using the STFT (Short-Time Fourier Transform), and the spectrogram 1-22 is converted into a note map 1-23; then, binarization filtering is applied to the note map 1-23 using filters 1-24 to obtain filtering results 1-25, where the filters 1-24 are trained on audio training samples 1-26 with an "asymmetric pairwise boosting" technique; finally, the audio fingerprints 1-27 of the filtering results 1-25 are obtained and stored in a storage service 1-28 (e.g., the storage service CKV). In audio similarity calculation 1-3, an audio fingerprint pair 1-31 is first read from the storage service 1-28 (one fingerprint of the pair 1-31 is an audio fingerprint 1-27; the other is the audio fingerprint of a video to be compared, obtained in the same way as the audio fingerprints 1-27); then the edit distance of the audio fingerprint pair 1-31 is calculated, and the audio similarity 1-32 is obtained from the edit distance, see formula (1):
similarity=1-d/(l1+l2) (1)
where d is the edit distance between the two fingerprints in the audio fingerprint pair 1-31 (the cost of a substitution operation being 2), l1 and l2 are the feature lengths of the two audio fingerprints in the pair, and similarity is the audio similarity 1-32.
Finally, whether the audio similarity 1-32 is greater than a similarity threshold is judged, so as to obtain, based on the judgment result, a detection result 1-33 indicating whether the video 1-11 and the video to be compared are duplicates.
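For illustration, formula (1) can be computed with a standard dynamic-programming edit distance in which a substitution costs 2 and an insertion or deletion costs 1, treating each fingerprint as a sequence:

```python
def audio_similarity(fp1: str, fp2: str) -> float:
    """similarity = 1 - d / (l1 + l2), per formula (1); substitution cost is 2."""
    l1, l2 = len(fp1), len(fp2)
    prev = list(range(l2 + 1))               # edit distances against an empty prefix
    for i in range(1, l1 + 1):
        cur = [i] + [0] * l2
        for j in range(1, l2 + 1):
            sub = prev[j - 1] + (0 if fp1[i - 1] == fp2[j - 1] else 2)
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         sub)                # match or substitution
        prev = cur
    return 1 - prev[l2] / (l1 + l2)
```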
In the exemplary video detection process described above, the obtained audio fingerprint is a "chromaprint" feature, which segments the signal spectrum mainly according to music theory. In lectures, training sessions, a large number of TV dramas, factual videos, and the like, the audio is distinguished mainly by the human voice rather than by musical structure; videos that are essentially identical in title and picture but differ to various degrees in volume, sound quality, and timbre therefore still cannot be identified, and videos whose pictures and titles are similar but whose audio differs cannot be detected accurately. The accuracy of such video detection is therefore low.
In view of this, embodiments of the present application provide a video detection method, apparatus, device, and computer-readable storage medium that can improve the accuracy of video detection. An exemplary application of the video detection device provided in the embodiments of the present application is described below: the device may be implemented as various types of user terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device), and may also be implemented as a server. In the following, an exemplary application is described in which the video detection device is implemented as a server.
Referring to fig. 2, fig. 2 is a schematic diagram of an alternative architecture of a video detection system provided in an embodiment of the present application; as shown in fig. 2, to support a video detection application, in the video detection system 100, a terminal 400 (illustratively, a terminal 400-1 and a terminal 400-2 are shown, where the terminal 400-1 is a video production device and the terminal 400-2 is a content consumption device) is connected to a server 200 (video detection device) through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two. In addition, the video detection system 100 further includes a database 500 for providing data support to the server 200 when the server 200 performs video detection.
The terminal 400-1 is configured to send the video to be detected to the server 200 through the network 300 when receiving the video to be detected through the publishing control; and is also used to receive the video detection result transmitted by the server 200 through the network 300.
The server 200 is configured to obtain a video to be detected sent by the terminal 400-1 through the network 300; based on the video to be detected, video recall is performed from a video resource library through a database 500 to obtain a video to be compared; separating audio information corresponding to the video to be detected to obtain audio to be detected, and extracting the characteristics of the audio to be detected on the audio characteristics to obtain audio fingerprints to be detected, wherein the audio characteristics are the characteristics of the audio information on the sense of hearing; separating audio information corresponding to the video to be compared to obtain audio to be compared, and extracting the characteristics of the audio to be compared on the audio characteristics to obtain an audio fingerprint to be compared; and comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and determining a video detection result of the video to be detected aiming at the video to be compared based on the comparison result, wherein the video detection result is a detection result of whether the video to be detected is a repeated video aiming at the video to be compared. And is further configured to send the video to be detected to the terminal 400-2 through the network 300 based on the video detection result, and send the video detection result to the terminal 400-1 through the network 300.
And the terminal 400-2 is configured to receive the video to be detected sent by the server 200 through the network 300, and play the video to be detected on the graphical interface.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 3, fig. 3 is a schematic diagram of a component structure of a server in fig. 2 according to an embodiment of the present disclosure, where the server 200 shown in fig. 3 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 3.
The processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 250 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating to other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: bluetooth, Wi-Fi, and Universal Serial Bus (USB), etc.;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the video detection apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates a video detection apparatus 255 stored in a memory 250, which may be software in the form of programs and plug-ins, and includes the following software modules: video capture module 2551, video recall module 2552, feature capture module 2553, video detection module 2554, and result processing module 2555, which are logical and thus can be arbitrarily combined or further split depending on the functionality implemented. The functions of the respective modules will be explained below.
In other embodiments, the video detection apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the video detection apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to perform the video detection method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In the following, the video detection method provided by the embodiment of the present application will be described in conjunction with an exemplary application and implementation of the server provided by the embodiment of the present application.
Referring to fig. 4, fig. 4 is an alternative flowchart of a video detection method provided in an embodiment of the present application, which will be described with reference to the steps shown in fig. 4.
S401, acquiring a video to be detected.
In the embodiment of the application, when the video production end device receives a video published by a user, for example, a video produced by a PGC (Professionally Generated Content), MCN (Multi-Channel Network) or UGC (User Generated Content) producer, the video detection device obtains that video as the video to be detected.
It should be noted that the video to be detected is a video to be subjected to duplicate removal detection, and the duplicate removal detection is to detect whether the video to be detected and the published video form a duplicate video; and the video to be detected comprises audio information; the video to be detected may be a short video type video, or may also be other videos of a non-short video type, and the like, which is not specifically limited in this embodiment of the application. In addition, a communication connection is established between the video production end device and the video detection device, and the communication connection may be a direct communication connection or a communication connection established through an intermediate device, which is not specifically limited in this embodiment of the present application.
S402, based on the video to be detected, video recalling is carried out from the video resource library, and the video to be compared is obtained.
In the embodiment of the application, the video detection device can access videos published by users earlier (that is, before the video to be detected is published and acquired); these previously published videos are stored in, or accessible to, the video detection device as a video resource library. Therefore, after the video detection device obtains the video to be detected, in order to perform deduplication detection on it, the device recalls from the video resource library the videos that meet a similarity condition with the video to be detected, and obtains the video to be compared based on the recalled videos.
It should be noted that the video resource library is a set formed by videos obtained before the video to be detected is obtained. In addition, the video recall can be performed based on the title and/or picture of the video to be detected (in which case the video to be compared is a video meeting the similarity condition with the video to be detected in title and/or picture), and can also be performed based on other content of the video to be detected. The video to be compared may be all of the recalled videos, any one of the recalled videos, or a subset of the recalled videos, which is not specifically limited in this embodiment of the application. The similarity condition is, for example, that the similarity exceeds a recall similarity threshold (or, where the similarity value is a distance, that it is equal to or less than the threshold).
S403, separating audio information corresponding to the video to be detected to obtain audio to be detected, and extracting the characteristics of the audio to be detected on the audio characteristics to obtain the audio fingerprint to be detected.
In the embodiment of the present application, the recalled video and the video to be detected are videos to be compared with each other (for example, videos whose similarity in title and/or picture satisfies the recall similarity threshold), and in order to perform deduplication processing on the video to be detected, the video detection device performs video detection processing from the audio perspective. Therefore, the video detection device separates the audio information in the video to be detected (for example, using FFmpeg), and the separated audio information is the audio to be detected. Then, the video detection device extracts the features of the audio to be detected on the audio characteristics, and the extracted features are the audio fingerprint to be detected.
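By way of illustration, the audio separation step might be implemented with the FFmpeg command-line tool mentioned above; the following is a minimal sketch in Python, assuming FFmpeg is installed, with illustrative file names and an assumed 16 kHz mono output format:

```python
# A minimal sketch of the audio-separation step; the file names and the
# mono/16 kHz output format are illustrative assumptions.
import subprocess

def separate_audio(video_path: str, audio_path: str) -> None:
    """Strip the audio track from a video file into a mono 16 kHz WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn",            # drop the video stream
         "-ac", "1",       # mix down to one channel
         "-ar", "16000",   # resample to 16 kHz
         audio_path],
        check=True,
    )

separate_audio("video_to_detect.mp4", "audio_to_detect.wav")
```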
It should be noted that the audio characteristic is an auditory characteristic of the audio information, and includes one or more of volume, tone quality (or pitch), and timbre; a feature on the audio characteristic is a feature that describes sound as it is perceived, i.e., an acoustic feature, such as an MFCC (Mel-scale Frequency Cepstral Coefficients) feature.

In terms of hearing, different tone qualities/pitches correspond to different hearing sensitivities; for example, human ears have different sensitivities to sound waves of different frequencies, and speech signals from 200 Hz (hertz) to 5000 Hz have different influences on the intelligibility of speech. Different volumes likewise correspond to different hearing sensitivities; for example, when two sounds of different loudness act on the human ear, the frequency component with higher loudness affects the perception of the frequency component with lower loudness, making the latter less noticeable (a phenomenon called the masking effect). In addition, lower-frequency sounds travel a greater distance along the cochlear basilar membrane than higher-frequency sounds, so bass tends to mask treble, while treble masks bass with more difficulty; and the critical bandwidth of sound masking at low frequencies is smaller than at high frequencies. Therefore, the input signal can be filtered by a group of band-pass filters arranged by critical bandwidth from low to high frequency, the signal energy output by each band-pass filter can be used as a basic characteristic of the signal, and this characteristic, after further processing, can serve as an input feature for speech. Moreover, human ears perceive sounds of different frequencies differently, and MFCC features make no assumption or limitation on the input signal while exploiting these auditory characteristics.

By contrast, video detection methods based on the "chromaprint" feature cannot identify audio that differs in volume, tone quality and timbre; and because videos are highly varied, in the large number of drama and factual videos that are not purely music, the audio is mainly distinguished by the human-voice part. The MFCC features are therefore widely used in the speech recognition field; being based on the Mel frequency, the extracted features emphasize the low-frequency, low-amplitude part and better conform to the frequency distribution of the human voice. In summary, auditory features such as MFCC features can accurately identify audio that differs in volume, tone quality and timbre.
S404, separating audio information corresponding to the video to be compared to obtain audio to be compared, and extracting the characteristics of the audio to be compared on the audio characteristic to obtain the audio fingerprint to be compared.
It should be noted that the feature used for comparing the video to be detected with the video to be compared is, on the side of the video to be detected, a feature of the audio to be detected on the audio characteristic; therefore, the video detection device needs to perform processing similar to S403 on the video to be compared to obtain the corresponding feature on the audio characteristic used for the comparison, which is not described again in this embodiment of the application.
Here, the audio to be compared is the audio information of the video to be compared, and the audio fingerprint to be compared is the characteristic of the audio to be compared on the audio characteristic. In addition, S403 and S404 are not in sequence in execution order.
S405, comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and determining a video detection result of the video to be detected aiming at the video to be compared based on the comparison result.
In the embodiment of the present application, after the video detection device obtains the features of the video to be detected and the video to be compared on the audio characteristic, namely the audio fingerprint to be detected and the audio fingerprint to be compared, it compares the two audio fingerprints; completing this comparison also completes the comparison between the video to be detected and the video to be compared, and the video detection result of the video to be detected with respect to the video to be compared is determined based on the comparison result.
It should be noted that the comparison result refers to the result of comparing the audio fingerprint to be detected with the audio fingerprint to be compared, and represents the similarity between the two; the video detection result is the detection result of whether the video to be detected is a repeated video with respect to the video to be compared, and may be either a result indicating that it is a repeated video or a result indicating that it is not.
It can be understood that, when the video to be detected is compared with the recalled video to be compared, whether the video to be detected is a repeated video is determined by comparing the features of the audio information of the two videos on the audio characteristic. The audio characteristics are auditory characteristics of the audio information, such as volume, tone quality and timbre, and videos such as "a series of lecture videos", although similar in picture or title, differ in these audio characteristics; therefore, whether such videos are repeated videos can be accurately identified, and the accuracy of video detection can be improved.
It can be further understood that the video detection method provided by the embodiment of the application also separates video recall from video deduplication into layers, so that the video recall amount can be increased while the number of subsequent reviews (such as manual reviews) is reduced, improving the efficiency of video review; it also allows video recall and video deduplication to be optimized separately.
In the embodiment of the present application, in S403, the video detection device extracts features of the audio to be detected on the audio characteristics to obtain the audio fingerprint to be detected, including S4031 to S4033, and the following steps are respectively described.
S4031, extracting a plurality of sub-audio to be detected from the audio to be detected based on the preset frame unit.
It should be noted that, a preset frame unit is preset in the video detection device, or the video detection device can acquire the preset frame unit, where the preset frame unit refers to the size of one frame of audio, and may be time (for example, 1 second), the number of included sampling points, and the like, and this is not specifically limited in this embodiment of the present application.
In the embodiment of the application, the video detection device performs framing processing on the audio to be detected based on the preset frame unit, namely, multiple frames of sub-audio to be detected are extracted from the audio to be detected; that is, the audio to be detected includes a plurality of frames of sub audio to be detected.
S4032, extracting the characteristics of each frame of sub audio to be detected in the multi-frame sub audio to be detected on the audio characteristics, and obtaining the initial sub audio to be detected fingerprint.
In the embodiment of the application, when the video detection device performs feature extraction on audio characteristics, the feature extraction is performed with a frame granularity, so that after a plurality of frames of sub-audio to be detected are obtained, the feature extraction on the audio characteristics is performed for each frame of sub-audio to be detected in the plurality of frames of sub-audio to be detected, and then the features of one frame of sub-audio to be detected on the audio characteristics, namely the initial sub-audio to be detected fingerprint, are obtained.
Here, when the video detection device completes feature extraction on the audio characteristics of each sub-to-be-detected audio, a multiframe initial sub-to-be-detected audio fingerprint corresponding to a multiframe sub-to-be-detected audio is obtained, and the multiframe sub-to-be-detected audio corresponds to the multiframe initial sub-to-be-detected audio fingerprint one to one.
S4033, dimensionality reduction is carried out on the initial sub audio fingerprint to be detected to obtain a sub audio fingerprint to be detected, and therefore a multi-frame sub audio fingerprint to be detected corresponding to the audio to be detected is obtained.
Considering that each frame of initial sub audio fingerprint to be detected occupies a large space, the video detection device performs dimension reduction on the initial sub audio fingerprint to be detected, and the dimension-reduced initial sub audio fingerprint to be detected is the sub audio fingerprint to be detected; when the acquisition of the sub audio fingerprint to be detected corresponding to each frame of initial sub audio fingerprint to be detected is completed, the multiple frames of sub audio fingerprints to be detected corresponding to the multiple frames of initial sub audio fingerprints to be detected, and thus to the audio to be detected, are obtained. Here, the multiple frames of initial sub audio fingerprints to be detected correspond to the multiple frames of sub audio fingerprints to be detected one to one, and the audio fingerprint to be detected comprises the multiple frames of sub audio fingerprints to be detected. In addition, the video detection device may perform the dimension reduction through clustering, key feature extraction, or other means.
It can be understood that, by framing the audio to be detected, the video detection is performed from the frame granularity, and the granularity of the video detection is fine, so that the accuracy of the video detection can be improved.
In the embodiment of the present application, S4031 is preceded by S4034; that is to say, before the video detection device extracts the multiple frames of sub audio to be detected from the audio to be detected, the video detection method further includes S4034, which is described below.
S4034, pre-emphasis processing is carried out on the audio to be detected, and the audio to be framed is obtained.
It should be noted that the pre-emphasis process is to pass the audio to be detected through a high-pass filter to enhance the high frequency part of the audio to be detected. Here, the audio to be framed is the pre-emphasis processed audio to be detected.
Illustratively, the pre-emphasis process may be implemented by equation (2), where equation (2) is:
H(z) = 1 - μz^(-1) (2)
wherein z denotes the audio to be detected, H(z) denotes the audio to be framed, and μ is the pre-emphasis parameter, which takes a value between 0.9 and 1.0, usually 0.97.
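As an illustration, pre-emphasis per formula (2) amounts to subtracting μ times the previous sample from each sample; a minimal sketch, with μ = 0.97 as stated above:

```python
# Pre-emphasis per formula (2): output[n] = input[n] - mu * input[n-1].
import numpy as np

def pre_emphasize(signal: np.ndarray, mu: float = 0.97) -> np.ndarray:
    # The first sample is kept as-is; every later sample has mu times its
    # predecessor subtracted, boosting the high-frequency part.
    return np.append(signal[0], signal[1:] - mu * signal[:-1])
```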
Correspondingly, in this embodiment of the present application, in S4031, the video detection device extracting multiple frames of sub audio to be detected from the audio to be detected based on the preset frame unit includes: the video detection device extracts the multiple frames of sub audio to be detected from the audio to be framed based on the preset frame unit. That is to say, if the audio to be detected is subjected to pre-emphasis processing before framing, the video detection device frames the audio to be framed obtained by the pre-emphasis processing.
In the embodiment of the present application, the video detection device in S4031 extracts, based on a preset frame unit, multiple sub-frames of audio to be detected from the audio to be detected, including S40311 and S40312, and each step is described below.
S40311, sampling the audio to be detected based on a preset sampling frequency to obtain a plurality of sampling points.
In the embodiment of the application, a preset sampling frequency is preset in the video detection device, or the video detection device can acquire the preset sampling frequency, wherein the preset sampling frequency is the sampling frequency for the audio to be detected when framing processing is performed; here, the video detection apparatus samples the audio to be detected based on a preset sampling frequency, and a sampling result obtained when the sampling is completed, i.e., a plurality of sampling points.
S40312, sequentially selecting and combining sampling points with a preset number of sampling points from a first sampling point among the plurality of sampling points to form a frame of sub audio to be detected, and continuously selecting and combining the sampling points with the preset number of sampling points to form a next frame of sub audio to be detected from a position corresponding to a preset overlapping sampling point before a selection end position of the sampling point until the plurality of sampling points are selected and processed, thereby obtaining a plurality of frames of sub audio to be detected.
It should be noted that, after the video detection device obtains the plurality of sampling points, it combines sampling points among them into frames of audio. Here, a preset number of sampling points is preset in the video detection device, or the video detection device can acquire the preset number of sampling points, where the preset number of sampling points refers to the number of sampling points included in one frame of audio. The video detection device starts from the first of the plurality of sampling points and selects the preset number of sampling points to combine into one frame of sub audio to be detected; then, starting from the position that overlaps the current frame by the preset number of overlapping sampling points, it selects the preset number of sampling points to form the next frame of sub audio to be detected; the combination continues in this way until all of the plurality of sampling points have been selected, and all the obtained sub audio to be detected constitutes the multiple frames of sub audio to be detected.
Here, the preset frame unit is determined based on a preset sampling frequency and a preset number of sampling points; for example, when the audio to be detected is sampled based on the preset sampling frequency, if the number of the preset sampling points is 256 or 512, the preset frame unit is 20 to 30 milliseconds. In addition, the number of preset overlapped sampling points is smaller than the number of preset sampling points, for example, the number of preset overlapped sampling points is 1/2 or 1/3 of the number of preset sampling points.
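A minimal sketch of this overlapped framing, assuming a 512-sample frame with a 50% overlap (both values follow the examples above; trailing samples that do not fill a frame are dropped):

```python
import numpy as np

def frame_signal(samples: np.ndarray,
                 frame_len: int = 512,   # preset number of sampling points
                 overlap: int = 256      # preset number of overlapping points
                 ) -> np.ndarray:
    # Consecutive frames start (frame_len - overlap) samples apart, so each
    # frame shares `overlap` samples with its predecessor.
    step = frame_len - overlap
    n_frames = 1 + (len(samples) - frame_len) // step  # assumes len >= frame_len
    return np.stack([samples[i * step: i * step + frame_len]
                     for i in range(n_frames)])
```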
It can be understood that, when the audio to be detected is framed, it is divided into multiple overlapping frames of sub audio to be detected, so that the change between adjacent frames is gentle and richer audio features can be extracted; therefore, when video detection is performed based on the extracted features, the precision of video detection can be improved.
In the embodiment of the application, S4032 may be implemented by S40321-S40325; that is to say, the video detection device extracts the characteristics of each frame of sub audio to be detected in the multiple frames of sub audio to be detected on the audio characteristics, and obtains the initial sub audio fingerprints to be detected, including S40321-S40325, and the following steps are described separately.
S40321, windowing each frame of sub audio to be detected in the multi-frame sub audio to be detected to obtain sub audio to be converted.
In the embodiment of the application, in order to increase the continuity of each frame of sub audio to be detected, the video detection device performs windowing processing on each frame of sub audio to be detected, and each frame of sub audio to be detected after windowing processing is the sub audio to be converted; here, after the video detection device completes windowing of each frame of sub audio to be detected, a multi-frame sub audio to be converted corresponding to the multi-frame sub audio to be detected is obtained, and the multi-frame sub audio to be detected corresponds to the multi-frame sub audio to be converted one to one.
The windowing process makes the time-domain signal better satisfy the periodicity requirement of the frequency-domain transform and reduces spectral leakage.
Illustratively, the windowing process may be implemented by equation (3), where equation (3) is:
S'(n) = S(n) × W(n), n = 0, 1, …, N-1 (3)
wherein S(n) is the value of one frame of sub audio to be detected at the n-th sampling point, W(n) is the corresponding window function value, and S'(n) is the value of the sub audio to be converted at the n-th sampling point; N is the preset number of sampling points. Here, W(n) is represented by formula (4):
W(n) = (1 - a) - a × cos(2πn / (N - 1)), 0 ≤ n ≤ N-1 (4)
where a is a windowing parameter, e.g., 0.46 (which yields the Hamming window); different values of a yield different window functions W(n).
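A minimal sketch of the windowing of formulas (3) and (4), with a = 0.46 (the Hamming window) as in the example above:

```python
import numpy as np

def window_frames(frames: np.ndarray, a: float = 0.46) -> np.ndarray:
    # frames: (num_frames, N) matrix of framed sub audio to be detected.
    N = frames.shape[1]
    n = np.arange(N)
    w = (1.0 - a) - a * np.cos(2.0 * np.pi * n / (N - 1))  # formula (4)
    return frames * w                                      # formula (3)
```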
S40322, the sub audio to be converted is converted into energy distribution on a frequency domain to obtain a sub frequency spectrum to be detected, and a power spectrum of the sub frequency spectrum to be detected is obtained to obtain a sub power spectrum to be detected.
It should be noted that, because the characteristics of the sub audio to be converted are usually difficult to observe in the time domain, the video detection device converts the sub audio to be converted into an energy distribution in the frequency domain for observation; different energy distributions represent different audio characteristics. After windowing, the video detection device performs a frequency-domain transform (e.g., a DFT) on the sub audio to be converted to obtain the energy distribution in the frequency domain, i.e., the sub spectrum to be detected. Next, the video detection device obtains the power spectrum of the sub spectrum to be detected, thereby obtaining the sub power spectrum to be detected; here, the power spectrum may be obtained by taking the modulo square of the sub spectrum to be detected, by taking its absolute value, or by taking its square.
Exemplarily, the frequency-domain transform may be implemented by formula (5), where formula (5) is:
X_a(k) = Σ_{n=0}^{N-1} S'(n) × e^(-j2πkn/N), 0 ≤ k ≤ N-1 (5)
wherein X_a(k) is the k-th point of the sub spectrum to be detected, N is the preset number of sampling points, and j is the imaginary unit.
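A minimal sketch of S40322, applying a per-frame DFT (formula (5)) and taking the modulo square, which is one of the three options listed above:

```python
import numpy as np

def power_spectrum(windowed_frames: np.ndarray) -> np.ndarray:
    # rfft keeps the N//2 + 1 non-redundant bins of the real-input DFT.
    spectrum = np.fft.rfft(windowed_frames, axis=1)  # formula (5)
    return np.abs(spectrum) ** 2                     # modulo square
```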
S40323, smoothing the sub power spectrum to be detected to obtain a sub-smoothed power spectrum.
Illustratively, the video detection device inputs the sub power spectrum to be detected into a set of Mel-scale triangular filters for smoothing, where the frequency response of the m-th triangular filter in the set is given by formula (6):
H_m(k) = 0, for k < f(m-1)
H_m(k) = (k - f(m-1)) / (f(m) - f(m-1)), for f(m-1) ≤ k ≤ f(m)
H_m(k) = (f(m+1) - k) / (f(m+1) - f(m)), for f(m) < k ≤ f(m+1)
H_m(k) = 0, for k > f(m+1) (6)
wherein M is the number of triangular filters and 1 ≤ m ≤ M; f(m-1), f(m) and f(m+1) are center frequencies, determined based on the highest and lowest frequencies of the sub power spectrum to be detected; and the correspondence between the Mel frequency and the audio frequency is given by formula (7), where formula (7) is:
Mel(f)=2595×lg(1+f/700) (7)
wherein Mel(f) is the Mel frequency, and f is the frequency corresponding to X_a(k).
H_m(k) is the frequency response of the m-th triangular filter at the k-th sampling point, and the filter bank satisfies formula (8):
Σ_{m=1}^{M} H_m(k) = 1 (8)
Thus, the sub-smoothed power spectrum is obtained by formula (9), where formula (9) is:
y(m) = Σ_{k=0}^{N-1} |X_a(k)|^2 × H_m(k), 1 ≤ m ≤ M (9)
wherein y(1) …… y(m) …… y(M) is the sub-smoothed power spectrum.
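A minimal sketch of the Mel-scale triangular filter bank and the smoothing of formulas (6)-(9); the filter count M = 26, the 512-point frame and the 16 kHz sampling rate are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np

def mel_filterbank(n_filters: int = 26, n_fft: int = 512, sr: int = 16000):
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)            # formula (7)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Center frequencies f(0) .. f(M+1), spaced evenly on the Mel scale.
    mel_pts = np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * inv_mel(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):                              # formula (6)
        rise = np.arange(bins[m - 1], bins[m])
        fall = np.arange(bins[m], bins[m + 1])
        fb[m - 1, rise] = (rise - bins[m - 1]) / max(bins[m] - bins[m - 1], 1)
        fb[m - 1, fall] = (bins[m + 1] - fall) / max(bins[m + 1] - bins[m], 1)
    return fb

def smooth(power: np.ndarray, fb: np.ndarray) -> np.ndarray:
    # formula (9): y(m) = sum_k |X_a(k)|^2 * H_m(k), computed per frame.
    return power @ fb.T
```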
S40324, inverse transformation is carried out on the logarithmic energy of the sub-smooth power spectrum, and audio characteristic parameters of preset orders of the inverse transformation result are obtained.
It should be noted that the inverse transform is the inverse counterpart of the frequency-domain transform in S40322; that is, the video detection device transforms the logarithmic energy of the sub-smoothed power spectrum back out of the frequency domain to obtain cepstral information. The inverse transform is, for example, a Discrete Cosine Transform (DCT). Here, the audio characteristic parameters are, for example, MFCC coefficients.
Illustratively, the inverse transformation may be implemented by formula (10), where formula (10) is:
C(l) = Σ_{m=1}^{M} ln(y(m)) × cos(πl(m - 0.5) / M), l = 1, 2, …, L (10)
wherein C(l) is the l-th order MFCC coefficient among the audio characteristic parameters; L is a predetermined order, e.g., 12 or 16.
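A minimal sketch of S40324, taking the log energy of the smoothed spectrum and applying a DCT; SciPy's dct is used here as one possible implementation, and L = 12 follows the example above:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_coefficients(smoothed: np.ndarray, order: int = 12) -> np.ndarray:
    # smoothed: (num_frames, M) sub-smoothed power spectra; formula (10).
    log_energy = np.log(smoothed + 1e-10)   # small epsilon avoids log(0)
    coeffs = dct(log_energy, type=2, axis=1, norm="ortho")
    return coeffs[:, 1:order + 1]           # keep the first L orders, drop C(0)
```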
S40325, differential parameters of the audio characteristic parameters and frame energy of each frame of the sub audio to be detected are obtained, and accordingly initial sub audio fingerprints to be detected including one or more of the audio characteristic parameters, the differential parameters and the frame energy are obtained.
It should be noted that the audio characteristic parameters are static characteristics of the audio; in order to obtain features of the audio characteristic with high accuracy, the video detection device extracts difference parameters from the audio characteristic parameters to obtain dynamic characteristics of the audio. In addition, the frame energy reflects the volume of the sub audio to be detected; for example, the frame energy may be obtained as 10 times the base-10 logarithm of the sum of squares of the amplitudes of the sub audio to be detected. For example, the initial sub audio fingerprint to be detected includes: an N-dimensional MFCC parameter (N/3 MFCC coefficients + N/3 first-order difference parameters + N/3 second-order difference parameters) plus the frame energy.
Illustratively, the difference parameter may be obtained by formula (11), where formula (11) is:
d_t = (Σ_{v=1}^{V} v × (C_{t+v} - C_{t-v})) / (2 × Σ_{v=1}^{V} v^2) (11)
where V is the time difference used for the first derivative, e.g., 1 or 2; d_t is the first-order difference parameter of the t-th frame, and C_t is the audio characteristic parameter of the t-th frame.
When the difference parameters include both first-order and second-order difference parameters, the second-order difference parameters are obtained by applying formula (11) again to the first-order difference parameters.
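A minimal sketch of the difference parameters of formula (11) and of the frame energy described above; V = 2 is one of the example values:

```python
import numpy as np

def delta(coeffs: np.ndarray, V: int = 2) -> np.ndarray:
    # coeffs: (num_frames, L) MFCC coefficients; formula (11) per frame t.
    denom = 2.0 * sum(v * v for v in range(1, V + 1))
    padded = np.pad(coeffs, ((V, V), (0, 0)), mode="edge")  # replicate edges
    T = len(coeffs)
    return sum(v * (padded[V + v: T + V + v] - padded[V - v: T + V - v])
               for v in range(1, V + 1)) / denom

def frame_energy(frames: np.ndarray) -> np.ndarray:
    # 10 times the base-10 logarithm of the per-frame sum of squared amplitudes.
    return 10.0 * np.log10(np.sum(frames ** 2, axis=1) + 1e-10)
```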
In the embodiment of the application, S4033 may be implemented by S40331-S40333; that is to say, the video detection device performs dimensionality reduction on the initial sub audio fingerprint to be detected to obtain sub audio fingerprints to be detected, including S40331-S40333, and the following steps are respectively described.
S40331, removing lowest frequency features of each initial sampling point audio fingerprint in the initial sub audio fingerprints to be detected to obtain S-1 dimensional features.
It should be noted that the initial sub-audio fingerprint to be detected includes an initial sampling point audio fingerprint with a preset number of sampling points, each initial sampling point audio fingerprint includes an S-dimensional feature, and S is a positive integer greater than 1.
S40332, clustering and dimensionality reduction are carried out on the S-1 dimensional features based on the preset category number, and clustering categories of the preset category number are obtained.
It should be noted that a preset number of categories is preset in the video detection device, or the video detection device can acquire the preset number of categories, where the preset number of categories is smaller than S-1; the video detection device clusters the S-1 dimensional features to obtain cluster categories of the preset number of categories, thereby realizing dimension reduction of the S-1 dimensional features.
Illustratively, when each initial sampling point audio fingerprint includes 12-dimensional floating-point features (the S-dimensional features), the lowest-frequency feature is removed to obtain 11-dimensional floating-point features, i.e., features occupying 11 × 4 bytes; these 11-dimensional floating-point features (the S-1 dimensional features) are clustered into 2^8 = 256 classes (the preset number of categories), i.e., a 1-byte feature; compared with the 12-dimensional floating-point features, the 1-byte feature correspondingly occupies less space.
S40333, determining the clustering center information of the clustering category as the sampling point audio fingerprint of each initial sampling point audio fingerprint, so as to obtain the sub-audio fingerprint to be detected corresponding to the initial sub-audio fingerprint to be detected.
It should be noted that after the video detection device obtains the preset number of cluster categories, the cluster center of each cluster category can be used as the audio fingerprint of each sampling point of the audio fingerprint of each initial sampling point; therefore, after the video detection equipment finishes acquiring the audio fingerprints of the sampling points of each initial sampling point audio fingerprint, the audio fingerprints of the sampling points with preset sampling points corresponding to the initial sub-audio fingerprints to be detected can be obtained; and the sub-audio fingerprint to be detected comprises a sampling point audio fingerprint with preset sampling points.
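A minimal sketch of the clustering-based dimension reduction of S40331-S40333, using scikit-learn's KMeans as one possible clustering implementation; the assumption that the first dimension holds the lowest-frequency feature is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_fingerprints(features: np.ndarray, n_classes: int = 256):
    # features: (num_sampling_points, S) initial sampling point audio
    # fingerprints; the first column is assumed to be the lowest-frequency
    # feature and is removed (S40331).
    reduced = features[:, 1:]
    km = KMeans(n_clusters=n_classes, n_init=10).fit(reduced)   # S40332
    # Each sampling point is replaced by the index of its cluster center,
    # a 1-byte code standing in for the S-1 dimensional feature (S40333).
    return km.labels_.astype(np.uint8), km.cluster_centers_
```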
Referring to fig. 5, fig. 5 is a schematic flow chart of another alternative video detection method provided in the embodiment of the present application; as shown in fig. 5, in the embodiment of the present application, S405 may be implemented by S4051-S4053; that is to say, the video detection device compares the audio fingerprint to be detected with the audio fingerprint to be compared, and determines the video detection result of the video to be detected with respect to the video to be compared based on the comparison result, including S4051-S4053, which is described below for each step.
S4051, comparing each frame of sub audio fingerprints to be detected in the audio fingerprints to be detected with each frame of sub audio fingerprints to be compared in the audio fingerprints to be compared one by one to obtain comparison results corresponding to each frame of sub audio fingerprints to be detected and each frame of sub audio fingerprints to be compared.
In the embodiment of the application, the audio fingerprint to be detected comprises multiple frames of sub audio fingerprints to be detected, and the audio fingerprint to be compared likewise comprises multiple frames of sub audio fingerprints to be compared; therefore, the video detection device compares each frame of sub audio fingerprint to be detected with each frame of sub audio fingerprint to be compared one by one, and the comparison information of every such pair of frames together forms the comparison result.
It should be noted that the number of frames corresponding to the multi-frame sub-to-be-detected audio fingerprint and the number of frames corresponding to the multi-frame to-be-compared audio fingerprint may be the same or different, and the number of sub-comparison results (comparison information) of the sub-to-be-detected audio fingerprint and the sub-to-be-compared audio fingerprint included in the comparison results is a product value of the number of frames corresponding to the multi-frame sub-to-be-detected audio fingerprint and the number of frames corresponding to the multi-frame to-be-compared audio fingerprint.
S4052, when the preset rule information exists in the comparison result, determining a video detection result that the video to be detected is a repeated video with respect to the video to be compared.
It should be noted that the preset rule information is a similarity trend between each frame of sub audio fingerprints to be detected and each frame of sub audio fingerprints to be compared, for example, the similarity between a preset number of consecutive sub audio fingerprints to be detected and the sub audio fingerprints to be compared is higher than a threshold.
S4053, when the preset rule information does not exist in the comparison result, determining a video detection result that the video to be detected is a non-repeated video with respect to the video to be compared.
It should be noted that S4052 and S4053 are alternative branches: for a given comparison result, only one of them is executed.
In the embodiment of the present application, after S4051, the video detection method further includes S4054-S4057; that is to say, after the video detection device obtains the comparison result corresponding to each frame of sub-audio fingerprint to be detected and each frame of sub-audio fingerprint to be compared, the video detection method further includes S4054-S4057, and the following describes each step separately.
S4054, taking each frame of sub audio fingerprint to be detected as one dimension of a matrix and each frame of sub audio fingerprint to be compared as the other dimension, taking the comparison results as the elements of the matrix, and constructing a similarity matrix.
In the embodiment of the application, the video detection device takes each frame of sub audio fingerprint to be detected as one dimension of the matrix and each frame of sub audio fingerprint to be compared as the other dimension, thereby obtaining a two-dimensional matrix, namely the similarity matrix, whose dimensions are the number of frames of the sub audio fingerprints to be detected and the number of frames of the sub audio fingerprints to be compared; each element in the two-dimensional matrix is the similarity value between one frame of sub audio fingerprint to be detected and one frame of sub audio fingerprint to be compared.
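A minimal sketch of the similarity matrix construction; cosine similarity is an assumed choice of per-frame comparison, since the embodiment does not fix one:

```python
import numpy as np

def similarity_matrix(fp_detect: np.ndarray, fp_compare: np.ndarray) -> np.ndarray:
    # Rows: frames of the audio fingerprint to be detected; columns: frames of
    # the audio fingerprint to be compared; entries: cosine similarity.
    a = fp_detect / (np.linalg.norm(fp_detect, axis=1, keepdims=True) + 1e-10)
    b = fp_compare / (np.linalg.norm(fp_compare, axis=1, keepdims=True) + 1e-10)
    return a @ b.T
```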
S4055, converting the similarity matrix into a similarity matrix map based on the preset correspondence between similarity values and display colors.
It should be noted that the video detection device is preset with a correspondence between similarity values and display colors, so that, given a similarity value, the corresponding display color can be determined; therefore, the video detection device determines and displays the display color corresponding to each similarity value in the similarity matrix based on this preset correspondence, and a similarity matrix map is obtained.
Here, the magnitude of the similarity value correlates with the shade of the display color; and the similarity matrix map can also be displayed, so as to realize visual presentation of the video detection. In addition, the similarity matrix map may take other forms corresponding to the similarity values, such as different graphs.
S4056, when the color difference value between each display color at the diagonal position in the similarity matrix map and a preset color is smaller than a color difference threshold, determining that the preset rule information exists in the comparison result.
In the embodiment of the application, after the video detection device obtains the similarity matrix map, it judges the color difference of the display colors at the diagonal positions in the map; when the color difference value between each display color at the diagonal position and a preset color (for example, the color with hexadecimal value 7FFF00) is smaller than the color difference threshold, it is determined that the preset rule information exists in the comparison result, that is, the audio fingerprint to be detected is similar to the audio fingerprint to be compared, and the video to be detected and the video to be compared therefore constitute repeated videos.
Here, the detection of the preset rule information may also be implemented by classification detection through a neural network, for example, inputting the similarity matrix map into a CNN + XGBoost picture classification detection model to obtain a detection result for the preset rule information.
S4057, when the color difference value between a display color at the diagonal position in the similarity matrix map and the preset color is not smaller than the color difference threshold, determining that the preset rule information does not exist in the comparison result.
It should be noted that, when the color difference value between a display color at the diagonal position in the similarity matrix map and the preset color is not smaller than the color difference threshold, it is determined that the preset rule information does not exist in the comparison result, that is, the audio fingerprint to be detected is not similar to the audio fingerprint to be compared, and the video to be detected and the video to be compared therefore constitute non-repeated videos.
It can be understood that, by constructing the similarity matrix map of the audio fingerprint to be detected and the audio fingerprint to be compared, and by examining the line trend formed by the feature points in the map, whether the videos constitute repeated videos can be determined, which simplifies the video detection process.
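As a numeric analogue of the diagonal color test of S4056/S4057, one might check the similarity values along the diagonal directly; the 0.8 threshold below is purely an assumption:

```python
import numpy as np

def is_duplicate(sim: np.ndarray, threshold: float = 0.8) -> bool:
    # A repeated pair of videos leaves a run of high values on the diagonal
    # of the similarity matrix (the bright diagonal line of the matrix map).
    n = min(sim.shape)
    diag = sim[np.arange(n), np.arange(n)]
    return bool(np.all(diag >= threshold))
```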
In the embodiment of the application, S402 can be realized through S4021-S4024; that is to say, the video detection device recalls the video from the video resource library based on the video to be detected to obtain the video to be compared, including S4021 to S4024, and the following steps are respectively described.
S4021, acquiring video recall characteristics corresponding to the video to be detected.
It should be noted that the video recall feature includes one or more of a content semantic feature, a text semantic feature, a title semantic feature, a body semantic feature, a frame image semantic feature, and a cover image semantic feature. The content semantic features are semantic features corresponding to the content expressed by the video to be detected; the text semantic features are semantic features corresponding to text obtained by text recognition of subtitles, caption text and video frame images in the video to be detected; the title semantic features are semantic features corresponding to the title of the video to be detected; the body semantic features are semantic features corresponding to the content, title and cover picture expressed by the video to be detected; the frame image semantic features are semantic features corresponding to the key frame images or all frame images of the video to be detected; and the cover image semantic features are semantic features corresponding to the cover picture of the video to be detected. Here, the semantic features are, for example, a "Simhash" vector, a "BERT" vector, an "embedding" vector, and the like.
S4022, obtaining the to-be-recalled features corresponding to the videos in the video resource library.
In the embodiment of the application, the to-be-recalled features comprise one or more of a to-be-recalled content semantic feature, a to-be-recalled text semantic feature, a to-be-recalled title semantic feature, a to-be-recalled text semantic feature, a to-be-recalled frame image semantic feature and a to-be-recalled cover map semantic feature; and, the feature to be recalled corresponds to the video recall feature in the feature type, for example, when the video recall feature is a title semantic feature and a frame image semantic feature, the feature to be recalled is the title semantic feature to be recalled and the frame image semantic feature to be recalled.
S4023, determining target features to be recalled which are similar to the video recall features from the features to be recalled based on recall similarity values between the video recall features and the features to be recalled respectively.
In the embodiment of the present application, a target feature to be recalled that is similar to the video recall feature is a feature to be recalled whose recall similarity value is greater than the recall similarity threshold (or, where the recall similarity value is a distance, equal to or less than the threshold). Here, the recall similarity value includes one or more of a Euclidean distance, a vector dot product value, and a cosine similarity value.
S4024, taking the video corresponding to the target to-be-recalled feature in the video resource library as a recalled video, so as to obtain a to-be-compared video belonging to the recalled video.
It should be noted that, in the video resource library, the videos corresponding to the target features to be recalled are the videos that satisfy the recall condition with respect to the video to be detected (the recall similarity value is greater than the recall similarity threshold or, for distance-type values, less than it). Here, the recalled videos include the video to be compared.
In the embodiment of the application, S4023 may be realized by S40231 and S40232; that is, the video detection apparatus determines target features to be recalled, which are similar to the video recall feature, from the respective features to be recalled, including S40231 and S40232, based on recall similarity values between the video recall feature and the respective features to be recalled, respectively, and the following description is made on the respective steps.
S40231, obtaining a recall feature index corresponding to the video recall feature and to-be-recalled feature indexes corresponding to the features to be recalled.
It should be noted that the features to be recalled correspond to the to-be-recalled feature indexes one to one; here, the video detection device constructs the recall feature index corresponding to the video recall feature and the to-be-recalled feature indexes corresponding to the features to be recalled by using a preset search library (e.g., Faiss).
S40232, taking the matching degree between the recall feature index and each to-be-recalled feature index as the recall similarity value, and determining, based on the recall similarity values, the target features to be recalled that are similar to the video recall feature from among the features to be recalled.
It should be noted that the video detection device implements comparison between the video recall feature and each feature to be recalled based on the index, so as to improve the recall efficiency of the video.
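A minimal sketch of index-based recall with Faiss, which the embodiment names as an example search library; the 128-dimensional features, the top-50 cutoff and the 0.9 threshold are illustrative assumptions:

```python
import numpy as np
import faiss

dim = 128
index = faiss.IndexFlatIP(dim)          # inner product as the recall similarity
to_recall = np.random.rand(10000, dim).astype("float32")  # features to be recalled
faiss.normalize_L2(to_recall)           # normalized inner product == cosine
index.add(to_recall)

query = np.random.rand(1, dim).astype("float32")          # video recall feature
faiss.normalize_L2(query)
scores, ids = index.search(query, 50)   # top-50 candidates by recall similarity
recalled = ids[0][scores[0] > 0.9]      # apply the recall similarity threshold
```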
Referring to fig. 6, fig. 6 is a schematic view illustrating an alternative interactive flow of a video detection method provided in an embodiment of the present application; as shown in fig. 6, in the embodiment of the present application, S401 may be implemented by S4011 and S4012; that is, the video detection device obtains the video to be detected, including S4011 and S4012, and the following steps are described separately.
S4011, receiving a video detection request sent by a task scheduling device.
It should be noted that the video to be detected published by the user and received by the video production end device reaches the video detection device via the task scheduling device; that is to say, when the video production end device receives the video to be detected published by the user, the task scheduling device generates a video detection request based on the video uploading request sent by the video production end device and sends the video detection request to the video detection device, so that the video detection device receives it. The video detection request is generated by the task scheduling device in response to the video uploading request sent by the video production end device; the video detection request is a request from the task scheduling device for deduplication detection of the video to be detected, and the video uploading request is a request from the video production end device to publish the video to be detected.
S4012, in response to the video detection request, obtaining the video to be detected from the content storage device.
It should be noted that, after receiving the video detection request, the video detection device starts to execute a detection process of the video to be detected in response to the video detection request, and obtains the video to be detected from the content storage device based on an instruction of the video detection request to perform deduplication detection.
With continued reference to fig. 6, in the embodiment of the present application, after S405, S406 is further included; that is to say, after the video detection device compares the audio fingerprint to be detected with the audio fingerprint to be compared and determines the video detection result of the video to be detected with respect to the video to be compared based on the comparison result, the video detection method further includes step S406, which is described below.
And S406, when the video detection result is that the video to be detected is a repeated video for the video to be compared, sending the video detection result to subsequent detection equipment.
It should be noted that the video detection device sends the video detection result to the subsequent detection device, so that the subsequent detection device generates a subsequent detection request for the video detection result, and obtains the target detection result of the video to be detected in response to the subsequent detection request, that is, S407.
Here, the subsequent detection request may be a request for rechecking the video to be detected and the video to be compared through a network model, or a request for manually reviewing the video to be detected and the video to be compared. In addition, the target detection result may be a recheck result from the network model or a manual review result; the target detection result is the detection result, determined through the subsequent review, of whether the video to be detected is a repeated video with respect to the video to be compared.
With continued reference to fig. 6, in the embodiment of the present application, after S405, S408 is further included; that is to say, after the video detection device compares the audio fingerprint to be detected with the audio fingerprint to be compared and determines the video detection result of the video to be detected with respect to the video to be compared based on the comparison result, the video detection method further includes S408, which is described below.
And S408, when the video detection result is that the video to be detected is a non-repetitive video for the video to be compared, sending the video to be detected to the task scheduling equipment.
It should be noted that the video detection device sends the video to be detected to the task scheduling device, so that the task scheduling device pushes the video to be detected to the content consumption device through the content distribution device based on the acquired recommendation information, that is, S409. Here, the task scheduling device pushes the video to be detected to the content consuming device so that the content consuming device plays the video to be detected, i.e., S410.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Referring to fig. 7, fig. 7 is a structural diagram of an exemplary video detection system according to an embodiment of the present disclosure; as shown in fig. 7, the exemplary video detection system comprises content production ends 7-101 (video production end devices), an uplink and downlink content interface server 7-102, a content database 7-103, a content storage service 7-104 (forming a content storage device together with the content database 7-103), a scheduling center service 7-105 (task scheduling device), a video deduplication service 7-106, a retrieval recall service 7-107, an audio deduplication check service 7-108, an audio fingerprint and other modality vector generation service 7-109, an audio extraction service 7-110, a video download system 7-111, a manual review system 7-112, a content distribution outlet service 7-113 (the services 7-106 to 7-111 together forming the video detection device) and content consumption ends 7-114 (content consumption end devices). Wherein:
the content production end 7-101 comprises a PGC end, a UGC end and an MCN end, and is used for acquiring an uploaded video (to-be-detected video) through a video publishing page or a rear-end video publishing interface, communicating with the uplink and downlink content interface servers 7-102, and sending the uploaded video to the uplink and downlink content interface servers 7-102; and the system is also used for acquiring behavior data of the uploaded video and sending the acquired behavior data of the uploaded video to the background server for statistical analysis. The uploaded video is usually short video or small video obtained by shooting at one shooting end, and the shot video can be subjected to video beautification functions such as background music and filter templates in the shooting process.
The uplink and downlink content interface servers 7 to 102 are used for communicating with the content production terminals 7 to 101, acquiring the uploaded videos, storing meta information (such as video source file size, release time, title, author, cover page map, category, label information and the like) of the uploaded videos into the content databases 7 to 103, and storing entities (such as video source files) of the uploaded videos into the content storage services 7 to 104; and also for sending an audit request (video detection request) for the uploaded video to the dispatch center service 7-105; and also for providing video index information to the content consuming end 7-114.
A content database 7-103 for storing the meta information of the uploaded video sent by the uplink and downlink content interface server 7-102; also for updating the meta information based on the meta information sent by the scheduling center service 7-105; and also for providing meta information of videos to the video deduplication service 7-106 (in practice, to the audio fingerprint and other modality vector generation service 7-109) and to the manual review system 7-112 (not shown in the figure).
The content storage service 7-104 is used for storing the entity of the uploaded video sent by the uplink and downlink content interface server 7-102; but also for providing video source files to the video download system 7-111 and the content consumer 7-114.
The scheduling center service 7-105 is used for receiving the review request (video detection request) for the uploaded video sent by the uplink and downlink content interface server 7-102, and scheduling the manual review system 7-112 and the video deduplication service 7-106 to carry out video review; it is also used for scheduling the content distribution outlet service 7-113 to distribute videos.
The video deduplication service 7-106 is used for scheduling the retrieval recall service 7-107 to carry out video recall (not shown in the figure) and scheduling the audio deduplication check service 7-108 to carry out audio check and deduplication (not shown in the figure), thereby also scheduling the video download system 7-111.
The retrieval recall service 7-107 is used for acquiring, through the audio fingerprint and other modality vector generation service 7-109, the vector corresponding to each video (an audio fingerprint or another modality vector, where the other modality vectors include the video recall features and the features to be recalled), constructing vector matching retrieval through Faiss based on those vectors so as to realize fast recall of similar videos, and sending the reading information of the recalled videos to the audio deduplication check service 7-108 so that the recalled videos can be acquired.
The audio deduplication check service 7-108 receives the reading information of the recalled videos sent by the retrieval recall service 7-107, reads the recalled videos from the content storage service 7-104 through the video download system 7-111, extracts the audio of each video to be compared among the recalled videos (the audio to be compared) through the audio extraction service 7-110, and acquires the audio fingerprints of the extracted audio (the audio fingerprints to be compared) through the audio fingerprint and other modality vector generation service 7-109. It also responds to the scheduling of the video deduplication service 7-106 by reading the entity of the uploaded video (the video to be detected) from the content storage service 7-104 through the video download system 7-111, extracting its audio (the audio to be detected) through the audio extraction service 7-110, and acquiring the audio fingerprint of the extracted audio (the audio fingerprint to be detected) through the audio fingerprint and other modality vector generation service 7-109. It is further used for comparing the obtained audio fingerprints to realize the audio deduplication check: when the two videos are determined to be repeated videos, the manual review system 7-112 is invoked; when they are determined to be non-repeated videos, feedback is sent to the scheduling center service 7-105 through the video deduplication service 7-106, so that the scheduling center service 7-105 schedules the content distribution outlet service 7-113 to distribute the video.
An audio fingerprint and other modality vector generation service 7-109 for generating recall vectors (other modality vectors) and audio fingerprints.
An audio extraction service 7-110 for extracting audio information from the video, e.g. by means of FFmpeg separating the audio information from the video content.
A video download system 7-111 for reading video source files from the content storage service 7-104.
The manual review system 7-112 is used for providing videos to the content consumption ends 7-114 through the display page of the content distribution outlet service 7-113; it is also used for reviewing and filtering content that machines cannot reliably judge, such as politically sensitive, pornographic or otherwise illegal content, by reading data in the content database 7-103 and the content storage service 7-104, while returning the result and state of the manual review to the content database 7-103; and it is also used for labeling and secondary confirmation of video content.
A content distribution outlet service 7-113 for distributing videos to the content consumption end 7-114; the content distribution outlet service 7-113 is, for example, a recommendation engine, a search engine or an operation platform.
The content consumption end 7-114 is used for communicating with the uplink and downlink content interface server 7-102 so as to obtain video index information based on an access request, and further obtain video source files from the content storage service 7-104 based on the obtained video index information; it is further configured to communicate with the content distribution outlet service 7-113 to obtain a distributed video entity, where the video entity is the video source file of a video uploaded by the content production end 7-101, and the source file may be a pushed video or a subscribed video. It is further used for collecting behavior data of downloaded and played videos (such as pause information and loading time) and sending the collected behavior data to the background server for statistical analysis; and it is also used for browsing content data in a Feeds stream, including image-text content, pictures and video.
The following describes how the audio fingerprint and other modal vector generation service 7-109 of fig. 7 acquires an audio fingerprint. Referring to fig. 8, fig. 8 is a schematic diagram of an exemplary process for obtaining an audio fingerprint provided by an embodiment of the present application. As shown in fig. 8, the exemplary process includes two parts, feature acquisition 8-1 and feature dimension reduction 8-2; here, obtaining the audio fingerprint of an uploaded video 8-11 is taken as an example. In the feature acquisition 8-1, audio information 8-12 is first separated from the video 8-11; pre-emphasis (realized by formula (2)) and framing and windowing (realized by formulas (3) and (4)) 8-13 are sequentially performed on the audio information 8-12; frequency domain transformation 8-14 is performed on each frame of windowed audio by formula (5), and the absolute value, the square or the squared modulus 8-15 of each frame of the transformed audio is taken to obtain a power spectrum. Then, Mel filtering 8-16 (realized by formulas (6) and (7)) is applied to the power spectrum to obtain a frequency response; logarithm processing 8-17 is carried out on the frequency response and the power spectrum by formula (9), and inverse transformation 8-18 is carried out on the result of the logarithm processing by DCT (realized by formula (10)) to obtain the MFCC coefficients. Finally, the differential parameters of the MFCC coefficients are obtained by formula (11), yielding the dynamic characteristics 8-19, so that the MFCC features comprising the MFCC coefficients and the differential parameters are obtained; the MFCC features may also include the log energy, namely 10 times the base-10 logarithm of the sum of squares of the amplitudes within one frame of audio. In the feature dimension reduction 8-2, the feature dimensions of each sampling point of each frame of audio are clustered through clustering 8-21, and the cluster center of each cluster category is used as the feature value of the corresponding sampling points, so that the audio fingerprint 8-22 is obtained.
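The following Python sketch reproduces the spirit of the feature acquisition 8-1 path with librosa, which bundles the framing, windowing, frequency domain transformation, Mel filtering, logarithm and DCT steps internally; all parameter values (pre-emphasis coefficient, frame length, overlap, MFCC order) are assumptions of the sketch, not the values fixed by formulas (2) to (11).

    import librosa
    import numpy as np

    y, sr = librosa.load("audio_to_detect.wav", sr=16000)
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])        # pre-emphasis, coefficient 0.97 assumed

    # framing/windowing, FFT, power spectrum, Mel filtering, log and DCT in one call
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=512, hop_length=256)
    delta1 = librosa.feature.delta(mfcc, order=1)     # first-order differential parameters
    delta2 = librosa.feature.delta(mfcc, order=2)     # second-order differential parameters

    frames = librosa.util.frame(y, frame_length=512, hop_length=256)
    log_energy = 10.0 * np.log10(np.maximum(1e-10, np.sum(frames ** 2, axis=0)))

    features = np.vstack([mfcc, delta1, delta2])      # per-frame MFCC features (fig. 8, 8-19)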
It should be noted that Mel filtering 8-16 in fig. 8 is implemented by a set of Mel-scale triangular filters, and the obtained MFCC coefficients are cepstral parameters in the Mel-scale frequency domain. The Mel scale describes the non-linear auditory perception of audio frequency: the Mel frequency increases monotonically with the audio frequency, as shown in fig. 9, where curve 9-1 depicts this positive relation. In addition, referring to fig. 10, fig. 10 is a schematic diagram of the filtering result of an exemplary set of Mel-scale triangular filters provided by an embodiment of the present application; shown in fig. 10 is the result of Mel filtering by 6 triangular filters defined over the boundary frequencies f(0) to f(7), together with the frequency responses H1(k), H3(k), H5(k) and H6(k) of the triangular filters.
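As a concrete point of reference, the commonly used definition of the Mel scale can be written as follows; the exact constants used by the present embodiment are not stated, so the standard 2595/700 form is assumed here.

    import numpy as np

    def hz_to_mel(f_hz):
        # standard Mel mapping: near-linear below roughly 1 kHz, logarithmic above
        return 2595.0 * np.log10(1.0 + f_hz / 700.0)

    print(hz_to_mel(np.array([100.0, 1000.0, 8000.0])))  # strictly increasing, as curve 9-1 shows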
The following describes how the audio re-check verification service 7-108 of fig. 7 implements audio verification. After the audio fingerprint corresponding to each frame is obtained by the audio fingerprint and other modal vector generation service 7-109 — here, an audio fingerprint corresponding to 12 frames of audio is obtained for the video 8-11, and an audio fingerprint corresponding to 12 frames of audio is also obtained for the video to be compared with the video 8-11 (the video to be compared) — the audio re-check verification service 7-108 constructs, for the two fingerprint sequences, a matrix similarity map 11-1 as shown in fig. 11, and classifies the matrix similarity map 11-1 by a neural network model (for example, a CNN plus XGBoost image classification detection model) to judge the similarity and relation (video detection result) corresponding to the matrix similarity map 11-1. Here, since the region 11-11 of the matrix similarity map 11-1 in fig. 11 contains points that coincide with, or lie close to, the diagonal line, the video 8-11 and the video to be compared with the video 8-11 are determined to be repeated videos.
Correspondingly, when an audio fingerprint corresponding to 12 frames of audio is obtained for the video 8-11 and an audio fingerprint corresponding to 9 frames of audio is obtained for the video to be compared with the video 8-11 (the video to be compared), the audio re-check verification service 7-108 constructs, for the two fingerprint sequences, a matrix similarity map 12-1 as shown in fig. 12, and classifies the matrix similarity map 12-1 with the neural network model to determine the similarity and relation (video detection result) corresponding to the matrix similarity map 12-1. Here, since the similarity value distribution in the matrix similarity map 12-1 of fig. 12 is very discrete, as shown by the region 12-11, the video 8-11 and the video to be compared with the video 8-11 are determined to be non-repetitive videos.
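For illustration, the following sketch builds such an inter-frame similarity matrix from two fingerprint sequences with cosine similarity and applies a crude diagonal heuristic in place of the CNN plus XGBoost classifier; the shapes, thresholds and the heuristic itself are assumptions of the sketch.

    import numpy as np

    def similarity_matrix(fp_a: np.ndarray, fp_b: np.ndarray) -> np.ndarray:
        # fp_a: (12, d) fingerprints of the detected video; fp_b: (n, d) of the compared video
        a = fp_a / np.linalg.norm(fp_a, axis=1, keepdims=True)
        b = fp_b / np.linalg.norm(fp_b, axis=1, keepdims=True)
        return a @ b.T                                   # cosine similarity per frame pair

    def looks_duplicate(sim: np.ndarray, thresh: float = 0.9) -> bool:
        # stand-in for the classifier: high similarity concentrated on the diagonal
        d = min(sim.shape)
        return bool(np.mean(np.diag(sim[:d, :d]) > thresh) > 0.8)

    fp_a = np.random.rand(12, 64)
    fp_b = fp_a + 0.01 * np.random.rand(12, 64)          # near-duplicate audio
    print(looks_duplicate(similarity_matrix(fp_a, fp_b)))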
It can be understood that the embodiment of the present application extracts MFCC features from the audio of a video and performs clustering dimension reduction on them; in the de-duplication process, content recalled as similar based on video content, title or cover picture is then verified with the dimension-reduced audio fingerprints, which improves the accuracy of video de-duplication. Finally, the comparison information of the per-frame MFCC features forms an inter-frame similarity alignment map, and a neural network is adopted to classify this map, so that the frame-granularity time-sequence relation is fully utilized and the accuracy of checking repeated videos is greatly improved. In addition, with the video detection method provided by the embodiment of the present application, the recall rate for videos whose titles and/or pictures are similar but whose audio content differs markedly is increased, the labor input for content auditing can be effectively reduced, the number of repeated videos entering manual audit is greatly reduced, and the processing capacity of video detection is improved.
Continuing with the exemplary structure of the video detection apparatus 255 provided by the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 3, the software modules stored in the video detection apparatus 255 of the memory 250 may include:
a video obtaining module 2551, configured to obtain a video to be detected;
the video recall module 2552 is configured to perform video recall from a video resource library based on the video to be detected to obtain a video to be compared;
the feature obtaining module 2553 is configured to separate audio information corresponding to the video to be detected to obtain an audio to be detected, and extract features of the audio to be detected on audio characteristics to obtain an audio fingerprint to be detected, where the audio characteristics are characteristics of the audio information on hearing;
the feature obtaining module 2553 is further configured to separate audio information corresponding to the video to be compared, obtain an audio to be compared, and extract features of the audio to be compared on the audio characteristics, so as to obtain an audio fingerprint to be compared;
a video detection module 2554, configured to compare the audio fingerprint to be detected with the audio fingerprint to be compared, and determine a video detection result of the video to be detected for the video to be compared based on a comparison result, where the video detection result is a detection result of whether the video to be detected is a repeated video for the video to be compared.
In this embodiment of the application, the feature obtaining module 2553 is further configured to extract multiple sub-frames of audio to be detected from the audio to be detected based on a preset frame unit; extracting the characteristics of each frame of sub-audio to be detected in the multi-frame sub-audio to be detected on the audio characteristics to obtain initial sub-audio to be detected fingerprints; and performing dimensionality reduction on the initial sub audio fingerprint to be detected to obtain a sub audio fingerprint to be detected, so as to obtain a multi-frame sub audio fingerprint to be detected corresponding to the audio to be detected, wherein the audio fingerprint to be detected comprises the multi-frame sub audio fingerprint to be detected.
In this embodiment of the application, the feature obtaining module 2553 is further configured to perform pre-emphasis processing on the audio to be detected, so as to obtain the audio to be framed.
In this embodiment of the application, the feature obtaining module 2553 is further configured to extract the multiple frames of sub-audio to be detected from the audio to be framed based on the preset frame unit.
In this embodiment of the application, the feature obtaining module 2553 is further configured to sample the audio to be detected based on a preset sampling frequency to obtain a plurality of sampling points; and to select, starting from the first sampling point, a preset number of sampling points in sequence to combine into one frame of sub-audio to be detected, and then, starting from the position that is a preset number of overlapped sampling points before the end position of the previous selection, to continue selecting the preset number of sampling points to combine into the next frame of sub-audio to be detected, until the plurality of sampling points are all processed, thereby obtaining the multi-frame sub-audio to be detected, where the preset frame unit is determined based on the preset sampling frequency and the preset number of sampling points.
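A minimal sketch of this overlapped framing rule follows; the frame length of 512 samples and the overlap of 256 samples are illustrative assumptions.

    import numpy as np

    def frame_audio(samples: np.ndarray, frame_len: int = 512, overlap: int = 256) -> np.ndarray:
        # each new frame starts `overlap` samples before the previous frame's end position
        step = frame_len - overlap
        n_frames = 1 + max(0, (len(samples) - frame_len) // step)
        return np.stack([samples[i * step: i * step + frame_len] for i in range(n_frames)])

    frames = frame_audio(np.arange(16000, dtype=np.float32))
    print(frames.shape)  # (61, 512); consecutive frames share 256 sampling points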
In this embodiment of the application, the feature obtaining module 2553 is further configured to perform windowing on each frame of sub audio to be detected in the multiple frames of sub audio to be detected, so as to obtain sub audio to be converted; converting the sub audio to be converted into energy distribution on a frequency domain to obtain a sub frequency spectrum to be detected, and acquiring a power spectrum of the sub frequency spectrum to be detected to obtain a sub power spectrum to be detected; smoothing the sub-to-be-detected power spectrum to obtain a sub-smooth power spectrum; carrying out inverse transformation on the logarithmic energy of the sub-smooth power spectrum, and acquiring audio characteristic parameters of a preset order of an inverse transformation result; and acquiring the difference parameter of the audio characteristic parameter and the frame energy of each frame of the sub audio to be detected, so as to obtain the initial sub audio fingerprint to be detected, which comprises one or more of the audio characteristic parameter, the difference parameter and the frame energy.
In this embodiment of the application, the feature obtaining module 2553 is further configured to remove lowest frequency features from each initial sampling point audio fingerprint in the initial sub-audio fingerprint to be detected to obtain S-1 dimensional features, where the initial sub-audio fingerprint to be detected includes the initial sampling point audio fingerprint of the preset number of sampling points, each initial sampling point audio fingerprint includes S dimensional features, and S is a positive integer greater than 1; performing clustering dimensionality reduction on the S-1 dimensional features based on a preset category number to obtain a clustering category of the preset category number; and determining the clustering center information of the clustering category as the sampling point audio fingerprint of each initial sampling point audio fingerprint, so as to obtain the sub-audio fingerprint to be detected corresponding to the initial sub-audio fingerprint to be detected, wherein the sub-audio fingerprint to be detected comprises the sampling point audio fingerprint with the preset sampling point number.
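For illustration, a minimal sketch of this clustering dimensionality reduction using scikit-learn's KMeans follows; the number of sampling points, the feature dimension S and the preset category number are assumptions of the sketch.

    import numpy as np
    from sklearn.cluster import KMeans

    def reduce_fingerprint(initial_fp: np.ndarray, n_categories: int = 4) -> np.ndarray:
        # initial_fp: (num_sampling_points, S) features; drop the lowest-frequency dimension
        feats = initial_fp[:, 1:]                              # S-1 dimensional features
        km = KMeans(n_clusters=n_categories, n_init=10).fit(feats)
        return km.cluster_centers_[km.labels_]                 # cluster center per sampling point

    sub_fp = reduce_fingerprint(np.random.rand(100, 13))
    print(sub_fp.shape)  # (100, 12): sampling-point audio fingerprints of one frame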
In this embodiment of the application, the video detection module 2554 is further configured to compare each frame of sub audio fingerprints to be detected in the audio fingerprints to be detected with each frame of sub audio fingerprints to be compared in the audio fingerprints to be compared one by one, so as to obtain the comparison result corresponding to each frame of sub audio fingerprints to be detected and each frame of sub audio fingerprints to be compared; when preset rule information exists in the comparison result, determining the video detection result of the video to be detected, which is a repeated video for the video to be compared, wherein the preset rule information is a similarity trend between each frame of sub audio fingerprint to be detected and each frame of sub audio fingerprint to be compared; and when the preset rule information does not exist in the comparison result, determining the video detection result of the video to be detected, which is a non-repetitive video for the video to be compared.
In this embodiment of the application, the video detection module 2554 is further configured to use each frame of sub audio fingerprints to be detected as a one-dimensional attribute of a matrix, use each frame of sub audio fingerprints to be compared as another dimensional attribute of the matrix, use the comparison result as an element of the matrix, and construct a similar matrix; converting the similar matrix into a similar matrix map based on the corresponding relation between a preset similar value and the display color; when the color difference value between each display color at the diagonal position in the similar matrix diagram and a preset color is smaller than a color difference threshold value, determining that the preset rule information exists in the comparison result; and when the color difference value between each display color at the diagonal position in the similarity matrix diagram and a preset color is not less than the color difference threshold value, determining that the preset rule information does not exist in the comparison result.
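The following sketch illustrates this color-based diagonal test; the colormap, the preset color and the color difference threshold are assumptions of the sketch, standing in for the preset correspondence between similarity values and display colors.

    import numpy as np
    from matplotlib import cm

    def rule_information_present(sim: np.ndarray, cd_thresh: float = 0.2) -> bool:
        rgba = cm.viridis(sim)                       # assumed mapping from similarity to display color
        preset = np.array(cm.viridis(1.0)[:3])       # assumed preset color: that of similarity 1.0
        d = min(sim.shape)
        diag = rgba[np.arange(d), np.arange(d), :3]  # display colors at the diagonal positions
        diff = np.linalg.norm(diag - preset, axis=1) # color difference value per diagonal cell
        return bool(np.all(diff < cd_thresh))

    print(rule_information_present(np.eye(12) * 0.9 + 0.05))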
In this embodiment of the application, the video recall module 2552 is further configured to obtain video recall features corresponding to the video to be detected, where the video recall features include one or more of content semantic features, text semantic features, title semantic features, frame image semantic features, and cover image semantic features; acquire features to be recalled respectively corresponding to each video in the video resource library, where the features to be recalled correspond to the video recall features in feature type; determine, from the features to be recalled, target features to be recalled that are similar to the video recall features based on recall similarity values between the video recall features and the respective features to be recalled, where the recall similarity values include one or more of a Euclidean distance, a vector dot product value and a cosine similarity value; and take the videos corresponding to the target features to be recalled in the video resource library as recalled videos, so as to obtain the video to be compared, which belongs to the recalled videos.
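As a small concrete example of these recall similarity values, the following sketch computes all three for a pair of stand-in feature vectors; the dimension and the random data are illustrative assumptions.

    import numpy as np

    a = np.random.rand(128)   # video recall feature of the video to be detected
    b = np.random.rand(128)   # a feature to be recalled from the video resource library

    euclidean = float(np.linalg.norm(a - b))       # smaller value means more similar
    dot_product = float(a @ b)                     # larger value means more similar
    cosine = dot_product / float(np.linalg.norm(a) * np.linalg.norm(b))
    print(euclidean, dot_product, cosine)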
In this embodiment of the application, the video recall module 2552 is further configured to obtain recall feature indexes corresponding to the video recall features and recall feature indexes corresponding to the features to be recalled, where the features to be recalled and the recall feature indexes are in one-to-one correspondence; and respectively taking the matching degree of the recall feature index and each recall feature index as the recall similarity value, and determining the target to-be-recalled features similar to the video recall feature from each to-be-recalled features on the basis of the recall similarity value.
In this embodiment of the present application, the video obtaining module 2551 is further configured to receive a video detection request sent by a task scheduling device, where the video detection request is generated by the task scheduling device in response to a video upload request sent by a video production end device; and responding to the video detection request, and acquiring the video to be detected from a content storage device.
In this embodiment of the application, the video detection apparatus 255 further includes a result processing module 2555, configured to, when the video detection result is that the video to be detected is a repeated video for the video to be compared, send the video detection result to subsequent detection equipment, so that the subsequent detection equipment generates a subsequent detection request for the video detection result, and obtains a target detection result of the video to be detected in response to the subsequent detection request.
In this embodiment of the application, the result processing module 2555 is further configured to, when the video detection result is that the video to be detected is a non-repetitive video for the video to be compared, send the video to be detected to the task scheduling device, so that the task scheduling device pushes the video to be detected to the content consuming device through the content distribution device based on the obtained recommendation information, so that the content consuming device plays the video to be detected.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the video detection method described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a video detection method provided by embodiments of the present application, for example, the video detection method as shown in fig. 4-6.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the present application, when a video to be detected is compared with a recalled video to be compared, the features of the audio information of the video to be detected on the audio characteristics are compared with the features of the audio information of the video to be compared on the audio characteristics to determine the video detection result, i.e. whether the video to be detected is a repeated video with respect to the video to be compared. The audio characteristics are the auditory characteristics of audio information, such as volume, tone quality and timbre, and videos with similar pictures or titles, such as a series of lecture videos, still differ in these audio characteristics; therefore, whether such videos are repeated videos can be accurately identified, and the accuracy of video detection can be improved. In addition, since video recall and video de-duplication verification are layered, each can be adjusted and optimized separately, so that more videos can be recalled, the overall processing capacity of video detection is improved, the number of manually audited videos is reduced, and video auditing efficiency is improved while auditing consumption is reduced.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.