Video searching method, system, server and client

1. A video search method, applied to a server, comprising the following steps:

under the condition of receiving a short video sent by a client, performing face recognition on the characters portrayed in the short video to obtain a character label of the short video;

searching long videos meeting a first condition from all pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

performing voice recognition on the character voices of the short video to obtain a voice tag of the short video;

searching for long videos meeting a second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

performing target detection on the key picture to obtain an object label of the short video;

searching for a long video meeting a third condition from the long videos meeting the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

and sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client.

2. The method of claim 1, wherein the performing face recognition on the characters portrayed in the short video to obtain the character labels of the short video comprises:

performing face recognition on the portrayed characters of the short video to obtain the identity of each portrayed character and the number of appearances of each portrayed character;

sorting the portrayed characters in descending order of the number of appearances to obtain a character sequence;

selecting the top n characters in the character sequence as main characters; n is a positive integer;

and labeling the identities of the main characters as the character labels of the short video.

3. The method of claim 1, wherein the performing voice recognition on the character voices of the short video to obtain the voice tag of the short video comprises:

performing voice recognition on the character voices of the short video to obtain the sound characteristics and the lines of the portrayed characters;

performing clustering analysis on the sound characteristics and the lines to obtain keyword voices;

and marking the keyword voice as a voice tag of the short video.

4. The method according to claim 1, wherein the performing the object detection on the key picture to obtain the object label of the short video comprises:

inputting the key picture into a preset target detection model to obtain object characteristics output by the target detection model; the object characteristics include a type of object;

and marking the type of the object as an object label of the short video.

5. The method according to claim 1, wherein before sending the play address of the long video satisfying the first condition, the second condition, and the third condition to the client, the method further comprises:

performing content identification on the short video to obtain the content style of the short video;

searching for a long video meeting a fourth condition from the long videos meeting the first condition, the second condition and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video;

the sending the play address of the long video meeting the first condition, the second condition and the third condition to the client includes:

and sending the playing address of the long video meeting the first condition, the second condition, the third condition and the fourth condition to the client.

6. A video search method, applied to a client, comprising the following steps:

under the condition that a search request of a user is received, uploading the short video indicated by the search request to a server; the server is used for searching the playing address of the long video to which the short video belongs;

and displaying the playing address to the user through a preset interface under the condition of receiving the playing address sent by the server.

7. A server, comprising:

the face recognition unit is used for carrying out face recognition on the characters portrayed in the short video, under the condition of receiving the short video sent by the client, to obtain the character labels of the short video;

the first searching unit is used for searching the long video meeting the first condition from the pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

the voice recognition unit is used for carrying out voice recognition on the character voices of the short video to obtain a voice tag of the short video;

the second searching unit is used for searching the long video meeting the second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

the picture extracting unit is used for extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

the target detection unit is used for carrying out target detection on the key picture to obtain an object label of the short video;

a third searching unit, configured to search for a long video that satisfies a third condition from among long videos that satisfy the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

and the sending unit is used for sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client.

8. The server of claim 7, further comprising:

the content identification unit is used for carrying out content identification on the short video to obtain the content style of the short video;

a fourth searching unit, configured to search for a long video satisfying a fourth condition from among long videos satisfying the first condition, the second condition, and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video;

the sending unit is further configured to send the play address of the long video meeting the first condition, the second condition, the third condition, and the fourth condition to the client.

9. A client, comprising:

an uploading unit, used for uploading the short video indicated by a search request to a server under the condition that the search request of a user is received; the server is used for searching the playing address of the long video to which the short video belongs;

and the display unit is used for displaying the play address to the user through a preset interface under the condition of receiving the play address sent by the server.

10. A video search system, comprising:

a server and a client;

the client is used for uploading the short video indicated by the search request to the server under the condition of receiving the search request of the user;

the server is configured to:

performing face recognition on the characters portrayed in the short video to obtain character labels of the short video;

searching long videos meeting a first condition from all pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

performing voice recognition on the character voices of the short video to obtain a voice tag of the short video;

searching for long videos meeting a second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

performing target detection on the key picture to obtain an object label of the short video;

searching for a long video meeting a third condition from the long videos meeting the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client;

the client is further used for displaying the playing address to the user through a preset interface.

Background

Existing short video platforms publish more and more episode clips, and the authors of these short videos usually do not disclose the name of the episode in order to attract traffic, so users cannot find the original video to which a video clip (i.e., a short video) belongs by searching with text (i.e., keywords, such as the name of the episode).

At present, the prior art discloses a method for video retrieval through video segments, which can retrieve the original video to which an episode clip belongs by using a video segment. However, the method relies on a maximum matching algorithm and an optimal matching algorithm, which are complex and time-consuming, so the efficiency of video search is reduced.

Disclosure of Invention

The application provides a video searching method, a video searching system, a server and a client, and aims to improve the efficiency of video searching.

In order to achieve the above object, the present application provides the following technical solutions:

a video search method, applied to a server, comprising the following steps:

under the condition of receiving a short video sent by a client, performing face recognition on the characters portrayed in the short video to obtain a character label of the short video;

searching long videos meeting a first condition from all pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

performing voice recognition on the character voices of the short video to obtain a voice tag of the short video;

searching for long videos meeting a second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

performing target detection on the key picture to obtain an object label of the short video;

searching for a long video meeting a third condition from the long videos meeting the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

and sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client.

Optionally, the performing face recognition on the portrayed characters of the short video to obtain the character tag of the short video includes:

performing face recognition on the portrayed characters of the short video to obtain the identity of each portrayed character and the number of appearances of each portrayed character;

sorting the portrayed characters in descending order of the number of appearances to obtain a character sequence;

selecting the top n characters in the character sequence as main characters; n is a positive integer;

and labeling the identities of the main characters as the character tags of the short video.

Optionally, the performing voice recognition on the character voices of the short video to obtain the voice tag of the short video includes:

performing voice recognition on the character voices of the short video to obtain the sound characteristics and the lines of the portrayed characters;

performing clustering analysis on the sound characteristics and the lines to obtain keyword voices;

and marking the keyword voice as a voice tag of the short video.

Optionally, the performing target detection on the key picture to obtain the object tag of the short video includes:

inputting the key picture into a preset target detection model to obtain object characteristics output by the target detection model; the object characteristics include a type of object;

and marking the type of the object as an object label of the short video.

Optionally, before sending the play address of the long video meeting the first condition, the second condition, and the third condition to the client, the method further includes:

performing content identification on the short video to obtain the content style of the short video;

searching for a long video meeting a fourth condition from the long videos meeting the first condition, the second condition and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video;

the sending the play address of the long video meeting the first condition, the second condition and the third condition to the client includes:

and sending the playing address of the long video meeting the first condition, the second condition, the third condition and the fourth condition to the client.

A video search method, applied to a client, comprising the following steps:

under the condition that a search request of a user is received, uploading the short video indicated by the search request to a server; the server is used for searching the playing address of the long video to which the short video belongs;

and displaying the playing address to the user through a preset interface under the condition of receiving the playing address sent by the server.

A server, comprising:

the face recognition unit is used for carrying out face recognition on the characters portrayed in the short video, under the condition of receiving the short video sent by the client, to obtain the character labels of the short video;

the first searching unit is used for searching the long video meeting the first condition from the pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

the voice recognition unit is used for carrying out voice recognition on the character voices of the short video to obtain a voice tag of the short video;

the second searching unit is used for searching the long video meeting the second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

the picture extracting unit is used for extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

the target detection unit is used for carrying out target detection on the key picture to obtain an object label of the short video;

a third searching unit, configured to search for a long video that satisfies a third condition from among long videos that satisfy the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

and the sending unit is used for sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client.

Optionally, the server further includes:

the content identification unit is used for carrying out content identification on the short video to obtain the content style of the short video;

a fourth searching unit, configured to search for a long video satisfying a fourth condition from among long videos satisfying the first condition, the second condition, and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video;

the sending unit is further configured to send the play address of the long video meeting the first condition, the second condition, the third condition, and the fourth condition to the client.

A client, comprising:

an uploading unit, used for uploading the short video indicated by a search request to a server under the condition that the search request of a user is received; the server is used for searching the playing address of the long video to which the short video belongs;

and the display unit is used for displaying the play address to the user through a preset interface under the condition of receiving the play address sent by the server.

A video search system, comprising:

a server and a client;

the client is used for uploading the short video indicated by the search request to the server under the condition of receiving the search request of the user;

the server is configured to:

performing face recognition on the characters portrayed in the short video to obtain character labels of the short video;

searching long videos meeting a first condition from all pre-stored long videos; the first condition is: the character label marked in advance on the long video covers the character label of the short video;

performing voice recognition on the character voices of the short video to obtain a voice tag of the short video;

searching for long videos meeting a second condition from the long videos meeting the first condition; the second condition is: the voice tag marked in advance for the long video covers the voice tag of the short video;

extracting, from the frames of the short video, the picture with the longest playing time as a key picture of the short video;

performing target detection on the key picture to obtain an object label of the short video;

searching for a long video meeting a third condition from the long videos meeting the first condition and the second condition; the third condition is: the object label marked in advance in the long video covers the object label of the short video;

sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client;

the client is further used for displaying the playing address to the user through a preset interface.

According to the technical scheme, under the condition that the short video sent by the client is received, face recognition is performed on the characters portrayed in the short video to obtain the character labels of the short video. A long video meeting a first condition is searched for from the pre-stored long videos, the first condition being that the character labels marked in advance on the long video cover the character labels of the short video. Voice recognition is performed on the character voices of the short video to obtain the voice tag of the short video, and a long video meeting a second condition is searched for from the long videos meeting the first condition, the second condition being that the voice tag marked in advance for the long video covers the voice tag of the short video. The picture with the longest playing time is extracted from the frames of the short video as a key picture of the short video, target detection is performed on the key picture to obtain the object label of the short video, and a long video meeting a third condition is searched for from the long videos meeting the first condition and the second condition, the third condition being that the object label marked in advance for the long video covers the object label of the short video. Finally, the playing address of the long video meeting the first condition, the second condition and the third condition is sent to the client. Compared with the prior art, the operations involved (face recognition, voice recognition and target detection) are algorithmically simple and computationally efficient, so the scheme can effectively improve the efficiency of video search.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.

Fig. 1a is a schematic architecture diagram of a video search system according to an embodiment of the present application;

fig. 1b is a schematic diagram of a video search method according to an embodiment of the present application;

fig. 1c is a schematic diagram of another video search method provided in the embodiment of the present application;

fig. 1d is a schematic diagram of another video search method provided in the embodiment of the present application;

fig. 2 is a schematic diagram of another video search method provided in an embodiment of the present application;

fig. 3 is a schematic architecture diagram of a server according to an embodiment of the present application;

fig. 4 is a schematic diagram of another video search method provided in the embodiment of the present application;

fig. 5 is a schematic diagram of an architecture of a client according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

As shown in fig. 1a, an architecture diagram of a video search system provided in the embodiment of the present application includes:

a server 100 and a client 200.

The interaction process between the server and the client, as shown in fig. 1b, 1c, and 1d, includes the following steps:

s101: and in the case of receiving a search request of a user, transmitting the short video indicated by the search request to the server.

Wherein the short video comprises a video clip of an episode video.

S102: the server carries out face recognition on the exhibition characters of the short videos to obtain the identities of the exhibition characters and the times of appearance of the exhibition characters.

Wherein the identity of a portrayed character includes the real name of the actor.

S103: the server sorts the exhibition characters according to the sequence of the times of departure from high to low to obtain character sequences.

S104: the server selects the front n decorative characters in the character sequence as main characters.

Wherein n is a positive integer.

S105: the server labels the identity of the main character as a character tag for the short video.

S106: the server acquires the long video of the character tag covering the short video from the database and stores the acquired long video into the data table.

Wherein the long video includes an episode video that is not clipped. The data table is used for recording the preset number and the playing address of the long video.

It should be noted that the character tags of the long video can be manually pre-labeled, or labeling of the character tags can be implemented through the process described in S102-S105 above.
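
The "covers" relation used by the first, second and third conditions can be read as: the long video's pre-labeled tag set is a superset of the corresponding tag set of the short video. A minimal sketch of S106 under that reading, with the database represented as an in-memory list and the field names (number, play_address, character_tags) chosen purely for illustration:

```python
def build_data_table(database, short_character_tags):
    """Sketch of S106: keep the long videos whose pre-labeled character
    tags cover (i.e. are a superset of) the character tags of the short
    video, and record their preset numbers and playing addresses."""
    data_table = []
    for long_video in database:
        if set(long_video["character_tags"]) >= set(short_character_tags):
            data_table.append({
                "number": long_video["number"],
                "play_address": long_video["play_address"],
            })
    return data_table
```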

S107: and the server performs voice recognition on the character dubbing of the short video to obtain the voice characteristics and the lines of the decorated characters.

Sound characteristics include, but are not limited to, pitch, loudness, and timbre.

S108: and the server performs cluster analysis on the sound characteristics and the lines of the decoratied characters to obtain keyword voices.

S109: and the server marks the keyword voice as a voice tag of the short video.

S110: and the server judges whether the voice label of the long video covers the voice label of the short video or not aiming at each long video in the data table.

If the voice tag of the long video covers the voice tag of the short video, S111 is performed, otherwise S112 is performed.

The voice tag of the long video may be manually pre-labeled, or the labeling of the voice tag may be implemented through the process described in S107-S109 above.

S111: and the server identifies the content of the short video to obtain the content style of the short video.

After execution of S111, execution of S113 is continued.

S112: the server removes the long video from the data table.

S113: the server determines, for each long video in the data table, whether the content style of the long video is the same as the content style of the short video.

If the content style of the long video is the same as the content style of the short video, S114 is performed, otherwise S115 is performed.

The content style of the long video may be manually pre-labeled, or the labeling of the content style is implemented through the process described in S111 above.

S114: the server identifies the playing time length of each frame of picture in the short video.

After execution of S114, execution continues with S116.

S115: the server removes the long video from the data table.

S116: and the server selects the picture with the longest playing time from the frames of pictures as the key picture of the short video.

S117: and the server inputs the key picture into a preset target detection model to obtain the object characteristics output by the target detection model.

The object characteristics include the type of the object. For example, if the key picture contains wine and the type of the wine is "red wine", the object characteristic output by the target detection model is "red wine".

It should be noted that the target detection model is common knowledge familiar to those skilled in the art, and will not be described herein.

S118: the server labels the type of the object as an object tag for short video.

S119: and the server judges whether the object label of the long video covers the object label of the short video or not aiming at each long video in the data table.

If the object label of the long video covers the object label of the short video, S120 is executed, otherwise S121 is executed.

The object label of the long video can be manually pre-labeled, or the labeling of the object label is realized through the process described in S116-S118 above.

S120: and the server sends the playing address of the long video to the client.

S121: the server removes the long video from the data table.

S122: and the client displays the playing address of the long video to the user through a preset interface.

In summary, the user uploads the short video to the server through the client. The server performs a series of operations such as face recognition, voice recognition and target detection on the short video to obtain the tags of the short video (i.e., the character tags, the voice tags and the object tags), and matches the tags of the short video against the tags of the long videos, so as to determine the episode video to which the short video belongs (i.e., the long video whose tags cover the tags of the short video). Compared with the prior art, these operations are algorithmically simple and computationally efficient, so the scheme can effectively improve the efficiency of video search. In addition, matching by character tags, voice tags and object tags essentially uses the main characters of the video, the keyword voices and the object features of the key picture as the reference basis for judging whether a long video is the episode video of the short video, which is highly objective and can effectively improve the accuracy of the video search result.

As shown in fig. 2, a schematic diagram of a video search method provided in an embodiment of the present application is given. The method is applied to a server and includes the following steps:

s201: and under the condition of receiving the short video sent by the client, carrying out face recognition on the decorator characters of the short video to obtain the character labels of the short video.

S202: and searching the long video meeting the first condition from the pre-stored long videos.

Wherein the first condition is: the character labels marked in advance on the long video cover the character labels of the short video.

S203: and carrying out voice recognition on the character dubbing of the short video to obtain a voice tag of the short video.

S204: and searching for the long video meeting the second condition from the long videos meeting the first condition.

Wherein the second condition is: and the voice tag of the long video which is labeled in advance covers the voice tag of the short video.

S205: and extracting the picture with the longest playing time from each frame picture of the short video to be used as a key picture of the short video.

S206: and carrying out target detection on the key picture to obtain an object label of the short video.

S207: and searching for the long video meeting the third condition from the long videos meeting the first condition and the second condition.

Wherein the third condition is: the object label marked in advance in the long video covers the object label of the short video.

Optionally, content identification can be performed on the short video to obtain the content style of the short video, and a long video meeting a fourth condition can be searched for from the long videos meeting the first condition, the second condition and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video.

S208: and sending the playing address of the long video meeting the first condition, the second condition and the third condition to the client.

Optionally, the play address of the long video meeting the first condition, the second condition, the third condition, and the fourth condition may also be sent to the client.
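
The four conditions act as a progressively narrowing filter: each step only examines the long videos that survived the previous step, which is what keeps the search cheap. A minimal sketch of that funnel, assuming each video is described by plain tag sets, a style string and a playing address (the field names are illustrative only):

```python
def search_long_videos(long_videos, short):
    """Sketch of S202/S204/S207 plus the optional fourth condition:
    progressively narrow the pre-stored long videos by tag coverage.

    Each video is assumed to be a dict with 'character_tags',
    'voice_tags', 'object_tags' (sets), 'style' and 'play_address'.
    """
    candidates = [v for v in long_videos
                  if v["character_tags"] >= short["character_tags"]]   # first condition
    candidates = [v for v in candidates
                  if v["voice_tags"] >= short["voice_tags"]]           # second condition
    candidates = [v for v in candidates
                  if v["object_tags"] >= short["object_tags"]]         # third condition
    candidates = [v for v in candidates
                  if v["style"] == short["style"]]                     # optional fourth condition
    return [v["play_address"] for v in candidates]
```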

In summary, compared with the prior art, the series of operations such as face recognition, voice recognition and target detection are algorithmically simple and computationally efficient, so the scheme of this embodiment can effectively improve the efficiency of video search. In addition, matching by character tags, voice tags and object tags essentially uses the main characters of the video, the keyword voices and the object features of the key picture as the reference basis for judging whether a long video is the episode video of the short video, which is highly objective and can effectively improve the accuracy of the video search result.

Corresponding to the video search method provided by the embodiment of the present application, an embodiment of the present application further provides a server.

As shown in fig. 3, the server provided in the embodiment of the present application includes:

The face recognition unit 301 is configured to perform face recognition on the characters portrayed in the short video to obtain a character tag of the short video when the short video sent by the client is received.

The face recognition unit 301 is specifically configured to: perform face recognition on the portrayed characters of the short video to obtain the identity of each portrayed character and the number of appearances of each portrayed character; sort the portrayed characters in descending order of the number of appearances to obtain a character sequence; select the top n characters in the character sequence as main characters, where n is a positive integer; and label the identities of the main characters as the character tags of the short video.

A first searching unit 302, configured to search for a long video meeting a first condition from pre-stored long videos. The first condition is: the character labels marked in advance on the long video cover the character labels of the short video.

A voice recognition unit 303, configured to perform voice recognition on the character voices of the short video to obtain a voice tag of the short video.

The voice recognition unit 303 is specifically configured to: perform voice recognition on the character voices of the short video to obtain the sound characteristics and the lines of the portrayed characters; perform cluster analysis on the sound characteristics and the lines to obtain keyword voices; and label the keyword voices as the voice tags of the short video.

A second searching unit 304, configured to search for a long video satisfying a second condition from among the long videos satisfying the first condition. The second condition is: and the voice tag of the long video which is labeled in advance covers the voice tag of the short video.

A picture extracting unit 305, configured to extract, from the frames of the short video, the picture with the longest playing time as a key picture of the short video.

And the target detection unit 306 is configured to perform target detection on the key picture to obtain an object tag of the short video.

The target detection unit 306 is specifically configured to: input the key picture into a preset target detection model to obtain the object characteristics output by the target detection model, the object characteristics including the type of the object; and label the type of the object as the object label of the short video.

A third searching unit 307, configured to search for a long video satisfying a third condition from among the long videos satisfying the first condition and the second condition. The third condition is: the object label marked in advance in the long video covers the object label of the short video.

And the content identification unit 308 is configured to perform content identification on the short video to obtain a content style of the short video.

A fourth searching unit 309, configured to search for a long video satisfying a fourth condition from among long videos satisfying the first condition, the second condition, and the third condition; the fourth condition is: the content style marked in advance for the long video is the same as the content style of the short video.

A sending unit 310, configured to send the play address of the long video meeting the first condition, the second condition, and the third condition to the client.

Wherein, the sending unit 310 is further configured to: and sending the playing address of the long video meeting the first condition, the second condition, the third condition and the fourth condition to the client.

In summary, compared with the prior art, the series of operations such as face recognition, voice recognition and target detection are algorithmically simple and computationally efficient, so the scheme of this embodiment can effectively improve the efficiency of video search. In addition, matching by character tags, voice tags and object tags essentially uses the main characters of the video, the keyword voices and the object features of the key picture as the reference basis for judging whether a long video is the episode video of the short video, which is highly objective and can effectively improve the accuracy of the video search result.

In addition, as shown in fig. 4, a schematic diagram of another video search method provided in the embodiment of the present application is given. The method is applied to a client and includes the following steps:

s401: and in the case of receiving a search request of a user, uploading the short video indicated by the search request to a server.

The server is used for searching the playing address of the long video to which the short video belongs.

S402: and under the condition of receiving the playing address sent by the server, displaying the playing address to the user through a preset interface.

In summary, a user uploads a short video to the server through the client, the server searches for the playing address of the long video to which the short video belongs, and after the playing address fed back by the server is received, the client displays the playing address to the user through a preset interface. The video search work is not undertaken by the client but by the server, which has higher computing power, so the efficiency of video search is remarkably improved.

Corresponding to the video search method provided by the embodiment of the application, the embodiment of the application also provides a client.

As shown in fig. 5, an architecture diagram of a client provided in the embodiment of the present application includes:

an uploading unit 501, configured to, in a case that a search request of a user is received, upload a short video indicated by the search request to a server; the server is used for searching the playing address of the long video to which the short video belongs.

The presentation unit 502 is configured to, in a case that the play address sent by the server is received, present the play address to the user through a preset interface.

In summary, a user uploads a short video to the server through the client, the server searches for the playing address of the long video to which the short video belongs, and after the playing address fed back by the server is received, the client displays the playing address to the user through a preset interface. The video search work is not undertaken by the client but by the server, which has higher computing power, so the efficiency of video search is remarkably improved.

The functions described in the methods of the embodiments of the present application, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on such understanding, the part of the technical solutions of the embodiments of the present application that contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.

The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
