Method and device for processing voice semantic model


1. A method for processing a voice semantic model, comprising the following steps:

deploying a voice semantic model;

synchronizing a menu standard question file to an application program of a target tenant, so that the application program of the target tenant matches a local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model;

wherein the local menu file at least comprises: a menu name and a menu identifier.

2. The method of claim 1, wherein synchronizing the menu standard question file to the application program of the target tenant comprises:

generating a menu standard question file according to application programs of different types of target tenants, wherein the menu standard question file at least comprises: a menu name and a standard question identifier;

and synchronizing the menu standard question file to the corresponding application program of the target tenant according to a preset pushing mode.

3. The method according to claim 1 or 2, further comprising, after enabling the voice semantic model:

acquiring voice information received by an application program of the target tenant;

recognizing the voice information according to the voice semantic model to obtain a real-time message, wherein the real-time message at least comprises: a standard question identifier, a menu name corresponding to the standard question identifier, and a confidence score;

and sending the real-time message to the application program of the target tenant, so that the application program of the target tenant matches a corresponding menu identifier from the local menu file according to the standard question identifier in the real-time message, and performs menu display or an interface jump according to the menu identifier.

4. The method of claim 3, wherein recognizing the voice information according to the voice semantic model to obtain a real-time message comprises:

inputting the voice information into the voice semantic model, and recognizing, by the voice semantic model, a standard question identifier corresponding to the voice information;

determining a menu name and a confidence score corresponding to the standard question identifier;

and generating the real-time message according to the standard question identifier, the menu name corresponding to the standard question identifier, and the confidence score.

5. The method of claim 3, further comprising, prior to deploying the voice semantic model:

obtaining an updated corpus table, wherein the updated corpus table at least comprises: an extended question table, a standard question table and a menu name table;

and training the voice semantic model with the corpora in the updated corpus table, wherein the voice semantic model is trained separately with the corpora corresponding to the extended question table, the standard question table and the menu name table, and the models obtained by training on the different corpus tables are cross-validated until the voice semantic model finally used for deployment is obtained.

6. The method according to claim 5, further comprising, before obtaining the updated corpus table:

obtaining a corpus file of an application program of the target tenant;

extracting corpus information from the corpus file, wherein the corpus information at least comprises: a menu name, a standard question, and an extended question corresponding to the standard question;

and updating a corpus table according to the corpus information.

7. The method according to claim 5, further comprising, before obtaining the updated corpus table:

setting a local menu file of the application program of the target tenant, wherein the local menu file at least comprises: a menu name and a menu identifier.

8. An apparatus for processing a voice semantic model, comprising:

a deployment module, configured to deploy a voice semantic model;

a synchronization module, configured to synchronize a menu standard question file to an application program of a target tenant, so that the application program of the target tenant matches a local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model;

wherein the local menu file at least comprises: a menu name and a menu identifier.

9. A computer-readable storage medium comprising a stored program, wherein, when the program runs, a device in which the computer-readable storage medium is located is controlled to execute the method for processing a voice semantic model according to any one of claims 1 to 7.

10. A processor configured to run a program, wherein, when running, the program executes the method for processing a voice semantic model according to any one of claims 1 to 7.

Background

In recent years, with the rise of deep learning, speech recognition and natural language processing have developed rapidly, bringing fundamental changes to the interaction mode of financial products: in addition to the necessary GUIs, financial systems and business products increasingly interact with users through a CUI (Conversational User Interface), by voice or text. At present, banks operate many customer-facing financial APPs, covering mobile banking, lifestyle applications, online lending and other businesses, as well as clerk-facing applications for mobile marketing, financial business handling and comprehensive services. The complexity of an application GUI shows in its navigation and hierarchy, which are determined by the GUI structure; the functions, menu levels and interactive controls are complex, users must search and operate through deep menu levels, and the learning cost is very high. Voice is one of the most natural human interaction modes and has the advantages of being direct, clear and fast. Voice instructions aim to reach a menu directly by means of speech recognition and semantic analysis, eliminating the hierarchical relations, shortening the user's operation path and improving operation convenience. In the era of intelligent finance, more and more foreground APPs are required to provide a voice instruction service for customers. From the perspective of the voice semantic middle platform service, all such foreground APPs can uniformly be regarded as APP tenants of the voice semantic middle platform service.

In the process of the voice semantic middle platform providing the voice instruction service for each APP tenant, the main problem encountered is as follows: in the internet finance era, in order to improve user experience, APP layouts are continuously adjusted and optimized, and the menus are continuously adjusted with them, so the voice semantic middle platform service must adapt to frequent menu changes across all APP tenants. The technical scheme currently adopted by the voice semantic middle platform to adapt to an APP is this: the APP tenant knowledge team is responsible for sorting the newly added, changed and deleted menus and the corresponding extended questions, and submitting them to the voice semantic middle platform service team. The voice semantic middle platform model development team is responsible for training the model on the corpus provided by the tenant and, after training is finished, putting the model online while providing the menus and the corresponding standard question IDs to the APP tenant application development team. The APP tenant application development team is responsible for updating the correspondence between the standard question IDs and the menu IDs, manually triggering the loading of the corresponding file, and starting the new configuration file so that the new model takes effect. The problem with this approach is that the manual operations of the APP tenant are closely coupled to the training of the voice semantic middle platform model: only after receiving the standard question IDs of the new voice semantic model can the APP tenant perform the upgrade configuration and reload the files to bring the new model into effect.

In addition, the existing implementation has the defect that the manual operations of the APP tenant are highly coupled to the taking effect of the voice semantic middle platform model. The specific flow is: the APP tenant provides the menus and their corresponding extended questions to the voice semantic middle platform; the voice semantic middle platform performs model training on this corpus and provides the menus and the corresponding standard question IDs to the APP tenant; and the APP tenant, after receiving the menus and corresponding standard question IDs from the voice semantic service, performs the upgrade configuration and reloads the files, so that the new model can take effect.

In view of the above problems, no effective solution has been proposed.

Disclosure of Invention

Embodiments of the invention provide a method and a device for processing a voice semantic model, so as to at least solve the technical problem in the prior art that an APP tenant must trigger a new voice semantic model to take effect by manual operation.

According to an aspect of the embodiments of the present invention, there is provided a method for processing a voice semantic model, including: deploying a voice semantic model; and synchronizing a menu standard question file to an application program of a target tenant, so that the application program of the target tenant matches a local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model; wherein the local menu file at least comprises: a menu name and a menu identifier.

Optionally, synchronizing the menu standard question file to the application program of the target tenant includes: generating a menu standard question file according to application programs of different types of target tenants, wherein the menu standard question file at least comprises: a menu name and a standard question identifier; and synchronizing the menu standard question file to the application program of the corresponding target tenant according to a preset pushing mode.

Optionally, after the voice semantic model is enabled, the method further includes: acquiring voice information received by the application program of the target tenant; recognizing the voice information according to the voice semantic model to obtain a real-time message, wherein the real-time message at least comprises: a standard question identifier, a menu name corresponding to the standard question identifier, and a confidence score; and sending the real-time message to the application program of the target tenant, so that the application program of the target tenant matches a corresponding menu identifier from the local menu file according to the standard question identifier in the real-time message, and performs menu display or an interface jump according to the menu identifier.

Optionally, recognizing the voice information according to the voice semantic model to obtain a real-time message includes: inputting the voice information into the voice semantic model, and recognizing, by the voice semantic model, a standard question identifier corresponding to the voice information; determining a menu name and a confidence score corresponding to the standard question identifier; and generating the real-time message according to the standard question identifier, the menu name corresponding to the standard question identifier, and the confidence score.

Optionally, before deploying the voice semantic model, the method further includes: obtaining an updated corpus table, wherein the updated corpus table at least comprises: an extended question table, a standard question table and a menu name table; and training the voice semantic model with the corpora in the updated corpus table, wherein the voice semantic model is trained separately with the corpora corresponding to the extended question table, the standard question table and the menu name table, and the models obtained by training on the different corpus tables are cross-validated until the voice semantic model finally used for deployment is obtained.

Optionally, before obtaining the updated corpus table, the method includes: obtaining a corpus file of the application program of the target tenant; extracting corpus information from the corpus file, wherein the corpus information at least comprises: a menu name, a standard question, and an extended question corresponding to the standard question; and updating the corpus table according to the corpus information.

Optionally, before obtaining the updated corpus table, the method further includes: setting a local menu file of the application program of the target tenant, wherein the local menu file at least comprises: a menu name and a menu identifier.

According to another aspect of the embodiments of the present invention, there is also provided an apparatus for processing a voice semantic model, including: a deployment module, configured to deploy a voice semantic model; and a synchronization module, configured to synchronize a menu standard question file to an application program of a target tenant, so that the application program of the target tenant matches a local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model; wherein the local menu file at least comprises: a menu name and a menu identifier.

According to another aspect of the embodiments of the present invention, there is further provided a computer-readable storage medium comprising a stored program, wherein, when the program runs, a device in which the computer-readable storage medium is located is controlled to execute the method for processing a voice semantic model described in any one of the above.

According to another aspect of the embodiments of the present invention, there is further provided a processor configured to run a program, wherein, when running, the program executes the method for processing a voice semantic model described in any one of the above.

In the embodiments of the invention, a voice semantic model is deployed, and a menu standard question file is synchronized to the application program of a target tenant, so that the application program of the target tenant matches its local menu file against the menu standard question file, generates a menu association file, and enables the voice semantic model. Because synchronizing the menu standard question file is sufficient for the application program of the target tenant to enable the voice semantic model, the APP tenant no longer needs to trigger a new voice semantic model to take effect by manual operation; the model takes effect in quasi-real time after deployment. This achieves the technical effect of providing a more efficient, uniform and automatic voice instruction service for each APP tenant, and thereby solves the technical problem in the prior art that an APP tenant triggers a new voice semantic model to take effect by manual operation.

Drawings

The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:

FIG. 1 is a flow diagram of a method for processing a voice semantic model according to an embodiment of the invention;

FIG. 2 is a schematic diagram of the voice semantic model correlation tables according to an alternative embodiment of the present invention;

FIG. 3 is a flow diagram of model training and standard question file synchronization in accordance with an alternative embodiment of the present invention;

FIG. 4 is a flow diagram of a voice semantic instruction service in accordance with an alternative embodiment of the present invention;

FIG. 5 is a schematic diagram of a processing apparatus of a voice semantic model according to an embodiment of the present invention.

Detailed Description

In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

Example 1

In accordance with an embodiment of the present invention, there is provided a method for processing a voice semantic model. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in a different order.

FIG. 1 is a flowchart of a method for processing a voice semantic model according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:

step S102, deploying a voice semantic model;

step S104, synchronizing a menu standard question file to an application program of a target tenant, so that the application program of the target tenant matches the local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model;

wherein the local menu file at least comprises: a menu name and a menu identifier.

The menu association file maps the menu identifier, the standard question identifier and the menu name to one another. Optionally, the application program of the target tenant is an APP tenant.
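To make this association step concrete, the following Python sketch shows one way the tenant side could join its local menu file (menu name, menu ID) with the synchronized menu standard question file (menu name, standard question ID) to produce the menu association file. The file names, column names and CSV layout are illustrative assumptions, not details specified by the patent.

```python
import csv

def build_menu_association(local_menu_path, standard_question_path, out_path):
    """Join the tenant's local menu file with the synchronized menu standard
    question file on menu name, producing the menu association file
    (menu ID, standard question ID, menu name)."""
    # Assumed layout: local menu file has columns menu_name, menu_id.
    with open(local_menu_path, newline="", encoding="utf-8") as f:
        menu_id_by_name = {row["menu_name"]: row["menu_id"]
                           for row in csv.DictReader(f)}

    # Assumed layout: standard question file has columns menu_name, std_question_id.
    with open(standard_question_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["menu_id", "std_question_id", "menu_name"])
        writer.writeheader()
        for row in rows:
            menu_id = menu_id_by_name.get(row["menu_name"])
            if menu_id is not None:  # names with no local match are skipped
                writer.writerow({"menu_id": menu_id,
                                 "std_question_id": row["std_question_id"],
                                 "menu_name": row["menu_name"]})
```

Because the join key is the menu name rather than any ID handed over offline, regenerating this file whenever a new standard question file arrives is what lets the new model take effect without manual tenant-side configuration.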

It should be noted that the above embodiment may be applied to a voice semantic middle platform, that is, the above steps are implemented on the voice semantic middle platform.

Through the above steps, the menu standard question file is synchronized to the application program of the target tenant, so that the application program of the target tenant can enable the voice semantic model. This removes the need for the APP tenant to trigger the new voice semantic model to take effect by manual operation, so the model takes effect in quasi-real time after deployment; the technical effect of providing a more efficient, uniform and automatic voice instruction service for each APP tenant is achieved, and the technical problem in the prior art that an APP tenant triggers a new voice semantic model to take effect by manual operation is solved.

Optionally, synchronizing the menu standard question file to the application program of the target tenant includes: generating a menu standard question file according to application programs of different types of target tenants, wherein the menu standard question file at least comprises: a menu name and a standard question identifier; and synchronizing the menu standard question file to the application program of the corresponding target tenant according to a preset pushing mode.

In an optional implementation, the menu names and the newly added/modified standard question identifiers may be extracted from the menu name table, and menu standard question files are then generated according to the application programs of the different types of target tenants, where each menu standard question file includes the menu name, the standard question identifier, and so on. It should be noted that, in a specific implementation, the menu standard question file includes, but is not limited to, the menu name and the standard question identifier.

The preset pushing manner includes, but is not limited to, a day-end file push, a quasi-real-time file push, and the like. For example, the menu standard question file may be synchronized to the application program of the corresponding target tenant through a day-end push, or through a quasi-real-time file push. Through this implementation, the menu standard question file can be flexibly synchronized to the application program of the corresponding target tenant.

Optionally, after the voice semantic model is enabled, the method further includes: acquiring voice information received by the application program of the target tenant; recognizing the voice information according to the voice semantic model to obtain a real-time message; and sending the real-time message to the application program of the target tenant, so that the application program of the target tenant matches a corresponding menu identifier from the local menu file according to the standard question identifier in the real-time message, and performs menu display or an interface jump according to the menu identifier.

The real-time message includes, but is not limited to, a standard question identifier, a menu name, a corresponding confidence score, and the like.

In an optional implementation, the voice information received by the application program of the target tenant is first acquired; the voice information is then recognized with the voice semantic model to obtain a real-time message; and the real-time message is sent to the application program of the target tenant, which matches a corresponding menu identifier from the local menu file according to the standard question identifier in the real-time message and performs menu display or an interface jump according to the menu identifier.

Through this implementation, the enabled voice semantic model can recognize the voice information to obtain a real-time message, and the real-time message is sent to the application program of the target tenant, thereby realizing menu display or an interface jump in the application program of the target tenant.
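As a minimal sketch of the tenant-side handling described above: it assumes a real-time message carrying a standard question identifier, menu name and confidence score, and a hypothetical app object with jump_to and show_menu methods; the confidence threshold is likewise an illustrative assumption, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class RealTimeMessage:
    std_question_id: str
    menu_name: str
    confidence: float

def handle_real_time_message(msg, menu_id_by_std_question, app,
                             confidence_threshold=0.8):
    """Tenant-side handling: look up the menu ID for the recognized
    standard question ID (via the menu association file) and jump to,
    or display, the corresponding menu."""
    menu_id = menu_id_by_std_question.get(msg.std_question_id)
    if menu_id is None:
        return  # unknown standard question: ignore or fall back to search
    if msg.confidence >= confidence_threshold:
        app.jump_to(menu_id)                    # direct interface jump
    else:
        app.show_menu(menu_id, msg.menu_name)   # display for confirmation
```

Here menu_id_by_std_question would be loaded from the menu association file generated earlier, so the tenant never handles standard question IDs manually.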

Optionally, recognizing the voice information according to the voice semantic model to obtain a real-time message includes: inputting the voice information into the voice semantic model, and recognizing, by the voice semantic model, a standard question identifier corresponding to the voice information; determining a menu name and a confidence score corresponding to the standard question identifier; and generating the real-time message according to the standard question identifier, the menu name corresponding to the standard question identifier, and the confidence score.

The above voice semantic model includes, but is not limited to, an acoustic model, a language model, a semantic model, and the like.

In an optional implementation, after the voice information received by the application program of the target tenant is acquired, the voice information may first be converted into text through the acoustic model and the language model; the text is then converted into an intention, that is, a standard question identifier, through the semantic model, and the top-3 standard question identifiers by confidence, the corresponding menu names and their confidence scores are returned to the application program of the target tenant as a real-time message.

Through this implementation, the voice information can be converted into a real-time message in real time and provided to the application program of the target tenant in a timely manner.
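A minimal sketch of this recognition pipeline follows. The transcribe and classify methods stand in for the deployed acoustic/language and semantic models; the patent does not specify their real APIs, so these signatures are assumptions.

```python
def recognize(audio, acoustic_lm, semantic_model, menu_name_by_std_id, top_k=3):
    """Middle-platform recognition pipeline: speech -> text -> intent.
    Returns the top_k candidate intents as real-time message entries."""
    text = acoustic_lm.transcribe(audio)   # acoustic + language model: speech to text
    # Assumed to return (std_question_id, confidence) pairs, best first.
    candidates = semantic_model.classify(text)
    return [
        {"std_question_id": sid,
         "menu_name": menu_name_by_std_id.get(sid, ""),
         "confidence": conf}
        for sid, conf in candidates[:top_k]  # top-3 by confidence by default
    ]
```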

Optionally, before deploying the voice semantic model, the method further includes: obtaining an updated corpus table, where the updated corpus table at least comprises: an extended question table, a standard question table and a menu name table; and training the voice semantic model with the corpora in the updated corpus table, where the voice semantic model is trained separately with the corpora corresponding to the extended question table, the standard question table and the menu name table, and the models obtained by training on the different corpus tables are cross-validated until the voice semantic model finally used for deployment is obtained.

The corpus table includes, but is not limited to, an extended question table, a standard question table, a menu name table, and the like; the corpus table can be updated in real time, and the updated corpus table likewise includes, but is not limited to, an extended question table, a standard question table, a menu name table, and the like. It should be noted that the contents of the extended question table include, but are not limited to, a standard question identifier, extended questions, and so on; the contents of the standard question table include, but are not limited to, a standard question identifier, a tenant identifier, the standard question, and so on; and the contents of the menu name table include, but are not limited to, a menu name, a tenant identifier, a standard question identifier, and so on.

In an alternative embodiment, training the voice semantic model with the corpora in the updated corpus table includes: training the voice semantic model with the corpus corresponding to the extended question table to obtain a first training model; training the voice semantic model with the corpus corresponding to the standard question table to obtain a second training model; training the voice semantic model with the corpus corresponding to the menu name table to obtain a third training model; and cross-validating the first, second and third training models to finally obtain the voice semantic model for deployment.

Through this implementation, the recognition precision and accuracy of the trained voice semantic model can be improved.
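The patent fixes neither a model architecture nor a cross-validation protocol, so the following scikit-learn sketch is only one plausible reading: train one text-to-standard-question-ID classifier per corpus table, score each candidate by cross-validation, and retrain the best-scoring one as the deployment model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def candidate():
    # A simple text-classification pipeline standing in for the semantic
    # part of the voice semantic model (text -> standard question ID).
    return make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

def train_and_select(corpora, cv=3):
    """corpora maps a table name ('extended_question', 'standard_question',
    'menu_name') to (texts, std_question_ids). Train one candidate per
    corpus table, cross-validate each, and return the best one retrained
    on its own corpus as the model to deploy."""
    best = None
    for name, (texts, ids) in corpora.items():
        score = cross_val_score(candidate(), texts, ids, cv=cv).mean()
        if best is None or score > best[0]:
            best = (score, texts, ids)
    _, texts, ids = best
    return candidate().fit(texts, ids)
```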

Optionally, before obtaining the updated corpus table, the method includes: obtaining a corpus file of the application program of the target tenant; extracting corpus information from the corpus file, where the corpus information at least comprises: a menu name, a standard question, and extended questions corresponding to the standard question; and updating the corpus table according to the corpus information.

The corpus file of the application program of the target tenant contains corpus information, where the corpus information includes, but is not limited to, a menu name, a standard question, the corresponding extended questions, and the like. In a specific implementation, the menu names, standard questions and corresponding extended questions in the corpus file can be extracted, and the corpus table is updated according to this corpus information. The corpus table includes, but is not limited to, an extended question table, a standard question table, a menu name table, and the like.

Through this implementation, the corpus information extracted from the corpus file of the application program of the target tenant can be used to update the corpus table in time, so that a model subsequently trained on the corpus table can adapt to complex and changeable application scenarios.
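Assuming, purely for illustration, a flat CSV corpus file with one row per extended question, the update step could look like the sketch below; the column names and in-memory table shapes are assumptions.

```python
import csv
from collections import defaultdict

def update_corpus_tables(corpus_file, tenant_id, tables):
    """Parse a tenant corpus file and fold it into the three corpus tables.
    Assumed layout: columns menu_name, std_question_id, standard_question,
    extended_question, one row per extended question."""
    with open(corpus_file, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            sid = row["std_question_id"]
            # standard question table: std question ID -> (tenant ID, standard question)
            tables["standard_question"][sid] = (tenant_id, row["standard_question"])
            # extended question table: std question ID -> extended questions
            tables["extended_question"][sid].append(row["extended_question"])
            # menu name table: menu name -> (tenant ID, std question ID)
            tables["menu_name"][row["menu_name"]] = (tenant_id, sid)

tables = {"standard_question": {},
          "extended_question": defaultdict(list),
          "menu_name": {}}
```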

Optionally, before obtaining the updated corpus table, the method further includes: setting a local menu file of the application program of the target tenant, where the local menu file at least comprises: a menu name and a menu identifier.

In an optional implementation, before the updated corpus table is obtained, a local menu file of the application program of the target tenant may be preset; in a specific implementation, the local menu file includes, but is not limited to, a menu name, a menu identifier, and the like.

An alternative embodiment of the invention is described in detail below.

This optional implementation of the invention is divided into three parts: voice semantic model training, data file synchronization, and the real-time voice semantic instruction service. Specifically, after the voice semantic middle platform service model is deployed, the menu names and the corresponding standard question IDs are synchronized to the APP tenants in a quasi-real-time manner; each APP tenant automatically matches its local menu file against the received menu standard question file and automatically generates a menu association file, so that the new model is enabled. In addition, the voice semantic middle platform can provide the semantic instruction service for multiple APP tenants at the same time, and the APP tenants neither affect nor perceive one another. The voice semantic instruction middle platform can generate different menu standard question files for multiple APP tenants and synchronize them to the different APP tenants, so that multiple APP tenants can enable the new model simultaneously.

First, the voice semantic model training comprises the following two steps:

(1) The APP tenant knowledge team provides the corpus. The corpus is subdivided into menu names, standard questions and the corresponding extended questions. For example, for a first-level menu "account loss reporting", the standard question is "report a lost bank card", and the extended questions include "I want to report a lost card", "what to do if the bank card is lost", "how to report a lost bank card", "I cannot find my card", "can the mobile bank temporarily report a loss", and the like. The APP tenant knowledge team is responsible for sorting the menu names, standard questions and corresponding extended questions into files and submitting them to the voice semantic middle platform knowledge team. The APP tenant development team locally configures a menu file according to the sorted corpus, where the menu file comprises two columns: menu name and menu ID (see the sketch below).
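For concreteness, the corpus entry above could be represented as follows; the patent does not prescribe a file format, so this structure and the menu ID value are hypothetical.

```python
corpus_entry = {
    "menu_name": "account loss reporting",          # first-level menu
    "standard_question": "report a lost bank card",
    "extended_questions": [
        "I want to report a lost card",
        "what to do if the bank card is lost",
        "how to report a lost bank card",
        "I cannot find my card",
        "can the mobile bank temporarily report a loss",
    ],
}

# The tenant's local menu file pairs each menu name with its menu ID
# ("M001" is an invented example value).
local_menu = [{"menu_name": "account loss reporting", "menu_id": "M001"}]
```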

(2) The voice semantic middle platform further processes the corpus submitted by the foreground APP and updates three tables, as shown in FIG. 2, specifically: a. the standard question table: a correspondence table of standard question ID, standard question and APP tenant ID; b. the extended question table: a correspondence table of standard question ID and extended question; c. the menu name table: a correspondence table of menu name, APP tenant ID and standard question ID. The voice semantic middle platform trains the model with the processed corpus, as shown in FIG. 3, and after cross-validation the model is released online.

Second, the data file synchronization, as shown in FIG. 3, can be divided into the following two steps:

(1) The database extracts the menu names and the newly added/modified standard question IDs from the menu name table, and different menu files are generated for the different APP tenants, where each menu file comprises two columns: menu name and standard question ID (see the sketch below). The files are pushed to the corresponding APP tenants through the enterprise data bus, either in day-end form or in quasi-real-time file form.
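A sketch of this per-tenant file generation, under the assumption that the menu name table is available in memory as (menu name, tenant ID, standard question ID) rows:

```python
def generate_menu_standard_question_files(menu_name_table, changed_ids):
    """Produce one menu standard question file per APP tenant from the
    menu name table, keeping only newly added/modified standard question
    IDs. menu_name_table rows are (menu_name, tenant_id, std_question_id)."""
    files = {}  # tenant_id -> list of (menu_name, std_question_id)
    for menu_name, tenant_id, sid in menu_name_table:
        if sid in changed_ids:
            files.setdefault(tenant_id, []).append((menu_name, sid))
    return files  # each list is then pushed day-end or quasi-real-time
```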

(2) According to the preconfigured local menu file, the APP tenant enables the model after receiving the menu standard question file sent by the voice semantic middle platform service: the local menu file is matched against the menu standard question file by menu name association, the menu ID, the standard question ID and the menu name are put into correspondence, and the menu association file is generated.

Third, the online application, as shown in FIG. 4, can be divided into the following three steps:

(1) After receiving the user's voice, the APP tenant forwards it to the voice semantic middle platform.

(2) The voice semantic middle platform converts the user's voice into text through the acoustic model and the language model, and converts the text into an intention, that is, a standard question ID, through the semantic model; the top-3 standard question IDs by confidence, the corresponding menu names and the confidence scores are returned to the APP tenant as a real-time message.

(3) After receiving the real-time message, the APP tenant finds the corresponding menu ID from the menu association file according to the received standard question ID, and menu display or an interface jump is performed.

In the above embodiment, the menu name serves as the association link between the voice semantic middle platform service and the APP tenant, replacing the offline transfer of standard question IDs between them; the APP tenant and the voice semantic middle platform service are thereby decoupled, and the APP tenants are isolated from one another, without mutual influence or perception.

In addition, this implementation enables a new voice semantic model to take effect in quasi-real time after deployment, provides an efficient, uniform and automatic voice semantic instruction service for each APP tenant, decouples the APP tenants from the voice semantic instruction service, automates the taking effect of the model, and can also provide isolated, personalized service for each tenant, so that the tenants truly experience uninterrupted service without perceiving the change.

In another alternative embodiment, the menu ID of each APP tenant may be used as the standard question ID of the voice semantic middle platform, replacing the menu name as the association link between the voice semantic middle platform and the APP tenant. In this implementation, once the APP tenant has submitted the corpus, it does not need to do any configuration work; once the voice semantic model is deployed, the APP tenant passes the customer input to the voice semantic middle platform, which directly returns the menu ID, so the APP tenant does not need to do any conversion work.

It should be noted that, in this alternative, once a menu ID changes, the primary key of the voice semantic middle platform's data table for that tenant changes and a full regression test is required, which may increase the regression testing workload of the voice semantic middle platform.

Example 2

According to another aspect of the embodiments of the present invention, there is also provided a processing apparatus of a voice semantic model. FIG. 5 is a schematic diagram of the processing apparatus of the voice semantic model according to an embodiment of the present invention. As shown in FIG. 5, the processing apparatus includes: a deployment module 52 and a synchronization module 54. The processing apparatus of the voice semantic model is described in detail below.

A deployment module 52, configured to deploy the voice semantic model; and a synchronization module 54, connected to the deployment module 52 and configured to synchronize the menu standard question file to the application program of the target tenant, so that the application program of the target tenant matches the local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model; wherein the local menu file at least comprises: a menu name and a menu identifier.

It should be noted that the above modules may be implemented by software or hardware. In the latter case, for example, the modules may be located in the same processor, and/or the modules may be distributed across different processors in any combination.

In the above embodiment, the processing apparatus of the voice semantic model synchronizes the menu standard question file to the application program of the target tenant, so that the application program of the target tenant can enable the voice semantic model. This removes the need for the APP tenant to trigger a new voice semantic model to take effect by manual operation, so the model takes effect in quasi-real time after deployment; a more efficient, uniform and automatic voice instruction service is provided for each APP tenant, and the technical problem in the prior art that an APP tenant triggers a new voice semantic model to take effect by manual operation is solved.

It should be noted here that the deployment module 52 and the synchronization module 54 correspond to steps S102 to S104 in Example 1; the examples and application scenarios realized by the modules are the same as those of the corresponding steps, but are not limited to the disclosure of Example 1.

Optionally, the synchronization module 54 includes: a first generation unit, configured to generate menu standard question files according to the application programs of different types of target tenants, where each menu standard question file at least comprises: a menu name and a standard question identifier; and a synchronization unit, configured to synchronize the menu standard question file to the application program of the corresponding target tenant according to a preset pushing mode.

Optionally, the apparatus further comprises: a first acquisition module, configured to acquire the voice information received by the application program of the target tenant after the voice semantic model is enabled; a recognition module, configured to recognize the voice information according to the voice semantic model to obtain a real-time message, where the real-time message at least comprises: a standard question identifier, a menu name corresponding to the standard question identifier, and a confidence score; and a sending module, configured to send the real-time message to the application program of the target tenant, so that the application program of the target tenant matches a corresponding menu identifier from the local menu file according to the standard question identifier in the real-time message, and performs menu display or an interface jump according to the menu identifier.

Optionally, the recognition module includes: a recognition unit, configured to input the voice information into the voice semantic model, the voice semantic model recognizing a standard question identifier corresponding to the voice information; a determining unit, configured to determine the menu name and the confidence score corresponding to the standard question identifier; and a second generating unit, configured to generate the real-time message according to the standard question identifier, the menu name corresponding to the standard question identifier, and the confidence score.

Optionally, the apparatus further comprises: a second obtaining module, configured to obtain an updated corpus table before the voice semantic model is deployed, where the updated corpus table at least comprises: an extended question table, a standard question table and a menu name table; and a training module, configured to train the voice semantic model with the corpora in the updated corpus table, where the voice semantic model is trained separately with the corpora corresponding to the extended question table, the standard question table and the menu name table, and the models obtained by training on the different corpus tables are cross-validated until the voice semantic model finally used for deployment is obtained.

Optionally, the apparatus further comprises: a third obtaining module, configured to obtain a corpus file of the application program of the target tenant before the updated corpus table is obtained; an extraction module, configured to extract corpus information from the corpus file, where the corpus information at least comprises: a menu name, a standard question, and an extended question corresponding to the standard question; and an updating module, configured to update the corpus table according to the corpus information.

Optionally, the apparatus further comprises: a setting module, configured to set a local menu file of the application program of the target tenant before the updated corpus table is obtained, where the local menu file at least comprises: a menu name and a menu identifier.

Example 3

According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium comprising a stored program, wherein, when the program runs, the device in which the computer-readable storage medium is located is controlled to execute the method for processing a voice semantic model of any one of the above.

Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network and/or in any one of a group of mobile terminals, and the computer-readable storage medium includes a stored program.

Optionally, when executed, the program controls the device in which the computer-readable storage medium is located to perform the following functions: deploying a voice semantic model; and synchronizing the menu standard question file to the application program of the target tenant, so that the application program of the target tenant matches the local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model.

Example 4

According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, wherein, when running, the program executes the method for processing a voice semantic model of any one of the above.

An embodiment of the invention provides a device comprising a processor, a memory, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps: deploying a voice semantic model; and synchronizing the menu standard question file to the application program of the target tenant, so that the application program of the target tenant matches the local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model.

The invention also provides a computer program product adapted, when executed on a data processing device, to perform a program that initializes the following method steps: deploying a voice semantic model; and synchronizing the menu standard question file to the application program of the target tenant, so that the application program of the target tenant matches the local menu file against the menu standard question file, generates a menu association file, and loads the menu association file to enable the voice semantic model.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.

The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.
