Terminal voice interaction method and system, and corresponding terminal device
1. A terminal voice interaction method, comprising the following steps:
presenting an interaction scene, wherein the interaction scene comprises one or more interactive objects;
receiving a voice instruction operation of a user on an interactive object; and
presenting an operation result of the operation in the interaction scene.
2. The method of claim 1, wherein receiving the voice instruction operation of the user on the interactive object comprises:
receiving an operation by which the user triggers a function included in the interactive object.
3. The method of claim 2, wherein presenting the operation result of the operation in the interaction scene comprises:
presenting a triggering result of the function in the interaction scene.
4. The method of claim 1, wherein the interaction scene includes a voice interaction object and other interactive objects that are voice-controlled by the voice interaction object, and
receiving the voice instruction operation of the user on the interactive object comprises:
receiving a voice instruction operation of the user on the voice interaction object, wherein the voice instruction operation causes the voice interaction object to control the other interactive objects to enable corresponding functions.
5. The method of claim 4, further comprising:
uploading the voice instruction operation to a server; and
acquiring a generated voice instruction response result,
wherein presenting the operation result of the operation in the interaction scene comprises:
presenting the operation result of the operation based on the response result.
6. The method of claim 4, further comprising:
giving the user voice feedback on the voice instruction operation.
7. The method of claim 1, further comprising:
presenting an operation prompt for the interactive object.
8. The method of claim 1, further comprising:
presenting information related to the operated interactive object.
9. The method of claim 1, further comprising:
jumping to a promotion page of the operated interactive object.
10. The method of claim 1, further comprising:
updating the interactive objects in the interaction scene based on a trigger condition.
11. The method of claim 1, further comprising:
generating the interaction scene for presentation according to user information.
12. The method of claim 11, wherein the user information comprises at least one of:
user portrait information;
user shopping information;
historical operation information of the user;
information on the user's selection of the interaction scene; and
interaction scene information input by the user.
13. The method of claim 1, wherein presenting the interaction scene comprises:
displaying a virtual scene page, wherein the virtual scene page comprises one or more virtual interactive objects.
14. The method of claim 1, wherein the interactive object is a virtual object corresponding to actual goods/services, and
presenting the operation result of the operation in the interaction scene comprises:
demonstrating a function of the actual goods/services in the interaction scene.
15. A terminal voice interaction system, comprising a plurality of terminals and a server, wherein the plurality of terminals communicate with the server,
the server is configured to:
issue an interaction scene, wherein the interaction scene comprises one or more interactive objects;
acquire an uploaded operation of a user on an interactive object; and
generate and issue an operation response instruction,
and each terminal is configured to:
receive and present the issued interaction scene;
receive and upload the operation of the user on the interactive object;
receive the issued operation response instruction; and
present an operation result of the operation in the interaction scene based on the operation response instruction.
16. The system of claim 15, wherein the terminal is configured to:
receive a voice instruction operation of a user on an interactive object, wherein the voice instruction operation is used to trigger a function included in the interactive object; and
present a triggering result of the function in the interaction scene based on the operation response instruction.
17. The system of claim 16, wherein the server is configured to:
recognize voice content of the voice instruction operation; and
send text of the voice content to the terminal.
18. The system of claim 17, wherein the server is configured to:
generate a response action of the interactive object as the operation response instruction based on the recognized voice content.
19. The system of claim 16, wherein the one or more interactive objects correspond to one or more internet of things devices.
20. The system of claim 15, wherein the server is configured to:
push promotion information of the operated interactive object to the terminal.
21. A terminal display method, comprising the following steps:
displaying a virtual interaction scene page, wherein the virtual scene page comprises one or more virtual interactive objects; and
in response to a voice instruction operation of a user on a virtual interactive object, displaying an operation result of the operation in the virtual scene page.
22. The method of claim 21, wherein, in response to the operation of the virtual interactive object by the user, displaying the operation result of the operation in the virtual scene page comprises:
in response to an operation by which the user triggers a function included in the interactive object, displaying, in the interaction scene, the interactive object with the function triggered.
23. The method of claim 22, wherein, in response to the operation of the virtual interactive object by the user, displaying the operation result of the operation in the virtual scene page comprises:
displaying, in the interaction scene, an effect of the interactive object after the function is triggered.
24. The method of claim 22, wherein, in response to the operation of the virtual interactive object by the user, displaying the operation result of the operation in the virtual scene page comprises:
rendering, in the virtual scene page, the effect produced after the function is triggered.
25. The method of claim 21, wherein, in response to the operation of the virtual interactive object by the user, displaying the operation result of the operation in the virtual scene page comprises:
refreshing the portion of the virtual interaction scene page that contains the corresponding virtual interactive object.
26. A terminal device, comprising:
an output device configured to present an interaction scene, wherein the interaction scene comprises one or more interactive objects;
an input device configured to receive an operation of a user on an interactive object; and
a processing device configured to present, using the output device, an operation result of the operation in the interaction scene based on the operation.
27. The terminal device of claim 26, wherein the output device comprises:
a display for displaying the interaction scene.
28. The terminal device of claim 27, wherein the display is a touch screen and also serves as an input device for receiving a touch input operation of the user on the interactive object.
29. The terminal device of claim 26, wherein the input device comprises:
a microphone for receiving a voice instruction operation of the user on the interactive object,
and the output device further comprises:
a voice output device for giving the user voice feedback on the voice instruction operation and/or a voice operation prompt for the interactive object.
30. The terminal device of claim 26, further comprising:
a networking device configured to:
acquire the interaction scene;
report the operation of the user on the interactive object; and
acquire an operation response instruction for the operation, wherein the response instruction is used by the processing device to present the operation result of the operation in the interaction scene.
31. A terminal voice interaction method, comprising the following steps:
presenting commercial promotion content based on a predetermined condition in a current interaction scene;
receiving a voice instruction operation of a user on the commercial promotion content; and
presenting, in the interaction scene, an operation result of the operation on the commercial promotion content.
32. The method of claim 31, wherein a promotion object and/or a voice operation manner of the commercial promotion content is generated based on at least one of:
end-user portrait information;
end-user shopping information;
end-user historical operation information;
information on the end user's selection of the interaction scene; and
interaction scene information input by the end user.
33. The method of claim 31, wherein the voice instruction operation is used for at least one of:
operating on the presentation of the commercial promotion content; and
operating a promotion object included in the commercial promotion content.
34. The method of claim 31, wherein the predetermined condition comprises at least one of:
a start operation of video playback;
a pause operation of video playback;
opening of an APP; and
a voice instruction of the end user.
35. A voice interaction method, comprising:
playing a film or television drama, wherein an interactive object is presented in a playback scene of the film or television drama;
receiving a voice instruction operation of a user on the interactive object; and
presenting an operation result of the operation in the playback scene of the film or television drama.
36. The method of claim 35, further comprising:
giving a prompt that the interactive object exists in the playback scene of the film or television drama.
37. The method of claim 35, wherein presenting the operation result of the operation in the playback scene of the film or television drama comprises:
presenting, in the playback scene of the film or television drama, a result of the interactive object's influence on the playback scene after a corresponding function is enabled based on the voice instruction operation.
38. A commercial promotion screen voice interaction method, comprising the following steps:
prompting a user to perform voice interaction;
acquiring voice interaction input of the user;
presenting commercial promotion content;
receiving a voice instruction operation of the user on the commercial promotion content; and
presenting an operation result of the voice instruction operation on the commercial promotion content.
39. The method of claim 38, wherein prompting the user to perform voice interaction comprises:
sensing, by the commercial promotion screen, that a user is approaching, and prompting the user to perform voice interaction.
40. The method of claim 38, wherein presenting the commercial promotion content comprises:
acquiring user information of the user; and
presenting the commercial promotion content based on the user information,
wherein the user information comprises at least one of:
biological information of the user acquired on site;
voice interaction input of the user acquired on site; and
user information obtained through networking based on user identity information.
Background
In the internet era, content information is delivered to users in many forms. To promote business, the various kinds of APP all carry operating resource positions, and different resource positions carry the different types of information that merchants want to convey to users. The content of such promotion information, however, is fixed in advance: even when different products' promotion information is delivered to different users, the promotion information for a given product never changes, and its display usually obstructs the user's normal APP operation.
For this reason, an improved manner of information delivery is required.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a terminal voice interaction scheme that offers a specific interaction scene in which a user can operate an interactive object through voice interaction, for example to learn the operation effect on that object. This improves user participation and shows the various functions of the interactive object more clearly.
According to a first aspect of the present disclosure, a terminal voice interaction method is provided, including: presenting an interaction scene, wherein the interaction scene comprises one or more interactive objects; receiving an operation of a user on an interactive object; and presenting an operation result of the operation in the interaction scene.
According to a second aspect of the present disclosure, a terminal voice interaction system is provided, including a plurality of terminals and a server, where the plurality of terminals communicate with the server. The server is configured to: issue an interaction scene, wherein the interaction scene comprises one or more interactive objects; acquire an uploaded operation of a user on an interactive object; and generate and issue an operation response instruction. Each terminal is configured to: receive and present the issued interaction scene; receive and upload the operation of the user on the interactive object; receive the issued operation response instruction; and present an operation result of the operation in the interaction scene based on the operation response instruction.
According to a third aspect of the present disclosure, a terminal display method is provided, including: displaying a virtual interaction scene page, wherein the virtual scene page comprises one or more virtual interactive objects; and in response to an operation of a user on a virtual interactive object, displaying an operation result of the operation in the virtual scene page.
According to a fourth aspect of the present disclosure, a terminal device is provided, including: an output device configured to present an interaction scene, the interaction scene comprising one or more interactive objects; an input device configured to receive an operation of a user on an interactive object; and a processing device configured to present, using the output device, an operation result of the operation in the interaction scene based on the operation.
According to a fifth aspect of the present disclosure, a terminal voice interaction method is provided, including: presenting commercial promotion content based on a predetermined condition in a current interaction scene; receiving a voice instruction operation of a user on the commercial promotion content; and presenting, in the interaction scene, an operation result of the operation on the commercial promotion content.
According to a sixth aspect of the present disclosure, a voice interaction method is provided, including: playing a film or television drama, wherein an interactive object is presented in a playback scene of the film or television drama; receiving a voice instruction operation of a user on the interactive object; and presenting an operation result of the operation in the playback scene of the film or television drama.
According to a seventh aspect of the present disclosure, a commercial promotion screen voice interaction method is provided, including: prompting a user to perform voice interaction; acquiring voice interaction input of the user; presenting commercial promotion content; receiving a voice instruction operation of the user on the commercial promotion content; and presenting an operation result of the voice instruction operation on the commercial promotion content.
By providing an interaction scene for the user, receiving user operations, and displaying the operation results, the present disclosure lets the user participate in the promotion process and learn the various functions of the operated object accurately. The scheme is particularly suitable for an intelligent in-APP voice interaction promotion: the APP user can interact by voice within a promotion scene, and the scene presents different content information according to the user's voice instructions, thereby showing the service or product to be promoted and improving user participation, the interest of information promotion, and delivery accuracy.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a flow diagram of a terminal interaction method according to an embodiment of the present invention.
Fig. 2 shows an example of a terminal interaction system capable of implementing the present invention.
Fig. 3 shows an example of the server participating in the terminal interaction of the present invention.
Fig. 4 is a flowchart illustrating a terminal display method according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a terminal device that can be used to implement the above-described interaction method according to an embodiment of the present invention.
Fig. 6A-6B illustrate an example of a terminal interaction scenario in accordance with the present invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the internet era, content information is delivered to users in many forms. To promote business, the various kinds of APP all carry operating resource positions, and different resource positions carry the different types of information that merchants want to convey to users. The content of such promotion information, however, is fixed in advance: even when different products' promotion information is delivered to different users, the promotion information for a given product never changes, and its display usually obstructs the user's normal APP operation.
Therefore, the invention provides a terminal interaction scheme that improves user participation and shows the various functions of an interactive object more clearly, by providing a specific interaction scene in which the user learns the operation effect on the interactive object through active operation.
Fig. 1 shows a flow diagram of a terminal interaction method according to an embodiment of the present invention. Here, terminal interaction refers to interaction between a terminal device and its user, so the terminal interaction method is a method by which the terminal gives a corresponding response to the user's input. In different embodiments, the terminal device may be implemented as various types of mobile terminals, for example a smartphone or tablet computer with an interactive APP installed, a dedicated AR headset, and so on.
In step S110, an interaction scene is presented. One or more interactive objects are included in the presented interaction scene. Here, "presenting" means making the user aware through any means of perception. In one embodiment, presenting the interaction scene may be displaying an interaction scene page on a display screen, e.g., within an APP installed on a mobile phone. Alternatively or additionally, the presentation may include sound: a speaker or earphone can play corresponding scene sounds, such as music, voice prompts or descriptions, or sounds simulating a real scene (e.g., rain or wind). In still other embodiments, the interaction scene can be presented in other manners, such as vibration.
Herein, an "interaction scenario" refers to a context in which a user may operate on one or more objects. For example, to facilitate a user's operation of one or more internet of things (IoT) devices, the user may be provided with an interactive scenario that simulates a home environment. In order to facilitate the user to operate outdoor equipment such as a barbecue grill, an interactive scene simulating an outdoor environment can be provided for the user. One or more interactive objects may be included in the interactive scene. In some embodiments, the interactive scenes and/or interactive objects may be virtual scenes and/or objects, e.g., animated scenes and objects. In other embodiments, the interactive scene may be a real scene and the interactive objects may be virtual AR objects displayed on the real scene.
In step S120, a voice instruction operation of the user on an interactive object is received. In the presented interaction scene, the user can operate by voice on the interactive objects included in the scene, and the operation is received by the terminal device. In other embodiments, the user may also perform a touch input operation through a touch screen, click with a mouse, or even perform similar operations with a gamepad.
In step S130, the operation result of the operation is presented in the interaction scene. In some embodiments, the terminal device may generate and present a corresponding operation response in the interaction scene directly based on the obtained user operation. In other embodiments, the terminal device may be networked with a server: it uploads the obtained user operation (or a partially processed version of it), obtains an operation response or intermediate data issued by the server, and presents the operation response in the interaction scene accordingly.
Here, receiving the user operation on the interactive object may include: receiving an operation by which the user triggers a function included in the interactive object. Accordingly, presenting the operation result of the operation in the interaction scene may include: presenting a triggering result of the function in the interaction scene. For example, the user may turn on a certain virtual interactive object, and the effect of the object being on is then presented in the interaction scene; the user may subsequently turn the object off, and a corresponding off effect is presented. In this way, by providing a specific interaction scene in which the user learns the operation effect on the interactive object through active operation, the interactive object can be displayed interactively.
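To make the S110-S130 flow concrete, below is a minimal Python sketch of the purely local (on-terminal) case, in which the terminal itself maps a triggering operation to a presentable result. All class and function names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class InteractiveObject:
    """A virtual object whose included functions can be triggered by the user."""
    name: str
    state: dict = field(default_factory=dict)

    def trigger(self, function: str, on: bool = True) -> str:
        self.state[function] = on
        return f"{self.name}: {function} {'on' if on else 'off'}"

class InteractionScene:
    """Holds the interactive objects and presents operation results (S110/S130)."""
    def __init__(self, objects):
        self.objects = {o.name: o for o in objects}

    def present(self):
        print("Scene presents:", ", ".join(self.objects))          # S110

    def handle(self, target: str, function: str, on: bool = True):
        result = self.objects[target].trigger(function, on)        # S120
        print("Presented result:", result)                         # S130

scene = InteractionScene([InteractiveObject("virtual lamp"),
                          InteractiveObject("virtual TV")])
scene.present()
scene.handle("virtual lamp", "light", on=True)   # user turns the object on
scene.handle("virtual lamp", "light", on=False)  # ... and off again
```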
In the invention, the user's operation on an object can simulate operation on a real object in various ways. For example, a user may click a virtual button of a virtual object in the virtual scene to simulate clicking the real button of a physical object in a real scene. However, such simulated operations always differ from operations in a real scene (for example, the tactile feel of tapping a touch screen is certainly different from that of flipping the switch of a real device). For this reason, the invention is particularly suited to receiving voice instruction operations on interactive objects: in the case of a voice instruction, the voice operation of a real scene can be simulated completely. More specifically, step S120 may include: receiving a voice instruction operation of the user on a voice interaction object, wherein the voice instruction operation causes the voice interaction object to control other interactive objects to enable corresponding functions. In this case, the interaction scene includes the voice interaction object and the other interactive objects that it voice-controls, so the user can control the other objects via the voice interaction object. For example, the virtual scene displays a smart speaker and a smart television networked with it. The user can say "XXX (e.g., the smart speaker wake-up word), turn on the TV", so that the originally off television in the virtual scene presents a turned-on effect, simulating how the smart television is controlled through the smart speaker in a real home. The user thus learns how to control smart home appliances with the smart speaker and genuinely experiences its effect.
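The routing from the voice interaction object to the other objects can be sketched as follows; the wake-word handling and keyword intent matching below are simplifying assumptions for illustration, not the disclosed algorithm:

```python
class VoiceInteractionObject:
    """Virtual smart speaker that voice-controls other objects in the scene."""
    def __init__(self, wake_word: str, controlled: dict):
        self.wake_word = wake_word
        self.controlled = controlled            # device name -> state dict

    def handle_utterance(self, utterance: str) -> str:
        if not utterance.startswith(self.wake_word):
            return ""                           # ignore speech without the wake word
        command = utterance[len(self.wake_word):].lstrip(", ")
        for name, state in self.controlled.items():
            if name in command:                 # naive keyword intent matching
                state["on"] = "off" not in command
                return f"OK, the {name} is {'on' if state['on'] else 'off'}."
        return "Which device do you mean?"

speaker = VoiceInteractionObject("XXX", {"TV": {"on": False}, "light": {"on": False}})
print(speaker.handle_utterance("XXX, turn on the TV"))   # -> OK, the TV is on.
print(speaker.handle_utterance("turn on the light"))     # no wake word: ignored
```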
In practice, generation of the response instruction may be performed locally on the terminal device, for example when the user inputs a simple sentence that can be recognized locally. Usually, however, generating the response instruction requires the participation of a server. In that case, the terminal interaction method may further include: uploading the voice instruction operation to the server; and acquiring the generated voice instruction response result, and step S130 may include: presenting the operation result of the operation based on the response result.
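A minimal sketch of this terminal-server round trip follows. The in-process `recognize_on_server` function stands in for the cloud service, and the payload shape is an assumption; a real deployment would send audio over the network and run ASR on the server:

```python
import json

def recognize_on_server(payload: dict) -> dict:
    """Stand-in for the cloud: recognize the instruction and build a response."""
    text = payload["utterance"]                    # real systems would run ASR here
    if "light" in text:
        action = {"object": "lamp", "function": "turn_on"}
    else:
        action = None
    return {"recognized_text": text, "response": action}

def terminal_round_trip(utterance: str) -> None:
    request = json.dumps({"utterance": utterance})     # upload the voice operation
    reply = recognize_on_server(json.loads(request))   # acquire the response result
    print("Echo text:", reply["recognized_text"])
    if reply["response"]:
        print("Present in scene:", reply["response"])  # S130, based on the response

terminal_round_trip("XXX, turn on the light")
```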
In addition, to encourage user participation, the interaction method of the invention may further include presenting an operation prompt for the interactive object. As before, the prompt may be perceived by the user in various ways, such as sound, images, or vibration. For example, when the user enters the virtual scene page, a voice prompt may be given: "Try saying 'XXX (e.g., the smart speaker wake-up word), turn on the light'".
Further, in the case of voice operation, the interaction method of the invention may also give the user voice feedback on the voice instruction operation. The voice feedback may be multi-turn dialog feedback when additional information is needed, or a confirmation after the operation completes. For example, when the user says "XXX (e.g., the smart speaker wake-up word), turn on the light" and the virtual scene contains several smart lamps (e.g., one in the living room and one in the dining room), all lamps may be turned on in response, or the user may be asked "Which light should I turn on?" and answer "Turn on the dining room light". The dining room light is then turned on in the virtual scene, with the spoken confirmation "The dining room light is on".
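Such multi-turn disambiguation can be sketched as a small state machine; the two-turn flow and phrase matching below are illustrative assumptions:

```python
class LightDialog:
    """Multi-turn voice feedback: ask a follow-up when a command is ambiguous."""
    def __init__(self, lights):
        self.lights = {name: False for name in lights}
        self.awaiting_choice = False

    def say(self, utterance: str) -> str:
        if self.awaiting_choice:                      # second turn: pick a light
            for name in self.lights:
                if name in utterance:
                    self.lights[name] = True
                    self.awaiting_choice = False
                    return f"The {name} light is on."
            return "Sorry, which light?"
        if "turn on the light" in utterance:
            if len(self.lights) > 1:                  # ambiguous: start a follow-up
                self.awaiting_choice = True
                return "Which light should I turn on?"
            only = next(iter(self.lights))
            self.lights[only] = True
            return f"The {only} light is on."
        return "Sorry, I didn't catch that."

dialog = LightDialog(["living room", "dining room"])
print(dialog.say("XXX, turn on the light"))         # -> Which light should I turn on?
print(dialog.say("turn on the dining room light"))  # -> The dining room light is on.
```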
Because the interaction method presents the functions of interactive objects well and gives users a sense of participation, it is particularly suitable for subsequent promotion of those objects. The terminal interaction method may then further include: jumping to a promotion page of the operated interactive object. In different embodiments, the jump may follow an active user operation, be triggered when the user exits the interaction scene, or be triggered in other ways. For example, after the user has controlled a smart television and a smart lamp with a smart speaker in the virtual scene, product links for the smart speaker, smart television, and smart lamp may be displayed when the user exits the scene. Alternatively or additionally, the user may jump directly from the virtual scene to a shopping link of the real product corresponding to an object, e.g., by clicking or double-clicking the interactive object, or may add the product directly to a shopping cart.
To further increase user engagement and flexibility, in some implementations, the interactive scenes presented, as well as the interactive objects contained within the scenes, may also be changed or updated with various settings.
An interaction scene may be generated for presentation based on user information, which includes at least one of: user portrait information; user shopping information; historical operation information of the user; information on the user's selection of the interaction scene; and interaction scene information input by the user.
Specifically, the interaction scene or interactive objects the user prefers, and the subsequent promotion information, can be chosen from the user's prior information and preferences. For example, if the user portrait information indicates a basketball-loving user born in the 1990s, a new pair of basketball shoes can be placed in the interaction scene for the user to operate on. If the shopping information shows that the user has already bought a smart speaker, the subsequent promotion step can promote only smart devices the user has not yet purchased. Moreover, with server support, e.g., if the server can provide interaction scenes for various floor plans, the user can input the floor plan of interest, and a virtual scene matching that floor plan (even its color scheme) can be generated, making subsequent operations feel more immersive.
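A sketch of such user-information-driven scene generation is below; the rule set and field names (`portrait`, `purchased`, `floor_plan`) are invented for illustration:

```python
def generate_scene(user_info: dict) -> dict:
    """Build a scene description from user information (illustrative rules only)."""
    scene = {"template": user_info.get("floor_plan", "default living room"),
             "objects": []}
    if "basketball" in user_info.get("portrait", {}).get("interests", []):
        scene["objects"].append("new basketball shoes")
    owned = set(user_info.get("purchased", []))
    for device in ("smart speaker", "smart lamp", "smart TV"):
        if device not in owned:                 # only promote devices not yet owned
            scene["objects"].append(device)
    return scene

print(generate_scene({
    "portrait": {"interests": ["basketball"]},
    "purchased": ["smart speaker"],
    "floor_plan": "two-bedroom, warm color scheme",
}))
```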
After the interaction scene is determined and presented, the interactive objects in it can be updated based on a trigger condition. The trigger may come from the user, from a terminal/server setting, or from a timer. For example, if the user finds the smart lamp not bright enough after turning it on in the virtual scene, or dislikes its style, the displayed object can be replaced, e.g., by asking by voice for a brighter lamp. An interactive object library can also be provided so that, after the scene is initially displayed, the user can add objects to or delete objects from the scene by operating in the library. The server can also push discounted goods proactively, for example at the start of a sales promotion.
As indicated previously, presenting an interaction scene may include, for example, displaying a virtual scene page in a mobile phone APP, the page including one or more virtual interactive objects. A virtual interactive object may correspond to an actual commodity, and presenting the operation result then includes demonstrating the functions of that actual commodity in the interaction scene. In other embodiments, the virtual interactive object may correspond to a real service, whose functionality can likewise be demonstrated; for example, a virtual tablet in the virtual scene may display an "online classroom" service that the user can experience by clicking on it.
In other embodiments, presenting the interaction scene may include photographing an actual scene, e.g., a room in one's own home, with an AR headset or mobile phone, and adding virtual interactive objects onto the captured scene. The user can then see the effect of a virtual object used in the real scene, further increasing immersion and interest. For example, a user shopping for a bedroom lamp can open the corresponding APP, tap the AR shopping function to start the camera, and photograph the bedroom. The APP recognizes the bedroom scene and finds a smart speaker installed there; the model of the smart speaker registered by the user can then be obtained through the server, and when the user says "I want to buy a lamp", a smart lamp is pushed to the user. The user can choose where to install the virtual smart lamp in the AR scene and check, via a turn-on instruction, the virtual lighting effect of the lamp in the real scene, and so decide whether to buy it.
Although the terminal interaction scheme of the present invention may in some cases be implemented locally, it typically requires server-side involvement, so the invention can also be realized as a terminal interaction system. Fig. 2 shows an example of a terminal interaction system capable of implementing the invention. As shown, the server 210 connects to and serves a plurality of terminals 220, and is configured to: issue an interaction scene, wherein the interaction scene comprises one or more interactive objects; acquire an uploaded operation of a user on an interactive object; and generate and issue an operation response instruction. The terminals 220 communicate with the server and are configured to: receive and present the issued interaction scene; receive and upload the operation of the user on the interactive object; receive the issued operation response instruction; and present an operation result of the operation in the interaction scene based on the operation response instruction.
In particular, when the terminal has voice interaction capability, the terminal 220 may be configured to: receive a voice instruction operation of a user on an interactive object, the operation triggering a function included in the interactive object; and present a triggering result of the function in the interaction scene based on the operation response instruction. Accordingly, the server 210 may be configured to: recognize the voice content of the voice instruction operation; and send the text of the voice content to the terminal. Further, the server 210 may generate a response action of the interactive object, based on the recognized voice content, as the operation response instruction.
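The shape of such an operation response instruction might look as follows; the `ResponseAction` fields (target object, animation, TTS line) are an assumed wire format, not one defined by the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseAction:
    """Assumed shape of an operation response instruction issued to the terminal."""
    target_object: str   # which interactive object the terminal should update
    animation: str       # effect the terminal should render in the scene
    tts: str             # voice feedback the terminal should play

def build_response_action(recognized_text: str) -> Optional[ResponseAction]:
    # Server side: map recognized voice content to a response action.
    if "curtain" in recognized_text:
        return ResponseAction("smart curtain", "curtain_open", "The curtain is open.")
    if "light" in recognized_text:
        return ResponseAction("smart lamp", "lamp_on", "The light is on.")
    return None          # unrecognized command: no response action

print(build_response_action("XXX, open the curtain"))
```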
The one or more interactive objects may correspond to one or more internet of things devices, and the server 210 may be configured to push promotion information of the operated interactive object to the terminal 220. An example of server participation in the terminal interaction of the invention is described below with reference to fig. 3.
To better describe the flow, take a promotion (advertising) scenario as an example: a smart home promotion page is set up in an IoT device management APP to introduce smart speakers of different styles, IoT devices, and the ways they are used together.
The user may pre-install the APP or obtain an update with the smart advertisement function. When the user opens the APP and enters the promotion page ("smart advertisement" in the figure), a virtual smart home scene is shown, e.g., a living room furnished with a smart speaker, smart lamp, smart air conditioner, and so on, along with prompts on how to interact with the smart speaker. The user may then make a voice input, e.g., saying to the APP: "XXX, turn on the light". After receiving the voice, the APP uploads it through the voice interaction SDK to the voice interaction cloud. The cloud recognizes the voice command and sends it back to the APP so the recognized text can be shown. Meanwhile, once the voice input reaches the cloud, the server side derives a response action via a cloud-side intelligent algorithm and issues it to the APP; on receiving the command, the APP refreshes the advertisement's local state via an on-device algorithm and lights the "living room lamp" in the scene. When the virtual lamp lights up, the APP can also respond by voice playback, e.g., "OK, the light is on". Through this scene and these operations, the user learns clearly, and in an engaging way, how to control IoT products through the smart speaker: the promotion information is conveyed while the user participates in real time. Further, the user can keep interacting with the smart advertisement by voice to obtain the content information they need, e.g., with inputs such as "Where can I buy it?" or "Change to a brighter lamp".
Further, the present invention can also be implemented as a terminal display method. Fig. 4 is a flowchart illustrating a terminal display method according to an embodiment of the present invention. The method is particularly suitable for mobile intelligent terminals such as smartphones and tablet computers.
In step S410, a virtual interaction scene page is displayed, the page including one or more virtual interactive objects. Specifically, the user may open the corresponding APP and click or select a scene to display the virtual interaction scene page. Subsequently, in step S420, in response to an operation of the user on a virtual interactive object, an operation result of the operation is displayed in the virtual scene page.
In one embodiment, step S420 may include: in response to a voice instruction operation by which the user triggers a function included in an interactive object, displaying, in the interaction scene, the interactive object with the function triggered. For example, the picture of a switched-on television may be displayed in the virtual scene. Accordingly, only the portion of the virtual interaction scene page that contains the corresponding virtual interactive object needs to be refreshed.
Further, step S420 may include: displaying, in the interaction scene, the effect of the interactive object after its function is triggered. The effect may extend beyond the object itself; for example, the lighting effect after a lamp is switched on may illuminate the whole virtual scene. Accordingly, the effect produced after the function is triggered can be rendered in the virtual scene page.
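One way to realize this partial refresh is to treat the page as named regions and re-render only the region of the operated object. The sketch below assumes this region model purely for illustration:

```python
class VirtualScenePage:
    """Page split into regions; only the operated object's region is re-rendered."""
    def __init__(self, regions: dict):
        self.regions = dict(regions)        # object name -> rendered asset

    def refresh_region(self, obj_name: str, new_render: str) -> None:
        # Refresh only the page portion containing the operated object,
        # rather than redrawing the whole virtual scene page.
        self.regions[obj_name] = new_render
        print(f"re-rendered region '{obj_name}' -> {new_render}")

page = VirtualScenePage({"smart TV": "tv_off.png", "smart lamp": "lamp_off.png"})
page.refresh_region("smart TV", "tv_on_showing_picture.png")
print(page.regions)
```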
The present invention can also be implemented as a terminal device. The terminal device may be adapted to perform the interaction method as described above and may be part of the interaction system as described above. Fig. 5 is a schematic structural diagram of a terminal device that can be used to implement the above-described interaction method according to an embodiment of the present invention.
Referring to fig. 5, the terminal device 500 includes an output device 510, an input device 520, and a processing device 530.
The output device 510 is configured to present an interaction scene that includes one or more interactive objects. The input device 520 is configured to receive the user's operation on an interactive object. The processing device 530 is configured to present, using the output device 510, the operation result of the operation in the interaction scene based on the operation.
The output device 510 may include a display for displaying the interaction scene. In one embodiment, the display may be a touch screen that also serves as an input device, receiving the user's touch input operations on interactive objects.
Alternatively or additionally, the input device 520 may include a microphone for receiving the user's voice instruction operation on an interactive object. The output device 510 may then further include a voice output device for giving the user voice feedback on the voice instruction operation and/or a voice operation prompt for the interactive object.
Further, the terminal device 500 may comprise a networking device for communicating with the server. In particular, the networking device may be configured to: acquire the interaction scene; report the user's operation on the interactive object; and acquire an operation response instruction for the operation, which the processing device uses to present the operation result in the interaction scene.
Specifically, the processing device 530 may be a multi-core processor or may include a plurality of processors. In some embodiments, the processing device 530 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a graphics processing unit (GPU) or a digital signal processor (DSP). In some embodiments, the processing device 530 may be implemented using custom circuitry, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
The terminal device 500 may further include a memory to store the device management APP described above and other data. The memory may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by the processing device 530 or other modules. The permanent storage may be a read-write, non-volatile storage device that retains instructions and data even after the computer is powered off; it may be a mass storage device (e.g., a magnetic or optical disk, or flash memory) or a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write, volatile memory device such as dynamic random access memory, and may store the instructions and data that some or all of the processors require at runtime. Furthermore, the memory may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory may include removable readable and/or writable storage, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini-SD card, Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not include carrier waves or transitory electronic signals transmitted wirelessly or over wires.
The memory stores executable code that, when executed by the processing device 530, causes the processing device 530 to perform the interaction methods described above.
The terminal interaction scheme according to the present invention has been described in detail above with reference to the drawings. By providing an interaction scene for the user, receiving user operations, and presenting the operation results, it lets the user participate in the promotion process and learn the various functions of the operated object accurately. The scheme is particularly suitable for an intelligent in-APP voice interaction promotion: the APP user interacts by voice within a promotion scene that presents different content information according to the user's voice instructions, thereby showing the service or product being promoted and improving user participation, the interest of information promotion, and delivery accuracy.
As described above, the present invention allows the user to examine the function of an interactive object (e.g., a certain good or service) by voice-operating it in an interaction scene. In a broader embodiment, however, the interactive object may be the commercial promotion content itself (e.g., a commercial, rather than the advertised item).
Therefore, the invention can also be realized as a terminal voice interaction method comprising: presenting commercial promotion content based on a predetermined condition in a current interaction scene; receiving a voice instruction operation of a user on the commercial promotion content; and presenting, in the interaction scene, an operation result of the operation on the commercial promotion content.
The current interaction scene may be, for example, a video playback scene, and the predetermined condition that triggers presentation of the commercial promotion content may include a start or pause operation of video playback, among others. In other embodiments, the condition may also include a voice instruction of the end user.
In addition, the current interaction scene can also be the opening scene of the APP, so that the APP splash advertisement is realized as a voice interactive advertisement.
In some embodiments, predetermined commercial promotion content may be used; for example, all users of an APP see the same commercial when the APP is opened, operable in the same way. In a preferred embodiment, however, the promotion object and the voice operation manner of the commercial promotion content are generated or selected on the fly for a specific user, i.e., based on at least one of: end-user portrait information; end-user shopping information; end-user historical operation information; information on the end user's selection of the interaction scene; and interaction scene information input by the end user.
For example, a new interactive game may be recommended based on the user's game preferences, with interaction modes of different difficulty presented according to the user's player level. As another example, if the user clicks on a dish-making video, a cooking tool may be recommended, with a different voice-guided cooking mode chosen based on the user's previous online shopping list.
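These selection rules might be sketched as follows; the field names (`recent_views`, `purchases`, `player_level`) and thresholds are invented for illustration:

```python
def choose_promotion(user: dict) -> dict:
    """Pick a promotion object and voice operation manner for a specific user.

    The user-information fields and the rules below are illustrative assumptions.
    """
    if "dish-making video" in user.get("recent_views", []):
        mode = ("voice-guided recipe dialog" if "oven" in user.get("purchases", [])
                else "basic product Q&A")
        return {"object": "cooking tool", "voice_mode": mode}
    if user.get("player_level", 0) >= 10:
        return {"object": "interactive game demo", "voice_mode": "hard difficulty"}
    return {"object": "default opening commercial", "voice_mode": "play/pause only"}

print(choose_promotion({"recent_views": ["dish-making video"], "purchases": ["oven"]}))
```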
Further, the voice instruction operations described above may operate on the presentation of the commercial promotion content itself, for example controlling the playback, pausing, or fast-forwarding of a commercial. Preferably, the voice instruction operation may also operate a promotion object included in the commercial promotion content, e.g., operating the smart speaker presented in a smart speaker commercial as described above.
The invention can also be used for commercial promotion within films and television dramas. To this end, it can be implemented as a voice interaction method comprising: playing a film or television drama, wherein an interactive object is presented in a playback scene of the drama; receiving a voice instruction operation of a user on the interactive object; and presenting an operation result of the operation in the playback scene.
For example, an indoor scene of a film shows a new Bluetooth lighting system. A prompt that an interactable object exists in the current playback scene may then be given, e.g., by highlighting a device in the scene or showing a pop-up box or on-screen comment, so that the user knows interaction is possible. The user can then operate the interactable object, for example switching the lighting system in the current scene from evening mode to night mode. The result of the interactable object's influence on the playback scene, after the corresponding function is enabled by the voice instruction operation, is then presented in the playback scene: the drama continues under night-mode lighting.
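A compact sketch of this in-drama interaction follows; the mode names and prompt text are assumptions for illustration:

```python
class DramaPlaybackScene:
    """Playback scene with a voice-interactable lighting system (illustrative)."""
    def __init__(self):
        self.lighting_mode = "evening"

    def prompt_interactable(self) -> None:
        print("Hint: the lighting system in this scene can be voice-controlled.")

    def voice_operate(self, utterance: str) -> None:
        if "night mode" in utterance:
            self.lighting_mode = "night"
            # Playback continues under the new lighting.
            print("Scene re-lit in night mode; the drama continues.")

scene = DramaPlaybackScene()
scene.prompt_interactable()
scene.voice_operate("switch the lights to night mode")
```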
The invention can also be used for commercial promotion screens. A commercial promotion screen may be an advertising screen facing unspecified users, such as an outdoor advertising screen, an elevator advertising screen, or an advertising screen on a means of transportation (e.g., in a taxi or high-speed train, or an interactive screen installed on the back of an airplane seat).
Accordingly, the commercial promotion screen voice interaction method comprises: prompting a user to perform voice interaction; acquiring the user's voice interaction input; presenting commercial promotion content; receiving a voice instruction operation of the user on the commercial promotion content; and presenting an operation result of the voice instruction operation on the commercial promotion content.
When idle, the commercial promotion screen can keep displaying a prompt such as "You can interact with me by voice", so that users can start interacting on their own. In some embodiments, the screen may sense a user approaching, e.g., via an image sensor or a pressure sensor; it may then light up and give a prompt (e.g., by voice or text display) that voice interaction is available.
The user may interact with the commercial promotion screen based on the prompt. The promotion content on the screen can be generated independently of the particular user, e.g., by bid placement or by time slot. In other embodiments, user information may be obtained, and the commercial promotion content presented based on it. Here, the user information includes at least one of: biological information of the user acquired on site; voice interaction input of the user acquired on site; and user information obtained through networking based on user identity information.
For example, promotion content for the matching age bracket may be selected from a database based on recognition of the user's gender and age (e.g., by face or voice recognition). As another example, an advertisement may be matched to content information obtained from the user's voice interaction, e.g., "I am tired now". In addition, user identity information may be acquired through auxiliary means such as face recognition or two-dimensional-code scanning, so that a user ID is matched and the user information stored under that ID is associated; promotion content and voice interaction forms can then be chosen based on the user's preferences, historical operations, or shopping habits.
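The prioritization among these on-site signals can be sketched as follows; the signal fields and fallback order are assumptions for illustration:

```python
def select_screen_content(on_site: dict, profiles: dict) -> str:
    """Choose promotion content for a commercial screen from on-site user signals."""
    # 1) Identity match (e.g., via QR-code scan) -> fully personalized content.
    user_id = on_site.get("user_id")
    if user_id in profiles:
        return profiles[user_id]
    # 2) Content of the live voice interaction.
    if "tired" in on_site.get("utterance", ""):
        return "relaxation / wellness advertisement"
    # 3) Demographic estimate from face or voice recognition.
    if on_site.get("estimated_age", 0) and on_site["estimated_age"] < 30:
        return "youth-oriented advertisement"
    # 4) Fallback: user-independent content, e.g., by time slot.
    return "default time-slot advertisement"

print(select_screen_content({"utterance": "I am tired now"}, {}))
```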
To enhance understanding of the present invention, an example of a terminal interaction scene according to the invention is described below with reference to figs. 6A-6B.
For example, an "experience the smart home" link may be pushed to the user in a smart speaker APP installed on a smartphone, or at a prominent position in another APP; the user then enters the smart home virtual scene page by clicking the link, as shown in fig. 6A. The smart advertisement may be presented as a Native page, an H5 page, an applet, a mini-game, and so on; the present invention is not limited in this respect.
Fig. 6A shows a virtual smart home scene. A smart speaker 1 is placed on the tea table in the living room, which also contains a smart television 2, a sweeping robot 3, a smart curtain 4, and a smart lamp 5 as internet of things devices. In a particular implementation, these devices may indicate their operability to the user by blinking.
Specifically, the smart speaker 1 may notify the user, by blinking, by a text prompt box or voice prompt next to it, or even via a talking virtual avatar: "Welcome to the virtual home. Try waking up XXX (the smart speaker's name and wake-up word)". The user may then wake the smart speaker 1 by saying "Hello, XXX".
The smart speaker 1 may answer, e.g., "Good evening, master. What can I do for you?", and the user may reply "Open the curtain". The terminal then uploads the voice input to the server and obtains its response instruction, displays an animation of the curtain being drawn open, and has the smart speaker 1 answer "The curtain is open".
Similarly, the user may interact further with the smart speaker 1, for example turning on the lamp 5, the television 2, and the sweeping robot 3, arriving at the scene with the interactive objects' functions triggered as shown in fig. 6B. The user can then also manipulate the interactive objects with closing or other instructions. In this way, the user clearly experiences the voice operation effect of the smart home.
The user can jump to an interactive object's purchase page, or put it into a shopping cart, by clicking the object or by voice input in the virtual scene. A list of purchase links covering the above devices may also be displayed after the user exits the page, facilitating subsequent purchases.
With the invention, the user can interact with an advertisement by voice inside the APP. Through this bidirectional communication, the advertisement presents different content information according to the user's voice instructions, becoming intelligent and more humanized. Moreover, since advertising resource positions in an APP are limited, letting the user interact with the advertisement by voice ensures that the content presented is, to the greatest extent, the information the user actually cares about.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the present invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the present invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.