Intelligent digital assistant in a multitasking environment


1. A method for providing digital assistant services, the method comprising:

at a user device having one or more processors and memory:

receiving a voice input for managing a system configuration of the user device;

identifying a current state of the system configuration;

determining a user intent based on the voice input and the current state of the system configuration;

determining whether the user intent indicates a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device;

in accordance with a determination that the user intent indicates an information request related to the system configuration of the user device, providing a spoken response to the information request that includes the current state of the system configuration of the user device; and

in accordance with a determination that the user intent indicates the request to perform the task, instantiating a process to modify the system configuration from the current state to a desired state determined from the voice input.

2. The method of claim 1, further comprising:

instantiating a digital assistant service in response to receiving a predetermined phrase.

3. The method of claim 1, wherein the system configuration of the user device comprises an audio configuration.

4. The method of claim 1, wherein the system configuration of the user device comprises a date and time configuration.

5. The method of claim 1, wherein the system configuration of the user device comprises a speech configuration.

6. The method of claim 1, wherein the system configuration of the user device comprises a display configuration.

7. The method of claim 1, wherein the system configuration of the user device comprises an input device configuration.

8. The method of claim 1, wherein the system configuration of the user device comprises a network configuration.

9. The method of claim 1, wherein the system configuration of the user device comprises a notification configuration.

10. The method of claim 1, wherein the system configuration of the user device comprises a security configuration.

11. The method of claim 1, wherein the system configuration of the user device comprises a backup configuration.

12. The method of claim 1, wherein the system configuration of the user device comprises an application configuration.

13. The method of claim 1, wherein the system configuration of the user device comprises a user interface configuration.

14. The method of claim 1, wherein determining the user intent comprises:

determining an actionable intent; and

determining a parameter associated with the actionable intent.

15. The method of claim 1, wherein the user intent is determined based on contextual information.

16. The method of claim 15, wherein the contextual information comprises at least one of user-specific data, device configuration data, and sensor data.

17. The method of claim 1, wherein determining whether the user intent is indicative of a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device further comprises determining whether the user intent is to change the system configuration.

18. The method of claim 1, wherein the spoken response is a first spoken response, further comprising providing a second spoken response based on a result of performing the task.

19. The method of claim 1, wherein instantiating the process to modify the system configuration comprises providing a user interface that enables the user to perform the task.

20. The method of claim 19, wherein the user interface includes a link that enables the user to perform the task.

21. A computer readable storage medium storing one or more programs configured for execution by one or more processors of a user device, the one or more programs comprising instructions for:

receiving a voice input for managing a system configuration of the user device;

identifying a current state of the system configuration;

determining a user intent based on the voice input and the current state of the system configuration;

determining whether the user intent indicates a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device;

in accordance with a determination that the user intent indicates an information request related to the system configuration of the user device, providing a spoken response to the information request that includes the current state of the system configuration of the user device; and

in accordance with a determination that the user intent indicates the request to perform the task, instantiating a process to modify the system configuration from the current state to a desired state determined from the voice input.

22. The computer readable storage medium of claim 21, the one or more programs further comprising instructions for:

instantiating a digital assistant service in response to receiving a predetermined phrase.

23. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises an audio configuration.

24. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a date and time configuration.

25. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a speech configuration.

26. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a display configuration.

27. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises an input device configuration.

28. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a network configuration.

29. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a notification configuration.

30. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a security configuration.

31. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a backup configuration.

32. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises an application configuration.

33. The computer-readable storage medium of claim 21, wherein the system configuration of the user device comprises a user interface configuration.

34. The computer-readable storage medium of claim 21, wherein determining the user intent comprises:

determining an actionable intent; and

determining a parameter associated with the actionable intent.

35. The computer-readable storage medium of claim 21, wherein the user intent is determined based on contextual information.

36. The computer-readable storage medium of claim 35, wherein the contextual information comprises at least one of user-specific data, device configuration data, and sensor data.

37. The computer-readable storage medium of claim 21, wherein determining whether the user intent indicates a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device further comprises determining whether the user intent is to change the system configuration.

38. The computer-readable storage medium of claim 21, wherein the spoken response is a first spoken response, further comprising providing a second spoken response based on a result of performing the task.

39. The computer-readable storage medium of claim 21, wherein instantiating the process to modify the system configuration includes providing a user interface that enables the user to perform the task.

40. The computer-readable storage medium of claim 39, wherein the user interface includes a link that enables the user to perform the task.

41. An electronic device, comprising:

one or more processors;

a memory; and

one or more programs stored in the memory, the one or more programs including instructions for:

receiving a voice input for managing a system configuration of the user device;

identifying a current state of the system configuration;

determining a user intent based on the voice input and the current state of the system configuration;

determining whether the user intent indicates a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device;

in accordance with a determination that the user intent indicates an information request related to the system configuration of the user device, providing a spoken response to the information request that includes the current state of the system configuration of the user device; and

in accordance with a determination that the user intent indicates the request to perform the task, instantiating a process to modify the system configuration from the current state to a desired state determined from the voice input.

42. The electronic device of claim 41, the one or more programs further comprising instructions for:

instantiating a digital assistant service in response to receiving a predetermined phrase.

43. The electronic device of claim 41, wherein the system configuration of the user device comprises an audio configuration.

44. The electronic device of claim 41, wherein the system configuration of the user device comprises a date and time configuration.

45. The electronic device of claim 41, wherein the system configuration of the user device comprises a speech configuration.

46. The electronic device of claim 41, wherein the system configuration of the user device comprises a display configuration.

47. The electronic device of claim 41, wherein the system configuration of the user device comprises an input device configuration.

48. The electronic device of claim 41, wherein the system configuration of the user device comprises a network configuration.

49. The electronic device of claim 41, wherein the system configuration of the user device comprises a notification configuration.

50. The electronic device of claim 41, wherein the system configuration of the user device comprises a security configuration.

51. The electronic device of claim 41, wherein the system configuration of the user device comprises a backup configuration.

52. The electronic device of claim 41, wherein the system configuration of the user device comprises an application configuration.

53. The electronic device of claim 41, wherein the system configuration of the user device comprises a user interface configuration.

54. The electronic device of claim 41, wherein determining the user intent comprises:

determining an actionable intent; and

determining a parameter associated with the actionable intent.

55. The electronic device of claim 41, wherein the user intent is determined based on contextual information.

56. The electronic device of claim 55, wherein the contextual information includes at least one of user-specific data, device configuration data, and sensor data.

57. The electronic device of claim 41, wherein determining whether the user intent is indicative of a request for information related to the system configuration of the user device or a request to perform a task related to the system configuration of the user device further comprises determining whether the user intent is to change the system configuration.

58. The electronic device of claim 41, wherein the spoken response is a first spoken response, further comprising providing a second spoken response based on a result of performing the task.

59. The electronic device of claim 41, wherein instantiating the process to modify the system configuration includes providing a user interface that enables the user to perform the task.

60. The electronic device of claim 59, wherein the user interface includes a link that enables the user to perform the task.

Background

Digital assistants are becoming increasingly popular. In a desktop or tablet environment, users often perform a number of tasks, including searching for files or information, managing files or folders, playing movies or songs, editing documents, adjusting system configurations, sending emails, and so forth. It is often cumbersome and inconvenient for a user to perform multiple tasks manually in parallel and to switch between tasks frequently. Accordingly, it is desirable for a digital assistant to be able to assist the user in performing some of these tasks in a multitasking environment based on the user's speech input.

Disclosure of Invention

Some prior art techniques for assisting a user in performing tasks in a multitasking environment include, for example, dictation. Often, however, a user may need to manually perform many other tasks in a multitasking environment. As one example, a user may have been working on a presentation on their desktop computer yesterday and may wish to continue working on it. The user typically needs to manually locate the presentation on the desktop computer, open it, and then continue editing it.

As another example, a user may have been booking a flight on their smartphone while away from their desktop computer. When the desktop computer becomes available, the user may wish to continue booking the flight. In the prior art, the user would need to first launch a web browser and then restart the flight booking process on the desktop computer. In other words, the flight booking progress the user previously made on the smartphone does not carry over to the user's desktop computer.

As another example, a user may be editing a document on their desktop computer and wish to change a system configuration, such as changing the brightness level of the screen, turning on a Bluetooth connection, and so forth. In the prior art, the user may need to stop editing the document, find and launch the brightness configuration application, and manually change the settings. In a multitasking environment, some prior art techniques are unable to perform the tasks described in the above examples based on the user's speech input. It would therefore be desirable and advantageous to provide a voice-enabled digital assistant in a multitasking environment.

The present invention provides systems and processes for operating a digital assistant. According to one or more examples, a method includes receiving, at a user device having one or more processors and memory, a first speech input from a user. The method also includes identifying contextual information associated with the user device, and determining a user intent based on the first speech input and the contextual information. The method also includes determining whether the user intends to perform a task using a search process or an object management process. The search process is configured to search for data stored inside or outside the user device, and the object management process is configured to manage objects associated with the user device. The method also includes, in accordance with a determination that the user intent is to perform the task using a search process, performing the task using the search process. The method also includes, in accordance with a determination that the user intent is to perform the task using the object management process, performing the task using the object management process.

According to one or more examples, a method includes receiving, at a user device having one or more processors and memory, a voice input from a user to perform a task. The method also includes identifying contextual information associated with the user device and determining the user intent based on the voice input and the contextual information associated with the user device. The method also includes determining, based on the user intent, whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device. The method also includes, in accordance with a determination that the task is to be performed at the user device and that content for performing the task is located remotely, receiving the content for performing the task. The method also includes, in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is located remotely to the first electronic device, providing the content for performing the task to the first electronic device.
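
The routing decision in the preceding paragraph can be sketched as follows. This is a simplified illustration under stated assumptions: route_task, perform_at, and the store dictionaries are invented stand-ins for the device, content, and intent structures an actual system would use.

```python
def route_task(intent: dict, local_store: dict, remote_store: dict) -> str:
    """Dispatch a task to the user device or to a connected device, moving content as needed."""
    content_id = intent["content_id"]
    if intent["perform_at"] == "user_device":
        # If the content lives remotely, receive it before performing the task locally.
        content = local_store.get(content_id) or remote_store[content_id]
        return f"user device performs the task using: {content}"
    # Task runs at the first electronic device; provide any content it does not already hold.
    if content_id not in remote_store:
        remote_store[content_id] = local_store[content_id]
    return "first electronic device performs the task with the provided content"

if __name__ == "__main__":
    local = {"flight_booking": "partially completed booking"}
    remote = {}
    print(route_task({"perform_at": "first_device", "content_id": "flight_booking"}, local, remote))
    print(route_task({"perform_at": "user_device", "content_id": "flight_booking"}, local, remote))
```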

According to one or more examples, a method includes receiving, at a user device having memory and one or more processors, a voice input from a user to manage one or more system configurations of the user device. The user device is configured to provide multiple user interfaces simultaneously. The method also includes identifying contextual information associated with the user device, and determining the user intent based on the voice input and the contextual information. The method also includes determining whether the user intent indicates a request for information or a request to perform a task. The method also includes, in accordance with a determination that the user intent indicates an information request, providing a spoken response to the information request. The method also includes, in accordance with a determination that the user intent indicates a request to perform the task, instantiating a process associated with the user device to perform the task.
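
A minimal sketch of the information-request versus task-request branch for system configurations follows; handle_config_request and the settings dictionary are hypothetical placeholders for the example, not the described system's actual components.

```python
def handle_config_request(user_intent: dict, config: dict) -> str:
    """Answer a question about a setting or change it, depending on the inferred intent."""
    name = user_intent["setting"]
    if user_intent["kind"] == "information_request":
        # The spoken response includes the current state of the configuration.
        return f"Your {name} is currently set to {config[name]}."
    # Otherwise instantiate a (here, trivial) process that moves to the desired state.
    config[name] = user_intent["desired_state"]
    return f"OK, I changed the {name} to {config[name]}."

if __name__ == "__main__":
    settings = {"screen brightness": "50%"}
    print(handle_config_request({"kind": "information_request", "setting": "screen brightness"}, settings))
    print(handle_config_request({"kind": "task_request", "setting": "screen brightness",
                                 "desired_state": "80%"}, settings))
```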

Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors. Executable instructions for performing these functions are optionally included in a transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.

Drawings

For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings in which like reference numerals indicate corresponding parts throughout the figures.

Fig. 1 is a block diagram illustrating a system and environment for implementing a digital assistant in accordance with various examples.

Fig. 2A is a block diagram illustrating a portable multifunction device implementing a client-side portion of a digital assistant, according to some embodiments.

Fig. 2B is a block diagram illustrating exemplary components for event processing according to various examples.

Fig. 3 illustrates a portable multifunction device implementing a client-side portion of a digital assistant, in accordance with various examples.

Fig. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface, in accordance with various examples.

Fig. 5A illustrates an exemplary user interface for a menu of applications at a portable multifunction device, according to various examples.

Fig. 5B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with various examples.

Fig. 6A illustrates a personal electronic device, according to various examples.

Fig. 6B is a block diagram illustrating a personal electronic device, according to various examples.

Fig. 7A is a block diagram illustrating a digital assistant system or server portion thereof according to various examples.

Fig. 7B illustrates functionality of the digital assistant illustrated in fig. 7A according to various examples.

Fig. 7C illustrates a portion of an ontology according to various examples.

Fig. 8A-8F illustrate functionality for performing tasks using a search process or an object management process by a digital assistant, according to various examples.

Fig. 9A-9H illustrate functionality for performing tasks using a search process by a digital assistant, according to various examples.

Fig. 10A-10B illustrate functionality for performing tasks using an object management process by a digital assistant, according to various examples.

Fig. 11A-11D illustrate functionality for performing tasks using a search process with a digital assistant according to various examples.

Fig. 12A-12D illustrate functionality for performing tasks by a digital assistant using a search process or an object management process, according to various examples.

Fig. 13A-13C illustrate functionality for performing tasks using an object management process by a digital assistant, according to various examples.

Fig. 14A-14D illustrate functionality for performing tasks at a user device using remotely located content via a digital assistant, according to various examples.

Fig. 15A-15D illustrate functionality for performing a task at a first electronic device using remotely located content with a digital assistant, according to various examples.

Fig. 16A-16C illustrate functionality for performing a task at a first electronic device using remotely located content with a digital assistant, according to various examples.

Fig. 17A-17E illustrate functionality for performing tasks at a user device using remotely located content via a digital assistant, according to various examples.

Fig. 18A-18F illustrate functionality for providing system configuration information by a digital assistant in response to a user's request for information, according to various examples.

Fig. 19A-19D illustrate functionality for performing tasks by a digital assistant in response to a user request according to various examples.

Fig. 20A-20G illustrate flow diagrams of exemplary processes for operating a digital assistant, according to various examples.

Fig. 21A-21E illustrate flow diagrams of exemplary processes for operating a digital assistant, according to various examples.

Fig. 22A-22D illustrate flow diagrams of exemplary processes for operating a digital assistant, according to various examples.

Fig. 23 shows a block diagram of an electronic device, according to various examples.

Detailed Description

In the following description of the present disclosure and embodiments, reference is made to the accompanying drawings, in which are shown by way of illustration specific embodiments that may be practiced. It is to be understood that other embodiments and examples may be practiced and that changes may be made without departing from the scope of the present disclosure.

Techniques for providing a digital assistant in a multitasking environment are desirable for various purposes, such as reducing the hassle of searching for objects or information, enabling efficient object management, maintaining continuity between tasks performed at a user device and another electronic device, and reducing the effort of manually adjusting system configurations. Such techniques are advantageous in that they allow a user to operate a digital assistant using speech input in a multitasking environment to perform various tasks, and they alleviate the hassle or inconvenience associated with performing those tasks. Further, by allowing the user to perform tasks using speech, both hands can remain on the keyboard or mouse while a task that would otherwise require a context switch is carried out; in effect, the digital assistant can perform tasks as the user's "third hand". Allowing a user to perform tasks using speech also lets the user complete more efficiently tasks that would otherwise require multiple interactions with multiple applications. For example, searching for images and sending them to someone by email may require opening a search interface, entering search criteria, selecting one or more results, opening an email for composition, copying or moving the resulting images into the open email, filling in an email address, and sending the email. Such a task can be completed more efficiently through a voice command such as "find pictures from date x and send them to my wife". Requests for moving documents, searching for information on the internet, composing messages, and the like may all be accomplished more efficiently using voice, while allowing the user to perform other tasks with both hands.
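
As a purely illustrative sketch of how a single utterance can replace that multi-step manual flow, the following Python stub wires a hypothetical photo search to a hypothetical email composer; the data, contact mapping, and function names are invented for the example and are not part of the described system.

```python
import datetime

# Invented stand-ins; a real assistant would call the platform's search and mail services.
PHOTOS = [
    {"name": "beach.jpg", "date": datetime.date(2016, 6, 1)},
    {"name": "party.jpg", "date": datetime.date(2016, 7, 4)},
]
CONTACTS = {"my wife": "wife@example.com"}

def find_pictures(on_date: datetime.date) -> list:
    return [p["name"] for p in PHOTOS if p["date"] == on_date]

def send_email(to_address: str, attachments: list) -> str:
    return f"email sent to {to_address} with attachments {attachments}"

def handle_command(date: datetime.date, recipient: str) -> str:
    # One spoken request replaces the manual search/select/compose/attach/send sequence.
    return send_email(CONTACTS[recipient], find_pictures(date))

if __name__ == "__main__":
    print(handle_command(datetime.date(2016, 7, 4), "my wife"))
```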

Although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first storage could be termed a second storage, and similarly, a second storage could be termed a first storage, without departing from the scope of the various described examples. The first storage and the second storage may both be storages and, in some cases, may be separate and distinct storages.

The terminology used in the description of the various described examples herein is for the purpose of describing particular examples only and is not intended to be limiting. As used in the description of the various described examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Depending on the context, the term "if" may be interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the stated condition or event] is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.

1. System and environment

Fig. 1 illustrates a block diagram of a system 100 according to various examples. In some examples, system 100 may implement a digital assistant. The terms "digital assistant," "virtual assistant," "intelligent automated assistant," or "automatic digital assistant" may refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent and performs actions based on the inferred user intent. For example, to act on the inferred user intent, the system may perform one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, and the like; and generating an output response to the user in an audible (e.g., speech) and/or visual form.
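
To make the task-flow pipeline above concrete, here is a toy Python sketch; the TASK_FLOWS registry, the stubbed intent parser, and set_brightness_flow are invented for illustration and are not the actual natural-language processing or task-flow modules described later.

```python
from typing import Callable, Dict

# Hypothetical registry mapping an inferred actionable intent to a task flow.
def set_brightness_flow(params: dict) -> str:
    return f"brightness set to {params['level']}"

TASK_FLOWS: Dict[str, Callable[[dict], str]] = {"set_brightness": set_brightness_flow}

def run_assistant(natural_language_input: str) -> str:
    # 1. Infer the user intent and its parameters (stubbed with a trivial parser).
    intent, params = "set_brightness", {"level": natural_language_input.split()[-1]}
    # 2. Identify the task flow designed to accomplish that intent.
    flow = TASK_FLOWS[intent]
    # 3. Execute the task flow; a real system would call programs, services, or APIs here.
    result = flow(params)
    # 4. Generate an output response for the user in audible and/or visual form.
    return f"OK, {result}."

if __name__ == "__main__":
    print(run_assistant("set my screen brightness to 80%"))
```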

In particular, the digital assistant may be capable of accepting user requests at least partially in the form of natural language commands, requests, statements, narratives, and/or inquiries. Typically, a user request may seek an informational answer from the digital assistant or seek to have the digital assistant perform a task. A satisfactory response to a user request may be to provide the requested informational answer, to perform the requested task, or a combination of both. For example, a user may ask the digital assistant a question such as "Where am I right now?" Based on the user's current location, the digital assistant may answer, "You are in Central Park, near the west gate." The user may also request performance of a task, for example, "Please invite my friends to my girlfriend's birthday party next week." In response, the digital assistant can acknowledge the request by saying "OK, right away," and then send an appropriate calendar invitation on behalf of the user to each of the user's friends listed in the user's electronic address book. During performance of a requested task, the digital assistant can sometimes interact with the user in a continuous conversation involving multiple exchanges of information over an extended period of time. There are many other ways to interact with a digital assistant to request information or the performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms, such as text, alerts, music, video, animation, and so forth.

As shown in fig. 1, in some examples, the digital assistant may be implemented according to a client-server model. The digital assistant may include a client-side portion 102 (hereinafter "DA client 102") executing on a user device 104, and a server-side portion 106 (hereinafter "DA server 106") executing on a server system 108. The DA client 102 may communicate with the DA server 106 through one or more networks 110. The DA client 102 may provide client-side functionality, such as user-oriented input and output processing, as well as communicating with the DA server 106. The DA server 106 may provide server-side functionality for any number of DA clients 102, each of the number of DA clients 102 being located on a respective user device 104.

In some examples, DA server 106 may include a client-facing I/O interface 112, one or more processing modules 114, data and models 116, and an I/O interface 118 to external services. The client-facing I/O interface 112 may facilitate client-facing input and output processing for the DA server 106. The one or more processing modules 114 may utilize the data and models 116 to process speech input and determine the user's intent based on natural language input. Further, the one or more processing modules 114 perform task execution based on the inferred user intent. In some examples, DA server 106 may communicate with external services 120 over one or more networks 110 to complete tasks or obtain information. An I/O interface 118 to external services may facilitate such communication.

The user device 104 may be any suitable electronic device. For example, the user device may be a portable multifunction device (e.g., device 200 described below with reference to fig. 2A), a multifunction device (e.g., device 400 described below with reference to fig. 4), or a personal electronic device (e.g., device 600 described below with reference to fig. 6A-6B). A portable multifunction device is, for example, a mobile phone that also contains other functions, such as PDA and/or music player functions. Specific examples of portable multifunction devices can include the iPhone, iPod Touch, and iPad devices from Apple Inc. Other examples of portable multifunction devices may include, but are not limited to, laptop computers or tablet computers. Further, in some examples, user device 104 may be a non-portable multifunction device. In particular, the user device 104 may be a desktop computer, a gaming console, a television, or a television set-top box. In some examples, the user device 104 may operate in a multitasking environment. A multitasking environment allows a user to operate the device 104 to perform multiple tasks in parallel. For example, the multitasking environment may be a desktop or laptop computer environment, in which the device 104 may perform one task in response to user input received from a physical user interface device and simultaneously perform another task in response to a user's voice input. In some examples, user device 104 may include a touch-sensitive surface (e.g., a touch screen display and/or a trackpad). Further, the user device 104 may optionally include one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick. Various examples of electronic devices, such as multifunction devices, are described in more detail below.

Examples of one or more communication networks 110 may include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet. The one or more communication networks 110 may be implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), FireWire, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi, Voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.

The server system 108 may be implemented on one or more standalone data processing devices or a distributed network of computers. In some examples, the server system 108 may also employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of the server system 108.

In some examples, user device 104 may communicate with DA server 106 via second user device 122. The second user device 122 may be similar to or the same as the user device 104. For example, the second user device 122 may be similar to the devices 200, 400, or 600 described below with reference to fig. 2A, 4, and 6A-6B. The user device 104 may be configured to communicatively couple to the second user device 122 via a direct communication connection such as Bluetooth, NFC, BTLE, etc., or via a wired or wireless network such as a local Wi-Fi network. In some examples, second user device 122 may be configured to act as a proxy between user device 104 and DA server 106. For example, DA client 102 of user device 104 may be configured to transmit information (e.g., a user request received at user device 104) to DA server 106 via second user device 122. DA server 106 may process the information and return relevant data (e.g., data content in response to the user request) to user device 104 via second user device 122.

In some examples, the user device 104 may be configured to communicate an abbreviated request for data to the second user device 122 to reduce the amount of information transmitted from the user device 104. Second user device 122 may be configured to determine supplemental information to add to the abbreviated request to generate a complete request for transmission to DA server 106. The system architecture may advantageously allow a user device 104 with limited communication capabilities and/or limited battery power (e.g., a watch or similar compact electronic device) to access services provided by the DA server 106 by using a second user device 122 with stronger communication capabilities and/or battery power (e.g., a mobile phone, laptop, tablet, etc.) as a proxy to the DA server 106. Although only two user devices 104 and 122 are shown in fig. 1, it should be understood that system 100 may include any number and type of user devices configured to communicate with DA server system 106 in this proxy configuration.
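
The abbreviated-request proxying can be illustrated with the following sketch; the function names and the supplemental fields (e.g., time zone, device model) are assumptions chosen for the example, not a specification of the actual protocol.

```python
def build_abbreviated_request(user_request: str) -> dict:
    """The low-power device sends only what the proxy cannot supply on its own."""
    return {"utterance": user_request}

def proxy_complete_request(abbreviated: dict, supplemental: dict) -> dict:
    """The second user device adds supplemental information before forwarding to the DA server."""
    complete = dict(abbreviated)
    complete.update(supplemental)  # e.g., location, device identifiers, account context
    return complete

if __name__ == "__main__":
    short = build_abbreviated_request("what's on my calendar today")
    full = proxy_complete_request(short, {"time_zone": "PST", "device_model": "watch"})
    print(full)  # the complete request is forwarded to the DA server; the reply is relayed back
```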

While the digital assistant shown in fig. 1 may include both a client-side portion (e.g., DA client 102) and a server-side portion (e.g., DA server 106), in some examples, the functionality of the digital assistant may be implemented as a standalone application installed at a user device. Moreover, the division of functionality between the client portion and the server portion of the digital assistant may vary in different implementations. For example, in some examples, the DA client may be a thin client that provides only user-oriented input and output processing functions and delegates all other functions of the digital assistant to a backend server.

2. Electronic device

Attention is now directed to embodiments of an electronic device for implementing a client-side portion of a digital assistant. FIG. 2A is a block diagram illustrating a portable multifunction device 200 with a touch-sensitive display system 212 in accordance with some embodiments. The touch sensitive display 212 is sometimes referred to as a "touch screen" for convenience, and is sometimes referred to or called a "touch sensitive display system". Device 200 includes memory 202 (which optionally includes one or more computer-readable storage media), memory controller 222, one or more processing units (CPUs) 220, peripherals interface 218, RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, input/output (I/O) subsystem 206, other input control devices 216, and external ports 224. The device 200 optionally includes one or more optical sensors 264. Device 200 optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on device 200 (e.g., a touch-sensitive surface, such as touch-sensitive display system 212 of device 200). Device 200 optionally includes one or more tactile output generators 267 for generating tactile outputs on device 200 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 212 of device 200 or trackpad 455 of device 400). These components optionally communicate over one or more communication buses or signal lines 203.

As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated contact force. Similarly, the pressure-sensitive tip of a stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measure of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measure). In some implementations, the substitute measurement of contact force or pressure is converted into an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of the contact as an attribute of the user input allows the user to access additional device functionality that the user may not otherwise have access to on a smaller-sized device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls, such as knobs or buttons).
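
As a simple numerical illustration of combining several force-sensor readings into one intensity estimate and comparing it against a threshold (one of the approaches mentioned above), consider the following sketch; the readings, weights, and threshold are made-up values.

```python
def estimated_contact_force(readings, weights=None):
    """Combine several force-sensor readings (here, a weighted average) into one estimate."""
    if weights is None:
        weights = [1.0] * len(readings)
    return sum(r * w for r, w in zip(readings, weights)) / sum(weights)

def exceeds_intensity_threshold(readings, threshold, weights=None) -> bool:
    """Compare the surrogate intensity measure against a threshold in the same units."""
    return estimated_contact_force(readings, weights) >= threshold

if __name__ == "__main__":
    readings = [0.2, 0.6, 0.4]  # made-up per-sensor force values
    print(exceeds_intensity_threshold(readings, threshold=0.5, weights=[0.2, 0.5, 0.3]))
```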

As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where the device or a component of the device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or a component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or a trackpad) is optionally interpreted by a user as a "down click" or "up click" on a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when a physical actuation button associated with the touch-sensitive surface, which would be physically pressed (e.g., displaced) by the user's movements, does not move. As another example, movement of the touch-sensitive surface may optionally be interpreted or sensed by the user as "roughness" of the touch-sensitive surface even when there is no change in the smoothness of the touch-sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, sensory perception of many touches is common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "up click," "down click," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the described sensory perception of a typical (or ordinary) user.

It should be understood that device 200 is only one example of a portable multifunction device, and that device 200 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 2A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing circuits and/or application specific integrated circuits.

Memory 202 may include one or more computer-readable storage media. The computer-readable storage medium may be tangible and non-transitory. The memory 202 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 222 may control other components of device 200 to access memory 202.

In some examples, the non-transitory computer-readable storage medium of memory 202 may be used to store instructions (e.g., for performing aspects of process 1200 described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing aspects of the process 1200 described below) may be stored on a non-transitory computer-readable storage medium (not shown) of the server system 108 or may be divided between the non-transitory computer-readable storage medium of the memory 202 and the non-transitory computer-readable storage medium of the server system 108. In the context of this document, a "non-transitory computer-readable storage medium" can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device.

Peripheral interface 218 may be used to couple the input and output peripherals of the device to CPU 220 and memory 202. The one or more processors 220 run or execute various software programs and/or sets of instructions stored in the memory 202 to perform various functions of the device 200 and to process data. In some embodiments, peripherals interface 218, CPU 220, and memory controller 222 may be implemented on a single chip, such as chip 204. In some other embodiments, they may be implemented on separate chips.

RF (radio frequency) circuitry 208 receives and transmits RF signals, also known as electromagnetic signals. The RF circuitry 208 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 208 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 208 optionally communicates via wireless communication with networks, such as the internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks, such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), as well as with other devices. The RF circuitry 208 optionally includes well-known circuitry for detecting Near Field Communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol with extensions for Instant Messaging and Presence (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.

Audio circuitry 210, speaker 211, and microphone 213 provide an audio interface between a user and device 200. The audio circuit 210 receives audio data from the peripheral interface 218, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker 211. The speaker 211 converts the electrical signal into a sound wave audible to a human. The audio circuit 210 also receives electrical signals converted from sound waves by the microphone 213. The audio circuit 210 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 218 for processing. The audio data may be retrieved from and/or transmitted to the memory 202 and/or the RF circuitry 208 by the peripheral interface 218. In some embodiments, the audio circuit 210 also includes a headset jack (e.g., 312 in fig. 3). The headset jack provides an interface between the audio circuitry 210 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).

The I/O subsystem 206 couples input/output peripheral devices on the device 200, such as the touch screen 212 and other input control devices 216, to a peripheral interface 218. The I/O subsystem 206 optionally includes a display controller 256, an optical sensor controller 258, an intensity sensor controller 259, a haptic feedback controller 261, and one or more input controllers 260 for other input or control devices. The one or more input controllers 260 receive/transmit electrical signals from/to other input control devices 216. Other input control devices 216 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, input controller 260 is optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 308 in fig. 3) optionally include an up/down button for volume control of the speaker 211 and/or microphone 213. The one or more buttons optionally include a push button (e.g., 306 in fig. 3).

A quick press of the push button may disengage the lock of touch screen 212 or begin a process of unlocking the device using gestures on the touch screen, as described in U.S. patent application 11/322,549, entitled "Unlocking a Device by Performing Gestures on an Unlock Image," filed December 23, 2005, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 306) may turn the device 200 on or off. The user can customize the functionality of one or more of the buttons. The touch screen 212 is used to implement virtual or soft buttons and one or more soft keyboards.
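
The press-duration behavior described above can be sketched as follows; the one-second threshold and the function name are illustrative assumptions, not values taken from the described device.

```python
def handle_push_button(press_duration_s: float, screen_locked: bool,
                       long_press_threshold_s: float = 1.0) -> str:
    """Dispatch on press duration: a short press relates to the lock, a long press to power."""
    if press_duration_s >= long_press_threshold_s:
        return "toggle device power"
    return "begin unlock process" if screen_locked else "no action (screen already unlocked)"

if __name__ == "__main__":
    print(handle_push_button(0.2, screen_locked=True))   # quick press
    print(handle_push_button(1.5, screen_locked=True))   # long press
```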

The touch sensitive display 212 provides an input interface and an output interface between the device and the user. The display controller 256 receives electrical signals from the touch screen 212 and/or transmits electrical signals to the touch screen 212. Touch screen 212 displays visual output to a user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some implementations, some or all of the visual output may correspond to a user interface object.

Touch screen 212 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile and/or haptic contact. Touch screen 212 and display controller 256 (along with any associated modules and/or sets of instructions in memory 202) detect contact (and any movement or breaking of the contact) on touch screen 212 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 212. In an exemplary embodiment, the point of contact between the touch screen 212 and the user corresponds to a finger of the user.

The touch screen 212 may use LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 212 and display controller 256 may detect contact and any movement or interruption thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 212. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone and iPod Touch from Apple Inc. (Cupertino, California).

The touch sensitive display in some embodiments of the touch screen 212 may be similar to the multi-touch sensitive trackpad described in the following U.S. patents: 6,323,846(Westerman et al), 6,570,557(Westerman et al) and/or 6,677,932 (Westerman); and/or U.S. patent publication 2002/0015024a1, each of which is hereby incorporated by reference in its entirety. However, touch screen 212 displays visual output from device 200, while touch sensitive trackpads do not provide visual output.

Touch-sensitive displays in some embodiments of touch screen 212 may be as described in the following applications: (1) U.S. patent application No.11/381,313, entitled "Multipoint Touch Surface Controller," filed May 2, 2006; (2) U.S. patent application No.10/840,862, entitled "Multipoint Touchscreen," filed May 6, 2004; (3) U.S. patent application No.10/903,964, entitled "Gestures For Touch Sensitive Input Devices," filed July 30, 2004; (4) U.S. patent application No.11/048,264, entitled "Gestures For Touch Sensitive Input Devices," filed January 31, 2005; (5) U.S. patent application No.11/038,590, entitled "Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices," filed January 18, 2005; (6) U.S. patent application No.11/228,758, entitled "Virtual Input Device Placement On A Touch Screen User Interface," filed September 16, 2005; (7) U.S. patent application No.11/228,700, entitled "Operation Of A Computer With A Touch Screen Interface," filed September 16, 2005; (8) U.S. patent application No.11/228,737, entitled "Activating Virtual Keys Of A Touch-Screen Virtual Keyboard," filed September 16, 2005; and (9) U.S. patent application No.11/367,749, entitled "Multi-Functional Hand-Held Device," filed March 3, 2006. All of these patent applications are incorporated herein by reference in their entirety.

The touch screen 212 may have a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user may make contact with touch screen 212 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.

In some embodiments, in addition to a touch screen, device 200 may include a touch pad (not shown) for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The trackpad may be a touch-sensitive surface separate from the touch screen 212 or an extension of the touch-sensitive surface formed by the touch screen.

The device 200 also includes a power system 262 for powering the various components. The power system 262 may include a power management system, one or more power sources (e.g., batteries or Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light emitting diode), and any other components associated with the generation, management, and distribution of power in a portable device.

The device 200 may also include one or more optical sensors 264. Fig. 2A shows an optical sensor coupled to an optical sensor controller 258 in the I/O subsystem 206. The optical sensor 264 may include a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 264 receives light projected through one or more lenses from the environment and converts the light into data representing an image. In conjunction with the imaging module 243 (also referred to as a camera module), the optical sensor 264 may capture still images or video. In some embodiments, an optical sensor is located on the back of device 200 opposite touch screen display 212 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image acquisition. In some embodiments, the optical sensor is located in the front of the device so that images of the user may be obtained for the video conference while the user views other video conference participants on the touch screen display. In some implementations, the position of the optical sensor 264 can be changed by the user (e.g., by rotating a lens and sensor in the device housing) so that a single optical sensor 264 can be used with a touch screen display for both video conferencing and still image and/or video image capture.

Device 200 optionally further comprises one or more contact intensity sensors 265. FIG. 2A shows a contact intensity sensor coupled to intensity sensor controller 259 in I/O subsystem 206. Contact intensity sensor 265 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 265 receives contact intensity information (e.g., pressure information or a surrogate for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is juxtaposed or adjacent to the touch-sensitive surface (e.g., touch-sensitive display system 212). In some embodiments, at least one contact intensity sensor is located on the back of device 200, opposite touch screen display 212, which is located on the front of device 200.

The device 200 may also include one or more proximity sensors 266. Fig. 2A shows a proximity sensor 266 coupled to the peripheral interface 218. Alternatively, the proximity sensor 266 may be coupled to the input controller 260 in the I/O subsystem 206. The proximity sensor 266 may be implemented as described in the following U.S. patent applications: No.11/241,839, entitled "Proximity Detector In Handheld Device"; No.11/240,788, entitled "Proximity Detector In Handheld Device"; No.11/620,702, entitled "Using Ambient Light Sensors To Augment Proximity Sensor Output"; No.11/586,862, entitled "Automated Response To And Sensing Of User Activity In Portable Devices"; and No.11/638,251, entitled "Methods And Systems For Automatic Configuration Of Peripherals", which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables the touch screen 212 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).

Device 200 optionally further comprises one or more tactile output generators 267. Fig. 2A shows a tactile output generator coupled to a tactile feedback controller 261 in the I/O subsystem 206. Tactile output generator 267 optionally includes one or more electro-acoustic devices such as speakers or other audio components; and/or an electromechanical device that converts energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component for converting an electrical signal into a tactile output on the device). Tactile output generator 267 receives haptic feedback generation instructions from haptic feedback module 233 and generates haptic output on device 200 that can be felt by a user of device 200. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 212), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 200) or laterally (e.g., back and forth in the same plane as the surface of device 200). In some embodiments, at least one tactile output generator sensor is located on the back of device 200, opposite touch screen display 212, which is located on the front of device 200.

Device 200 may also include one or more accelerometers 268. Fig. 2A shows accelerometer 268 coupled to peripheral interface 218. Alternatively, accelerometer 268 may be coupled to input controller 260 in I/O subsystem 206. The accelerometer 268 can be implemented as described in U.S. patent publication No.20050190059 entitled "Acceleration-Based Theft Detection System For Portable Electronic Devices" and U.S. patent publication No.20060017692 entitled "Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer", both of which are incorporated by reference herein in their entirety. In some embodiments, the information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from the one or more accelerometers. The device 200 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to the accelerometer 268 for obtaining information about the position and orientation (e.g., portrait or landscape) of the device 200.
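
As a hedged illustration of the portrait/landscape decision described above, the sketch below compares the gravity component along two screen axes from a single accelerometer reading. The axis convention and the threshold-free comparison are assumptions for the example, not the device's actual algorithm.

```swift
import Foundation

enum InterfaceOrientation { case portrait, landscape }

/// Chooses portrait or landscape presentation from a single accelerometer
/// reading by comparing the gravity component along each screen axis.
func orientation(ax: Double, ay: Double) -> InterfaceOrientation {
    // When the device is held upright, gravity acts mostly along the y axis;
    // when it is turned on its side, gravity acts mostly along the x axis.
    return abs(ay) >= abs(ax) ? .portrait : .landscape
}

print(orientation(ax: 0.1, ay: -0.98)) // portrait
print(orientation(ax: 0.95, ay: 0.05)) // landscape
```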

In some embodiments, the software components stored in memory 202 include an operating system 226, a communication module (or set of instructions) 228, a contact/motion module (or set of instructions) 230, a graphics module (or set of instructions) 232, a text input module (or set of instructions) 234, a Global Positioning System (GPS) module (or set of instructions) 235, a digital assistant client module 229, and an application (or set of instructions) 236. Further, memory 202 may store data and models, such as user data and models 231. Further, in some embodiments, memory 202 (fig. 2A) or 470 (fig. 4) stores device/global internal state 257, as shown in fig. 2A and 4. Device/global internal state 257 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various areas of the touch screen display 212; sensor status, including information obtained from the various sensors of the device and the input control device 216; and location information regarding the location and/or pose of the device.

The operating system 226 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.

The communication module 228 facilitates communication with other devices through one or more external ports 224 and also includes various software components for processing data received by the RF circuitry 208 and/or the external ports 224. External port 224 (e.g., Universal Serial Bus (USB), firewire, etc.) is adapted to couple directly to other devices or indirectly through a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod (trademark of Apple Inc.) devices.

The contact/motion module 230 optionally detects contact with the touch screen 212 (in conjunction with the display controller 256) and other touch sensitive devices (e.g., a trackpad or physical click wheel). The contact/motion module 230 includes various software components for performing various operations related to the detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining the intensity of contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 230 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single point contacts (e.g., single finger contacts) or multiple point simultaneous contacts (e.g., "multi-touch"/multiple finger contacts). In some embodiments, the contact/motion module 230 and the display controller 256 detect contact on a touch pad.

In some embodiments, the contact/motion module 230 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 200). For example, the mouse "click" threshold of the trackpad or touch screen display may be set to any one of a wide range of predefined thresholds without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds of a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
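
The following sketch illustrates, in hypothetical form, what software-determined intensity thresholds could look like: the thresholds are plain data that can be adjusted at run time without any change to the sensing hardware. The numeric scale and names are assumptions for the example.

```swift
import Foundation

/// Hypothetical software-adjustable intensity thresholds; the numeric scale
/// is arbitrary and not taken from this disclosure.
struct IntensityThresholds {
    var lightPress: Double = 0.3
    var deepPress: Double = 0.7
}

enum PressKind { case none, lightPress, deepPress }

/// Classifies a contact intensity against the current thresholds. Because the
/// thresholds are plain data, they can be changed at run time (for example
/// from a settings panel) without touching the sensing hardware.
func classify(intensity: Double, thresholds: IntensityThresholds) -> PressKind {
    if intensity >= thresholds.deepPress { return .deepPress }
    if intensity >= thresholds.lightPress { return .lightPress }
    return .none
}

var thresholds = IntensityThresholds()
print(classify(intensity: 0.5, thresholds: thresholds)) // lightPress
thresholds.lightPress = 0.6                             // user raises the "click" threshold
print(classify(intensity: 0.5, thresholds: thresholds)) // none
```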

The contact/motion module 230 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at the location of the icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
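
A minimal, hypothetical sketch of pattern-based gesture detection along the lines described above is shown below: a finger-down followed by a finger-up at substantially the same location is treated as a tap, while intervening drag events that exceed a slop radius make the sequence a swipe. The event names and the 10-point slop value are assumptions for illustration.

```swift
import Foundation

// Simplified sub-events, in the spirit of the finger-down / finger-drag /
// finger-up events described above (names are illustrative).
enum SubEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

enum Gesture { case tap, swipe, none }

// Classifies a finished contact sequence: down followed by up at roughly the
// same location is a tap; drag events that move beyond a slop radius make it
// a swipe.
func classify(_ events: [SubEvent], slop: Double = 10) -> Gesture {
    guard case let .fingerDown(x0, y0)? = events.first,
          case let .fingerUp(x1, y1)? = events.last else { return .none }
    let moved = events.contains { event in
        if case let .fingerDrag(x, y) = event {
            return hypot(x - x0, y - y0) > slop
        }
        return false
    }
    if moved { return .swipe }
    return hypot(x1 - x0, y1 - y0) <= slop ? .tap : .none
}

print(classify([.fingerDown(x: 5, y: 5), .fingerUp(x: 6, y: 5)]))   // tap
print(classify([.fingerDown(x: 5, y: 5),
                .fingerDrag(x: 90, y: 5),
                .fingerUp(x: 120, y: 5)]))                          // swipe
```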

Graphics module 232 includes various known software components for rendering and displaying graphics on touch screen 212 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual attributes) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.

In some embodiments, graphics module 232 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. Graphics module 232 receives, from applications or the like, one or more codes specifying graphics to be displayed, together with coordinate data and other graphic attribute data if necessary, and then generates screen image data to output to the display controller 256.
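
By way of a hedged example only, the sketch below models a graphics module that receives graphic codes plus attribute data and emits flat draw commands for a display controller; the code table and command strings are invented for the illustration and do not reflect an actual implementation.

```swift
import Foundation

/// Illustrative request for a stored graphic, keyed by a code, roughly in the
/// spirit of the description above; the codes and attributes are made up.
struct GraphicRequest { let code: Int; let x: Double; let y: Double; let alpha: Double }

let graphicTable: [Int: String] = [
    1: "battery icon",
    2: "signal bars",
    3: "soft key",
]

/// Turns a list of (code, attributes) requests from an application into a
/// flat list of draw commands that a display controller could consume.
func screenCommands(for requests: [GraphicRequest]) -> [String] {
    requests.compactMap { request -> String? in
        guard let name = graphicTable[request.code] else { return nil }
        return "draw \(name) at (\(request.x), \(request.y)) alpha \(request.alpha)"
    }
}

print(screenCommands(for: [GraphicRequest(code: 2, x: 10, y: 4, alpha: 1.0),
                           GraphicRequest(code: 1, x: 290, y: 4, alpha: 0.5)]))
```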

The haptic feedback module 233 includes various software components for generating instructions for use by one or more haptic output generators 267 to produce haptic outputs at one or more locations on the device 200 in response to user interaction with the device 200.

Text input module 234, which may be a component of graphics module 232, provides a soft keyboard for entering text in various applications (e.g., contacts 237, email 240, IM 241, browser 247, and any other application that requires text input).

The GPS module 235 determines the location of the device and provides this information for use in various applications (e.g., to the phone 238 for use in location-based dialing; to the camera 243 as picture/video metadata; and to applications that provide location-based services, such as weather desktop applets, local yellow pages desktop applets, and map/navigation desktop applets).

The digital assistant client module 229 may include various client-side digital assistant instructions to provide client-side functionality of the digital assistant. For example, the digital assistant client module 229 may be capable of accepting sound input (e.g., voice input), text input, touch input, and/or gesture input through various user interfaces of the portable multifunction device 200 (e.g., the microphone 213, the accelerometer 268, the touch-sensitive display system 212, the optical sensor 264, the other input control device 216, etc.). The digital assistant client module 229 may also be capable of providing audio-form output (e.g., speech output), visual-form output, and/or tactile-form output through various output interfaces of the portable multifunction device 200 (e.g., the speaker 211, the touch-sensitive display system 212, the one or more tactile output generators 267, etc.). For example, the output may be provided as voice, sound, prompts, text messages, menus, graphics, video, animations, vibrations, and/or combinations of two or more of the foregoing. During operation, digital assistant client module 229 may use RF circuitry 208 to communicate with DA server 106.

The user data and models 231 may include various data associated with the user (e.g., user-specific vocabulary data, user preference data, user-specified name pronunciations, data from the user's electronic address book, to-do lists, shopping lists, etc.) to provide client-side functionality of the digital assistant. Further, the user data and models 231 may include various models (e.g., speech recognition models, statistical language models, natural language processing models, ontologies, task flow models, service models, etc.) for processing user input and determining user intent.

In some examples, the digital assistant client module 229 may utilize various sensors, subsystems, and peripherals of the portable multifunction device 200 to gather additional information from the surroundings of the portable multifunction device 200 to establish a context associated with the user, the current user interaction, and/or the current user input. In some examples, digital assistant client module 229 may provide the context information, or a subset thereof, to DA server 106 along with the user input to help infer the user intent. In some examples, the digital assistant may also use the context information to determine how to prepare and deliver the output to the user. The context information may be referred to as context data.

In some examples, contextual information accompanying the user input may include sensor information, such as lighting, ambient noise, ambient temperature, images or video of the surrounding environment, and so forth. In some examples, the context information may also include physical states of the device, such as device orientation, device location, device temperature, power level, velocity, acceleration, motion pattern, cellular signal strength, and the like. In some examples, information related to the software state of DA server 106 (e.g., running process, installed programs, past and current network activities, background services, error logs, resource usage, etc.) and information related to the software state of portable multifunction device 200 can be provided to DA server 106 as contextual information associated with user input.
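
The sketch below shows, under stated assumptions, one possible shape for such context data: a small record combining sensor-derived values, physical device state, and software state, serialized for transmission alongside the user input. The field names are hypothetical and are not the actual protocol used between the device and DA server 106.

```swift
import Foundation

/// A hedged sketch of the kind of context record that could accompany a user
/// request; the field names are illustrative, not the actual protocol.
struct ContextInfo: Codable {
    // Sensor-derived context
    var ambientNoiseLevel: Double?
    var ambientTemperature: Double?
    // Physical device state
    var orientation: String?
    var batteryLevel: Double?
    var cellularSignalStrength: Int?
    // Software state
    var foregroundApp: String?
    var runningProcessCount: Int?
}

// Example: gather whatever is available, encode it, and send it along with
// the user input (the transport itself is out of scope here).
let context = ContextInfo(ambientNoiseLevel: 42.0,
                          ambientTemperature: 21.5,
                          orientation: "portrait",
                          batteryLevel: 0.8,
                          cellularSignalStrength: 3,
                          foregroundApp: "mail",
                          runningProcessCount: 17)
do {
    let payload = try JSONEncoder().encode(context)
    print(String(data: payload, encoding: .utf8)!)
} catch {
    print("encoding failed: \(error)")
}
```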

In some examples, the digital assistant client module 229 may selectively provide information (e.g., user data 231) stored on the portable multifunction device 200 in response to a request from the DA server 106. In some examples, the digital assistant client module 229 may also elicit additional input from the user via a natural language dialog or other user interface upon request by the DA server 106. The digital assistant client module 229 may transmit this additional input to the DA server 106 to assist the DA server 106 in intent inference and/or to satisfy the user intent expressed in the user request.

The digital assistant is described in more detail below with reference to fig. 7A-7C. It should be appreciated that the digital assistant client module 229 may include any number of sub-modules of the digital assistant module 726 described below.

The applications 236 may include the following modules (or sets of instructions), or a subset or superset thereof:

a contacts module 237 (sometimes referred to as an address book or contact list);

a phone module 238;

a video conferencing module 239;

an email client module 240;

an Instant Messaging (IM) module 241;

fitness support module 242;

a camera module 243 for still and/or video images;

an image management module 244;

a video player module;

a music player module;

a browser module 247;

a calendar module 248;

desktop applet modules 249 that may include one or more of the following: a weather desktop applet 249-1, a stock market desktop applet 249-2, a calculator desktop applet 249-3, an alarm desktop applet 249-4, a dictionary desktop applet 249-5, and other desktop applets acquired by the user, as well as a user-created desktop applet 249-6;

a desktop applet creator module 250 for forming a user-created desktop applet 249-6;

a search module 251;

a video and music player module 252 that incorporates a video player module and a music player module;

a notepad module 253;

a map module 254; and/or

an online video module 255.

Examples of other applications 236 that may be stored in memory 202 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.

In conjunction with the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the contacts module 237 may be used to manage an address book or contact list (e.g., stored in the application internal state 292 of the contacts module 237 in the memory 202 or the memory 470), including: adding a name to an address book; deleting names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; categorizing and sorting names; providing a telephone number or email address to initiate and/or facilitate communication through telephone 238, video conference module 239, email 240, or IM 241; and so on.

In conjunction with the RF circuitry 208, audio circuitry 210, speaker 211, microphone 213, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, the phone module 238 may be used to enter a sequence of characters corresponding to a phone number, access one or more phone numbers in the contacts module 237, modify an entered phone number, dial a corresponding phone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As described above, wireless communication may use any of a number of communication standards, protocols, and technologies.

In conjunction with the RF circuitry 208, the audio circuitry 210, the speaker 211, the microphone 213, the touch screen 212, the display controller 256, the optical sensor 264, the optical sensor controller 258, the contact/motion module 230, the graphics module 232, the text input module 234, the contacts module 237, and the phone module 238, the video conference module 239 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.

In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, email client module 240 includes executable instructions to create, send, receive, and manage emails in response to user instructions. In conjunction with the image management module 244, the e-mail client module 240 makes it very easy to create and send an e-mail having a still image or a video image photographed by the camera module 243.

In conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the instant message module 241 includes executable instructions for: the method includes inputting a sequence of characters corresponding to an instant message, modifying previously input characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for a phone-based instant message, or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving the instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages may include graphics, photos, audio files, video files, and/or other attachments supported in MMS and/or Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).

In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, map module 254, and music player module, fitness support module 242 includes executable instructions for: creating workouts (e.g., with time, distance, and/or calorie-burning goals); communicating with fitness sensors (sports equipment); receiving fitness sensor data; calibrating sensors used to monitor fitness; selecting and playing music for a workout; and displaying, storing, and transmitting fitness data.

In conjunction with the touch screen 212, the display controller 256, the one or more optical sensors 264, the optical sensor controller 258, the contact/motion module 230, the graphics module 232, and the image management module 244, the camera module 243 includes executable instructions for: capturing still images or video (including video streams) and storing them in the memory 202, modifying features of the still images or video, or deleting the still images or video from the memory 202.

In conjunction with the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, and the camera module 243, the image management module 244 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide show or album), and storing still and/or video images.

In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, browser module 247 includes executable instructions to browse the internet (including searching for, linking to, receiving and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.

In conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, the email client module 240, and the browser module 247, the calendar module 248 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.

In conjunction with the RF circuitry 208, the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, the text input module 234, and the browser module 247, the desktop applet module 249 is a mini-application (e.g., a weather desktop applet 249-1, a stock desktop applet 249-2, a calculator desktop applet 249-3, an alarm desktop applet 249-4, and a dictionary desktop applet 249-5) or a mini-application created by a user (e.g., a user-created desktop applet 249-6) that may be downloaded and used by the user. In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).

In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, and browser module 247, the desktop applet creator module 250 may be used by a user to create a desktop applet (e.g., to turn a user-specified portion of a web page into a desktop applet).

In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, and text input module 234, search module 251 includes executable instructions to search memory 202 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) based on user instructions.

In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuitry 210, speakers 211, RF circuitry 208, and browser module 247, video and music player module 252 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, presenting, or otherwise playing back video (e.g., on touch screen 212 or on an external display connected via external port 224). In some embodiments, the device 200 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).

In conjunction with the touch screen 212, the display controller 256, the contact/motion module 230, the graphics module 232, and the text input module 234, the notepad module 253 includes executable instructions to create and manage notepads, to-do lists, and the like according to user instructions.

In conjunction with RF circuitry 208, touch screen 212, display controller 256, contact/motion module 230, graphics module 232, text input module 234, GPS module 235, and browser module 247, map module 254 may be used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data related to stores and other points of interest at or near a particular location, and other location-based data) according to user instructions.

In conjunction with touch screen 212, display controller 256, contact/motion module 230, graphics module 232, audio circuit 210, speaker 211, RF circuit 208, text input module 234, email client module 240, and browser module 247, online video module 255 includes instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on the touch screen or on an external display connected via external port 224), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to a particular online video is sent using instant messaging module 241 instead of email client module 240. Additional descriptions of online video applications may be found in U.S. provisional patent application No.60/936,562 entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed June 20, 2007, and U.S. patent application No.11/968,067 entitled "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed December 31, 2007, both of which are hereby incorporated by reference in their entirety.

Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this patent application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. For example, a video player module may be combined with a music player module into a single module (e.g., video and music player module 252 in fig. 2A). In some embodiments, memory 202 may store a subset of the modules and data structures described above. Further, memory 202 may store additional modules and data structures not described above.

In some embodiments, device 200 is a device in which the operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a trackpad. By using a touch screen and/or touch pad as the primary input control device for operation of device 200, the number of physical input control devices (such as push buttons, dials, and the like) on device 200 may be reduced.

The set of predefined functions performed exclusively by the touchscreen and/or trackpad optionally includes navigating between user interfaces. In some embodiments, the trackpad, when touched by a user, navigates device 200 from any user interface displayed on device 200 to a main, home, or root menu. In such embodiments, a "menu button" is implemented using a touch pad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touch pad.

Fig. 2B is a block diagram illustrating exemplary components for event processing, according to some embodiments. In some embodiments, memory 202 (fig. 2A) or memory 470 (fig. 4) includes event classifier 270 (e.g., in operating system 226) and corresponding application 236-1 (e.g., any of the aforementioned applications 237 through 251, 255, 480 through 490).

The event classifier 270 receives the event information and determines the application 236-1 to which the event information is to be delivered and the application view 291 of the application 236-1. The event classifier 270 includes an event monitor 271 and an event dispatcher module 274. In some embodiments, the application 236-1 includes an application internal state 292 that indicates a current application view that is displayed on the touch-sensitive display 212 when the application is active or executing. In some embodiments, device/global internal state 257 is used by event classifier 270 to determine which application(s) are currently active, and application internal state 292 is used by event classifier 270 to determine the application view 291 to which to deliver event information.

In some embodiments, the application internal state 292 includes additional information, such as one or more of the following: resume information to be used when the application 236-1 resumes execution, user interface state information indicating information being displayed by the application 236-1 or information ready for display by the application, a state queue for enabling a user to return to a previous state or view of the application 236-1, and a redo/undo queue of previous actions taken by the user.

The event monitor 271 receives event information from the peripheral interface 218. The event information includes information about a sub-event (e.g., a user touch on the touch-sensitive display 212 as part of a multi-touch gesture). Peripherals interface 218 transmits information it receives from I/O subsystem 206 or sensors such as proximity sensor 266, accelerometer 268, and/or microphone 213 (through audio circuitry 210). Information received by peripheral interface 218 from I/O subsystem 206 includes information from touch-sensitive display 212 or a touch-sensitive surface.

In some embodiments, event monitor 271 sends requests to peripheral interface 218 at predetermined intervals. In response, peripheral interface 218 transmits event information. In other embodiments, peripheral interface 218 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).

In some embodiments, event classifier 270 also includes hit view determination module 272 and/or activity event recognizer determination module 273.

When the touch-sensitive display 212 displays more than one view, the hit view determination module 272 provides a software process for determining where within one or more views a sub-event has occurred. The view consists of controls and other elements that the user can see on the display.

Another aspect of a user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected may correspond to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest hierarchical view in which a touch is detected may be referred to as a hit view, and the set of events identified as correct inputs may be determined based at least in part on the hit view of the initial touch that began the touch-based gesture.

Hit view determination module 272 receives information related to sub-events of the contact-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 272 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (e.g., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by hit view determination module 272, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
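
A minimal sketch of hit-view determination consistent with the description above follows: the hierarchy is searched for the lowest view whose frame contains the touch location. The view type and coordinate handling are simplified assumptions for illustration, not the actual view classes of the described device.

```swift
import Foundation

struct Point { let x: Double, y: Double }
struct Rect {
    let x, y, width, height: Double
    func contains(_ p: Point) -> Bool {
        p.x >= x && p.x < x + width && p.y >= y && p.y < y + height
    }
}

/// A bare-bones view node used only to illustrate hit-view determination.
final class ViewNode {
    let name: String
    let frame: Rect          // frame in screen coordinates, for simplicity
    var subviews: [ViewNode] = []
    init(_ name: String, frame: Rect) { self.name = name; self.frame = frame }

    /// Returns the lowest view in the hierarchy whose frame contains the
    /// touch location, i.e., the "hit view" that should handle the sub-event.
    func hitView(at p: Point) -> ViewNode? {
        guard frame.contains(p) else { return nil }
        for child in subviews {
            if let hit = child.hitView(at: p) { return hit }
        }
        return self
    }
}

let root = ViewNode("window", frame: Rect(x: 0, y: 0, width: 320, height: 480))
let list = ViewNode("list", frame: Rect(x: 0, y: 40, width: 320, height: 400))
let row  = ViewNode("row 3", frame: Rect(x: 0, y: 160, width: 320, height: 44))
list.subviews = [row]
root.subviews = [list]
print(root.hitView(at: Point(x: 100, y: 170))?.name ?? "none") // "row 3"
```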

The activity event identifier determination module 273 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some implementations, the activity event recognizer determination module 273 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, the active event recognizer determination module 273 determines that all views including the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely confined to the area associated with a particular view, the higher views in the hierarchy will remain actively participating views.

Event dispatcher module 274 dispatches event information to event recognizers (e.g., event recognizer 280). In embodiments that include the activity event recognizer determination module 273, the event dispatcher module 274 delivers the event information to the event recognizer determined by the activity event recognizer determination module 273. In some embodiments, the event dispatcher module 274 stores event information in an event queue, which is retrieved by the respective event receiver 282.

In some embodiments, the operating system 226 includes an event classifier 270. Alternatively, the application 236-1 includes an event classifier 270. In other embodiments, the event classifier 270 is a stand-alone module or is part of another module (such as the contact/motion module 230) that is stored in the memory 202.

In some embodiments, the application 236-1 includes a plurality of event handlers 290 and one or more application views 291, where each application view includes instructions for processing touch events that occur within a respective view of the application's user interface. Each application view 291 of the application 236-1 includes one or more event recognizers 280. Typically, the respective application view 291 includes a plurality of event recognizers 280. In other embodiments, one or more of the event recognizers 280 are part of a separate module, such as a user interface toolkit (not shown) or a higher level object from which the application 236-1 inherits methods and other properties. In some embodiments, the respective event handlers 290 include one or more of: data updater 276, object updater 277, GUI updater 278, and/or event data 279 received from event classifier 270. Event handler 290 may utilize or call data updater 276, object updater 277 or GUI updater 278 to update application internal state 292. Alternatively, one or more of the application views 291 include one or more respective event handlers 290. Additionally, in some embodiments, one or more of the data updater 276, the object updater 277, and the GUI updater 278 are included in a respective application view 291.

A respective event recognizer 280 receives event information (e.g., event data 279) from the event classifier 270 and identifies an event from the event information. Event recognizer 280 includes an event receiver 282 and an event comparator 284. In some embodiments, event recognizer 280 also includes at least a subset of: metadata 283 and event delivery instructions 288 (which may include sub-event delivery instructions).

Event receiver 282 receives event information from the event classifier 270. The event information includes information about a sub-event such as a touch or touch movement. According to the sub-event, the event information further includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information may also include the velocity and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).

Event comparator 284 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event or determines or updates the state of an event or sub-event. In some embodiments, event comparator 284 includes an event definition 286. The event definition 286 contains definitions of events (e.g., predefined sub-event sequences), such as event 1(287-1), event 2(287-2), and other events. In some embodiments, sub-events in event (287) include, for example, touch start, touch end, touch move, touch cancel, and multi-touch. In one example, the definition for event 1(287-1) is a double tap on the displayed object. For example, a double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition for event 2(287-2) is a drag on the displayed object. For example, dragging includes a predetermined length of time of touch (or contact) on a displayed object, movement of the touch across the touch-sensitive display 212, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 290.
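
As a hedged illustration of matching a sub-event sequence against an event definition such as the double tap described above, the sketch below checks a begin/end/begin/end sequence against a per-phase time limit. The 0.3-second value and the event names are assumptions for the example.

```swift
import Foundation

/// Sub-events with timestamps, as needed to check the timing constraints in
/// an event definition such as "double tap" (names are illustrative).
enum TouchSubEvent { case touchBegin(time: Double), touchEnd(time: Double) }

/// Checks whether a sub-event sequence matches a hypothetical double-tap
/// definition: begin/end/begin/end, with each phase shorter than `maxPhase`
/// seconds.
func isDoubleTap(_ events: [TouchSubEvent], maxPhase: Double = 0.3) -> Bool {
    guard events.count == 4,
          case let .touchBegin(t0) = events[0],
          case let .touchEnd(t1)   = events[1],
          case let .touchBegin(t2) = events[2],
          case let .touchEnd(t3)   = events[3] else { return false }
    return (t1 - t0) < maxPhase && (t2 - t1) < maxPhase && (t3 - t2) < maxPhase
}

print(isDoubleTap([.touchBegin(time: 0.00), .touchEnd(time: 0.10),
                   .touchBegin(time: 0.25), .touchEnd(time: 0.32)]))  // true
print(isDoubleTap([.touchBegin(time: 0.00), .touchEnd(time: 0.10),
                   .touchBegin(time: 0.80), .touchEnd(time: 0.90)]))  // false
```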

In some embodiments, the event definitions 287 include definitions of events for respective user interface objects. In some embodiments, event comparator 284 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on the touch-sensitive display 212, when a touch is detected on the touch-sensitive display 212, the event comparator 284 performs a hit-test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a corresponding event handler 290, the event comparator uses the results of the hit test to determine which event handler 290 should be activated. For example, the event comparator 284 selects the event handler associated with the sub-event and the object that triggered the hit test.

In some embodiments, the definition for the respective event 287 further includes a delay action that delays delivery of the event information until it is determined whether the sequence of sub-events corresponds to the event type of the event recognizer.

When the respective event recognizer 280 determines that the sequence of sub-events does not match any event in the event definition 286, the respective event recognizer 280 enters an event not possible, event failed, or event ended state, after which subsequent sub-events of the touch-based gesture are ignored. In this case, the other event recognizers (if any) that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
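
The sketch below gives a minimal, hypothetical model of this behavior: once a recognizer leaves its initial state (because it recognized an event, failed, or was cancelled), it ignores subsequent sub-events of the ongoing gesture, while other recognizers may continue tracking. The state names follow the description above and are illustrative, not an actual API.

```swift
import Foundation

/// States a recognizer can be in once sub-events start arriving.
enum RecognizerState { case possible, recognized, failed, cancelled }

struct EventRecognizer {
    var state: RecognizerState = .possible

    /// Once a recognizer leaves the `possible` state it ignores the remaining
    /// sub-events of the current gesture; other recognizers that are still
    /// `possible` keep tracking the touch.
    mutating func handle(matches: Bool, sequenceEnded: Bool) {
        guard state == .possible else { return }   // subsequent sub-events ignored
        if matches { state = .recognized }
        else if sequenceEnded { state = .failed }
    }
}

var tapRecognizer = EventRecognizer()
tapRecognizer.handle(matches: false, sequenceEnded: true)  // no definition matched
print(tapRecognizer.state)                                 // failed
tapRecognizer.handle(matches: true, sequenceEnded: false)  // further sub-events ignored
print(tapRecognizer.state)                                 // still failed
```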

In some embodiments, the respective event recognizer 280 includes metadata 283 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform the delivery of sub-events to actively participating event recognizers. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate how event recognizers may interact, or are enabled to interact, with one another. In some embodiments, metadata 283 includes configurable attributes, flags, and/or lists that indicate whether a sub-event is delivered to different levels in a view or programmatic hierarchy.

In some embodiments, when one or more particular sub-events of an event are identified, the respective event recognizer 280 activates the event handler 290 associated with the event. In some embodiments, the respective event recognizer 280 delivers event information associated with the event to the event handler 290. Activating an event handler 290 is distinct from sending (and deferring the sending of) sub-events to a respective hit view. In some embodiments, event recognizer 280 throws a flag associated with the recognized event, and event handler 290 associated with the flag catches the flag and performs a predefined process.

In some embodiments, the event delivery instructions 288 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively participating views. Event handlers associated with the series of sub-events or with actively participating views receive the event information and perform a predetermined process.

In some embodiments, the data updater 276 creates and updates data used in the application 236-1. For example, the data updater 276 updates a phone number used in the contacts module 237 or stores a video file used in the video player module. In some embodiments, the object updater 277 creates and updates objects used in the application 236-1. For example, object updater 277 creates a new user interface object or updates the location of a user interface object. The GUI updater 278 updates the GUI. For example, GUI updater 278 prepares display information and sends the display information to graphics module 232 for display on the touch-sensitive display.

In some embodiments, event handler 290 includes or has access to data updater 276, object updater 277, and GUI updater 278. In some embodiments, the data updater 276, the object updater 277, and the GUI updater 278 are included in a single module of the respective application 236-1 or application view 291. In other embodiments, they are included in two or more software modules.

It should be understood that the above discussion of event processing with respect to user touches on a touch sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 200, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally in conjunction with single or multiple keyboard presses or holds; contact movements on the touchpad, such as tapping, dragging, scrolling, etc.; inputting by a stylus; movement of the device; verbal instructions; detected eye movement; inputting biological characteristics; and/or any combination thereof, is optionally used as input corresponding to sub-events defining the event to be identified.

FIG. 3 illustrates a portable multifunction device 200 with a touch screen 212 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a User Interface (UI) 300. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 302 (not drawn to scale in the figures) or with one or more styluses 303 (not drawn to scale in the figures). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 200. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that swipes over an application icon optionally does not select the corresponding application when the gesture corresponding to the selection is a tap.

Device 200 may also include one or more physical buttons, such as a "home screen" button or menu button 304. As previously described, the menu button 304 may be used to navigate to any application 236 in the set of applications that may be executed on the device 200. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 212.

In one embodiment, device 200 includes a touch screen 212, menu buttons 304, a push button 306 for powering the device on/off and for locking the device, one or more volume adjustment buttons 308, a Subscriber Identity Module (SIM) card slot 310, a headset jack 312, and a docking/charging external port 224. Button 306 is optionally used to turn the device on/off by pressing the button and holding it in the pressed state for a predefined time interval; to lock the device by pressing the button and releasing it before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlocking process. In an alternative embodiment, device 200 also accepts verbal input through microphone 213 for activating or deactivating some functions. Device 200 also optionally includes one or more contact intensity sensors 265 for detecting the intensity of contacts on touch screen 212, and/or one or more tactile output generators 267 for generating tactile outputs for a user of device 200.

FIG. 4 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 400 need not be portable. In some embodiments, the device 400 is a laptop computer, desktop computer, tablet computer, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 400 typically includes one or more processing units (CPUs) 410, one or more network or other communication interfaces 460, memory 470, and one or more communication buses 420 for interconnecting these components. The communication bus 420 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communication between system components. Device 400 includes an input/output (I/O) interface 430 with a display 440, which is typically a touch screen display. The I/O interface 430 also optionally includes a keyboard and/or mouse (or other pointing device) 450 and a trackpad 455, a tactile output generator 457 for generating tactile outputs on the device 400 (e.g., similar to tactile output generator 267 described above with reference to fig. 2A), and a sensor 459 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor similar to contact intensity sensor 265 described above with reference to fig. 2A). Memory 470 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 470 optionally includes one or more storage devices located remotely from CPU 410. In some embodiments, memory 470 stores programs, modules, and data structures similar to or a subset of the programs, modules, and data structures stored in memory 202 of portable multifunction device 200 (fig. 2A). In addition, memory 470 optionally stores additional programs, modules, and data structures not present in memory 202 of portable multifunction device 200. For example, memory 470 of device 400 optionally stores drawing module 480, presentation module 482, word processing module 484, website creation module 486, disk authoring module 488, and/or spreadsheet module 490, while memory 202 of portable multifunction device 200 (FIG. 2A) optionally does not store these modules.

Each of the above elements in fig. 4 may be stored in one or more of the aforementioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The modules or programs (e.g., sets of instructions) described above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 470 may store a subset of the modules and data structures described above. In addition, memory 470 may store additional modules and data structures not described above.

Attention is now directed to embodiments of user interfaces that may be implemented on, for example, portable multifunction device 200.

Fig. 5A illustrates an exemplary user interface for a menu of applications on a portable multifunction device 200 according to some embodiments. A similar user interface may be implemented on device 400. In some embodiments, the user interface 500 includes the following elements, or a subset or superset thereof:

one or more signal strength indicators 502 for one or more wireless communications (such as cellular signals and Wi-Fi signals);

Time 504;

a bluetooth indicator 505;

a battery status indicator 506;

tray 508 with icons for common applications, such as:

an icon 516 of the phone module 238 labeled "phone", optionally including an indicator 514 of the number of missed calls or voicemail messages;

an icon 518 of the email client module 240 labeled "mail", optionally including an indicator 510 of the number of unread emails;

icon 520 of browser module 247 labeled "browser"; and

an icon 522 labeled "iPod" of the video and music player module 252 (also referred to as iPod (trademark of Apple inc.) module 252); and

icons for other applications, such as:

icon 524 of IM module 241, labeled "message";

icon 526 of calendar module 248 labeled "calendar";

icon 528 of image management module 244 labeled "photo";

icon 530 labeled "camera" for camera module 243;

icon 532 labeled "online video" for online video module 255;

an icon 534 labeled "stock market" of the stock market desktop applet 249-2;

icon 536 of map module 254 labeled "map";

icon 538 of weather desktop applet 249-1 labeled "weather";

icon 540 labeled "clock" for alarm clock desktop applet 249-4;

icon 542 of fitness support module 242 labeled "fitness support";

icon 544 labeled "notepad" for notepad module 253; and

an icon 546 of the settings application or module, labeled "settings," that provides access to the settings of the device 200 and its various applications 236.

Note that the icon labels shown in fig. 5A are merely exemplary. For example, icon 522 of video and music player module 252 may optionally be labeled "music" or "music player". Other labels are optionally used for various application icons. In some embodiments, the label of the respective application icon includes a name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.

Fig. 5B illustrates an exemplary user interface on a device (e.g., device 400 of fig. 4) having a touch-sensitive surface 551 (e.g., tablet or trackpad 455 of fig. 4) separate from a display 550 (e.g., touchscreen display 212). The device 400 also optionally includes one or more contact intensity sensors (e.g., one or more of the sensors 457) for detecting the intensity of contacts on the touch-sensitive surface 551 and/or one or more tactile output generators 459 for generating tactile outputs for a user of the device 400.

Although some of the examples that follow will be given with reference to input on the touch screen display 212 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 5B. In some implementations, the touch-sensitive surface (e.g., 551 in fig. 5B) has a major axis (e.g., 552 in fig. 5B) that corresponds to a major axis (e.g., 553 in fig. 5B) on the display (e.g., 550). According to these embodiments, the device detects contacts (e.g., 560 and 562 in fig. 5B) with the touch-sensitive surface 551 at locations that correspond to respective locations on the display (e.g., 560 corresponds to 568 and 562 corresponds to 570 in fig. 5B). In this way, when the touch-sensitive surface (e.g., 551 in FIG. 5B) is separated from the display (550 in FIG. 5B) of the multifunction device, user inputs (e.g., contacts 560 and 562 and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
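
One simple way to realize the correspondence between locations on a separate touch-sensitive surface and locations on the display is a proportional mapping along each primary axis, as in the hedged sketch below; the types and the linear mapping itself are assumptions for illustration and are not the only possibility.

```swift
import Foundation

struct Size { let width: Double, height: Double }
struct Location { let x: Double, y: Double }

/// Maps a contact detected on a separate touch-sensitive surface to the
/// corresponding location on the display by scaling along each primary axis.
func displayLocation(for contact: Location,
                     surface: Size,
                     display: Size) -> Location {
    Location(x: contact.x / surface.width  * display.width,
             y: contact.y / surface.height * display.height)
}

// Example: a contact at the center of a 600x400 trackpad lands at the
// center of a 1280x800 display.
let mapped = displayLocation(for: Location(x: 300, y: 200),
                             surface: Size(width: 600, height: 400),
                             display: Size(width: 1280, height: 800))
print(mapped) // Location(x: 640.0, y: 400.0)
```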

Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, and/or finger swipe gestures), it should be understood that in some embodiments, one or more of these finger inputs are replaced by inputs from another input device (e.g., mouse-based inputs or stylus inputs). For example, the swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact), followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, the tap gesture is optionally replaced by a mouse click (e.g., rather than detection of contact followed by ceasing to detect contact) while the cursor is over the location of the tap gesture. Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or mouse and finger contacts are optionally used simultaneously.

Fig. 6A illustrates an exemplary personal electronic device 600. The device 600 includes a body 602. In some embodiments, device 600 may include some or all of the features described with respect to devices 200 and 400 (e.g., fig. 2A-4). In some embodiments, device 600 has a touch-sensitive display screen 604, hereinafter referred to as touch screen 604. Alternatively or in addition to the touch screen 604, the device 600 has a display and a touch-sensitive surface. As with device 200 and device 400, in some embodiments, touch screen 604 (or touch-sensitive surface) may have one or more intensity sensors for detecting the intensity of a contact (e.g., touch) being applied. One or more intensity sensors of touch screen 604 (or touch-sensitive surface) may provide output data representing the intensity of a touch. The user interface of device 600 may respond to a touch based on its intensity, meaning that different intensities of touches may invoke different user interface operations on device 600.

Techniques for detecting and processing touch intensities can be found, for example, in the following related applications: International Patent Application No. PCT/US2013/040061, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed May 8, 2013, and International Patent Application No. PCT/US2013/069483, entitled "Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships," filed November 11, 2013, each of which is hereby incorporated by reference in its entirety.

In some embodiments, device 600 has one or more input mechanisms 606 and 608. Input mechanisms 606 and 608 (if included) may be in physical form. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device 600 has one or more attachment mechanisms. Such attachment mechanisms, if included, may allow device 600 to be attached to, for example, a hat, glasses, earrings, a necklace, a shirt, a jacket, a bracelet, a watchband, a pair of pants, a belt, a shoe, a purse, a backpack, and the like. These attachment mechanisms may allow the user to wear device 600.

Fig. 6B illustrates an exemplary personal electronic device 600. In some embodiments, device 600 may include some or all of the components described with reference to fig. 2A, 2B, and 4. The device 600 has a bus 612 that operatively couples an I/O portion 614 with one or more computer processors 616 and a memory 618. I/O portion 614 may be connected to display 604, which may have a touch-sensitive component 622 and, optionally, a touch-intensity-sensitive component 624. Further, I/O portion 614 may connect with communications unit 630 for receiving application and operating system data using Wi-Fi, Bluetooth, Near Field Communication (NFC), cellular, and/or other wireless communication technologies. Device 600 may include input mechanisms 606 and/or 608. For example, input mechanism 606 may be a rotatable input device or a depressible and rotatable input device. In some examples, input mechanism 608 may be a button.

In some examples, input mechanism 608 may be a microphone. The personal electronic device 600 may include various sensors, such as a GPS sensor 632, an accelerometer 634, an orientation sensor 640 (e.g., a compass), a gyroscope 636, a motion sensor 638, and/or combinations thereof, all of which may be operatively connected to the I/O section 614.

The memory 618 of the personal electronic device 600 may be a non-transitory computer-readable storage medium for storing computer-executable instructions that, when executed by the one or more computer processors 616, may, for example, cause the computer processors to perform the techniques described below, including the process 1200 (fig. 12A-12D). The computer-executable instructions may also be stored and/or transmitted within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. The personal electronic device 600 is not limited to the components and configuration of fig. 6B, but may include other components or additional components in a variety of configurations.

As used herein, the term "affordance" refers to a user-interactive graphical user interface object that may be displayed on a display screen of device 200, 400, and/or 600 (FIGS. 2, 4, and 6). For example, images (e.g., icons), buttons, and text (e.g., links) can each constitute an affordance.

As used herein, the term "focus selector" refers to an input element that is used to indicate the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a "focus selector" such that, if an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 455 in fig. 4 or touch-sensitive surface 551 in fig. 5B) while the cursor is resting over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 212 in fig. 2A or touch screen 212 in fig. 5A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to deliver the user's intended interaction with the user interface (e.g., by indicating to the device the element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a trackpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the display of the device).

As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting the contact, before detecting liftoff of the contact, before or after detecting a start of movement of the contact, before or after detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top-10-percent value of the intensities of the contact, a value at half the maximum of the intensities of the contact, a value at 90 percent of the maximum of the intensities of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by the user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, the comparison between the characteristic intensity and the one or more thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than to determine whether to perform a first operation or a second operation.
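
The relationship between intensity samples, the characteristic intensity, and the resulting operation can be sketched in code. The following Python fragment is only an illustrative sketch, not the claimed implementation; the choice of the maximum as the default characteristic value and the two threshold constants are assumptions made for the example.

```python
# Illustrative sketch (not the claimed implementation): deriving a
# characteristic intensity from intensity samples and selecting an
# operation by comparing it against two thresholds.

FIRST_INTENSITY_THRESHOLD = 0.3   # assumed first threshold (normalized units)
SECOND_INTENSITY_THRESHOLD = 0.7  # assumed second threshold (normalized units)

def characteristic_intensity(samples, mode="max"):
    """Reduce a window of intensity samples to a single characteristic value."""
    if not samples:
        return 0.0
    if mode == "max":
        return max(samples)
    if mode == "mean":
        return sum(samples) / len(samples)
    if mode == "top10":
        ranked = sorted(samples, reverse=True)
        top = ranked[: max(1, len(ranked) // 10)]
        return sum(top) / len(top)
    raise ValueError(f"unknown mode: {mode}")

def select_operation(samples):
    """Map the characteristic intensity to one of three operations."""
    ci = characteristic_intensity(samples)
    if ci <= FIRST_INTENSITY_THRESHOLD:
        return "first_operation"
    if ci <= SECOND_INTENSITY_THRESHOLD:
        return "second_operation"
    return "third_operation"

if __name__ == "__main__":
    print(select_operation([0.1, 0.2, 0.25]))   # first_operation
    print(select_operation([0.2, 0.5, 0.6]))    # second_operation
    print(select_operation([0.4, 0.8, 0.9]))    # third_operation
```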

In some implementations, a portion of the gesture is identified for purposes of determining a characteristic intensity. For example, the touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some implementations, a smoothing algorithm may be applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
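
As a rough illustration of the smoothing step, the sketch below applies an unweighted sliding average to a series of intensity samples; the window size and the sample values are assumptions, and a real implementation could use any of the algorithms listed above.

```python
# Illustrative sketch of an unweighted sliding-average smoothing step applied
# to swipe intensity samples before the characteristic intensity is computed.
# The window size is an assumption chosen for illustration.

def smooth_intensities(samples, window=3):
    """Unweighted moving average; attenuates narrow spikes or dips."""
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

if __name__ == "__main__":
    raw = [0.10, 0.12, 0.90, 0.14, 0.16]   # 0.90 is a narrow spike
    print(smooth_intensities(raw))          # the spike is attenuated
```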

The intensity of a contact on the touch-sensitive surface may be characterized relative to one or more intensity thresholds, such as a contact detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations different from those typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold, below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.

The increase in the characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a "light press" input. Increasing the contact characteristic intensity from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of the contact from an intensity below the contact detection intensity threshold to an intensity between the contact detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting a contact on the touch surface. The decrease in the characteristic intensity of the contact from an intensity above the contact detection intensity threshold to an intensity below the contact detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, the contact detection intensity threshold is zero. In some embodiments, the contact detection intensity threshold is greater than zero.

In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting that the respective contact intensity increases above a press input intensity threshold (e.g., a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in the respective contact intensity above a press input intensity threshold and a subsequent decrease in the contact intensity below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in the respective contact intensity below the press input threshold (e.g., an "up stroke" of the respective press input).

In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes a respective contact intensity increasing above a press input intensity threshold and a subsequent decrease in the contact intensity below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in the respective contact intensity below the hysteresis intensity threshold (e.g., an "up stroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., an increase in intensity of the contact or a decrease in intensity of the contact, depending on the circumstances).
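
A minimal sketch of press-input detection with hysteresis is shown below; the press threshold and the 75% hysteresis ratio are assumed example values rather than values used by any described embodiment.

```python
# Illustrative sketch of press-input detection with intensity hysteresis.
# The press threshold and the 75% hysteresis ratio are assumed example values.

PRESS_INPUT_THRESHOLD = 0.6
HYSTERESIS_THRESHOLD = 0.75 * PRESS_INPUT_THRESHOLD

def detect_press_events(intensity_stream):
    """Yield 'down_stroke' / 'up_stroke' events while ignoring jitter."""
    pressed = False
    for intensity in intensity_stream:
        if not pressed and intensity >= PRESS_INPUT_THRESHOLD:
            pressed = True
            yield "down_stroke"
        elif pressed and intensity <= HYSTERESIS_THRESHOLD:
            pressed = False
            yield "up_stroke"
        # Fluctuations between the two thresholds change nothing; this is
        # exactly the jitter the hysteresis band is meant to absorb.

if __name__ == "__main__":
    stream = [0.2, 0.65, 0.58, 0.62, 0.61, 0.40, 0.1]
    print(list(detect_press_events(stream)))  # ['down_stroke', 'up_stroke']
```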

For ease of explanation, optionally, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is triggered in response to detection of any of the following: the contact intensity increases above the press input intensity threshold, the contact intensity increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the contact intensity decreases below the press input intensity threshold, and/or the contact intensity decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold.

3. Digital assistant system

Fig. 7A illustrates a block diagram of a digital assistant system 700, according to various examples. In some examples, the digital assistant system 700 may be implemented on a stand-alone computer system. In some examples, the digital assistant system 700 may be distributed across multiple computers. In some examples, some of the modules and functionality of a digital assistant can be divided into a server portion and a client portion, where the client portion is located on one or more user devices (e.g., devices 104, 122, 200, 400, or 600) and communicates with the server portion (e.g., server system 108) over one or more networks, e.g., as shown in fig. 1. In some examples, digital assistant system 700 may be a specific implementation of server system 108 (and/or DA server 106) shown in fig. 1. It should be noted that the digital assistant system 700 is only one example of a digital assistant system, and that the digital assistant system 700 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of components. The various components shown in fig. 7A may be implemented in hardware, software instructions for execution by one or more processors, firmware (including one or more signal processing integrated circuits and/or application specific integrated circuits), or a combination thereof.

The digital assistant system 700 can include a memory 702, one or more processors 704, input/output (I/O) interfaces 706, and a network communication interface 708. These components may communicate with each other via one or more communication buses or signal lines 710.

In some examples, the memory 702 can include a non-transitory computer-readable medium, such as high-speed random access memory and/or a non-volatile computer-readable storage medium (e.g., one or more disk storage devices, flash memory devices, or other non-volatile solid-state memory devices).

In some examples, I/O interface 706 may couple input/output devices 716 of digital assistant system 700, such as a display, a keyboard, a touch screen, and a microphone, to user interface module 722. I/O interface 706 in conjunction with user interface module 722 may receive user inputs (e.g., voice inputs, keyboard inputs, touch inputs, etc.) and process those inputs accordingly. In some examples, such as when the digital assistant is implemented at a standalone user device, the digital assistant system 700 can include any of the components and I/O communication interfaces described with respect to the devices 200, 400, or 600 in fig. 2A, 4, 6A-6B, respectively. In some examples, the digital assistant system 700 may represent a server portion of a digital assistant implementation and may interact with a user through a client-side portion located on a user device (e.g., device 104, 200, 400, or device 600).

In some examples, the network communication interface 708 may include one or more wired communication ports 712, and/or wireless transmission and reception circuitry 714. The one or more wired communication ports 712 can receive and transmit communication signals via one or more wired interfaces, such as ethernet, Universal Serial Bus (USB), firewire, and the like. The wireless circuitry 714 may receive RF signals and/or optical signals from, and transmit RF signals and/or optical signals to, communication networks and other communication devices. The wireless communication may use any of a variety of communication standards, protocols, and technologies, such as GSM, EDGE, CDMA, TDMA, Bluetooth, Wi-Fi, VoIP, Wi-MAX, or any other suitable communication protocol. Network communication interface 708 may enable communication between digital assistant system 700 and other devices over a network, such as the internet, an intranet, and/or a wireless network, such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN).

In some examples, memory 702 or the computer-readable storage medium of memory 702 may store programs, modules, instructions, and data structures, including all or a subset of the following: an operating system 718, a communications module 720, a user interface module 722, one or more applications 724, and a digital assistant module 726. In particular, memory 702 or the computer-readable storage medium of memory 702 may store instructions for performing process 1200 described below. The one or more processors 704 may execute the programs, modules, and instructions and read data from, or write data to, the data structures.

An operating system 718 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) may include various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and for facilitating communication between various hardware, firmware, and software components.

The communication module 720 may facilitate communications between the digital assistant system 700 and other devices via the network communication interface 708. For example, the communication module 720 may communicate with the RF circuitry 208 of an electronic device, such as the devices 200, 400, and 600 shown in fig. 2A, 4, 6A-6B, respectively. The communications module 720 may also include various components for processing data received by the wireless circuitry 714 and/or the wired communications port 712.

User interface module 722 may receive commands and/or input from a user (e.g., from a keyboard, touch screen, pointing device, controller, and/or microphone) via I/O interface 706 and generate user interface objects on the display. User interface module 722 may also prepare and deliver output (e.g., voice, sound, animation, text, icons, vibration, haptic feedback, lighting, etc.) to the user via I/O interface 706 (e.g., through a display, audio channel, speaker, touch pad, etc.).

The applications 724 may include programs and/or modules configured to be executed by the one or more processors 704. For example, if the digital assistant system is implemented at a standalone user device, the applications 724 may include user applications, such as games, calendar applications, navigation applications, or email applications. If the digital assistant system 700 is implemented on a server, the applications 724 may include, for example, a resource management application, a diagnostic application, or a scheduling application.

The memory 702 may also store a digital assistant module 726 (or a server portion of a digital assistant). In some examples, digital assistant module 726 may include the following sub-modules, or a subset or superset thereof: an input/output processing module 728, a Speech To Text (STT) processing module 730, a natural language processing module 732, a conversation stream processing module 734, a task stream processing module 736, a services processing module 738, and a speech synthesis module 740. Each of these modules may have access to one or more of the following systems or data and models, or a subset or superset thereof, of the digital assistant module 726: ontology 760, vocabulary index 744, user data 748, task flow model 754, service model 756, and ASR system 731.

In some examples, using the processing modules, data, and models implemented in the digital assistant module 726, the digital assistant can perform at least some of the following: converting the speech input to text; identifying a user intent expressed in a natural language input received from a user; proactively elicit and obtain the information needed to fully infer the user's intent (e.g., by disambiguating words, names, intentions, etc.); determining a task flow for satisfying the inferred intent; and executing the task flow to satisfy the inferred intent.

In some examples, as shown in fig. 7B, I/O processing module 728 may interact with a user via I/O device 716 in fig. 7A or with a user device (e.g., device 104, 200, 400, or 600) via network communication interface 708 in fig. 7A to obtain user input (e.g., voice input) and provide a response to the user input (e.g., as voice output). The I/O processing module 728 may optionally obtain contextual information associated with the user input from the user device along with or shortly after receiving the user input. The contextual information may include user-specific data, vocabulary, and/or preferences related to user input. In some examples, the context information also includes software and hardware states of the user device at the time the user request is received, and/or information relating to the user's surroundings at the time the user request is received. In some examples, the I/O processing module 728 may also send follow-up questions to the user related to the user request and receive answers from the user. When a user request is received by the I/O processing module 728 and the user request may include speech input, the I/O processing module 728 may forward the speech input to the STT processing module 730 (or speech recognizer) for speech-to-text conversion.

STT processing module 730 may include one or more ASR systems. The one or more ASR systems may process speech input received through I/O processing module 728 to generate recognition results. Each ASR system may include a front-end speech preprocessor. The front-end speech preprocessor may extract representative features from the speech input. For example, the front-end speech preprocessor may perform a Fourier transform on the speech input to extract spectral features that characterize the speech input as a sequence of representative multi-dimensional vectors. Further, each ASR system may include one or more speech recognition models (e.g., acoustic models and/or language models) and may implement one or more speech recognition engines. Examples of speech recognition models include hidden Markov models, Gaussian mixture models, deep neural network models, n-gram language models, and other statistical models. Examples of speech recognition engines include dynamic-time-warping-based engines and weighted finite-state transducer (WFST) based engines. The one or more speech recognition models and the one or more speech recognition engines may be used to process the extracted representative features of the front-end speech preprocessor to produce intermediate recognition results (e.g., phonemes, phoneme strings, and subwords) and, ultimately, text recognition results (e.g., words, word strings, or sequences of symbols). In some examples, the speech input may be processed at least in part by a third-party service or on the user's device (e.g., device 104, 200, 400, or 600) to produce the recognition results. Once STT processing module 730 produces a recognition result containing a text string (e.g., a word, a sequence of words, or a sequence of symbols), the recognition result may be passed to natural language processing module 732 for intent inference.
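
As an informal illustration of the front-end speech preprocessor, the sketch below frames a waveform and reduces each frame to a log-magnitude spectrum, yielding a sequence of representative multi-dimensional feature vectors; the frame length, hop size, and window function are assumptions.

```python
# Illustrative sketch of a front-end speech preprocessor: the waveform is cut
# into overlapping frames and each frame is reduced to a log-magnitude
# spectrum, giving a sequence of multi-dimensional feature vectors.
# Frame length, hop size, and the Hann window are assumed example choices.

import numpy as np

def extract_spectral_features(waveform, sample_rate=16000,
                              frame_ms=25, hop_ms=10):
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    window = np.hanning(frame_len)
    features = []
    for start in range(0, len(waveform) - frame_len + 1, hop_len):
        frame = waveform[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))        # Fourier transform of the frame
        features.append(np.log(spectrum + 1e-10))    # log-magnitude feature vector
    return np.array(features)

if __name__ == "__main__":
    t = np.linspace(0, 1, 16000, endpoint=False)
    fake_speech = 0.1 * np.sin(2 * np.pi * 440 * t)  # stand-in for a speech input
    feats = extract_spectral_features(fake_speech)
    print(feats.shape)                                # (num_frames, num_bins)
```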

More details on the speech-to-text processing are described in U.S. Utility Patent Application No. 13/236,942, entitled "Consolidating Speech Recognition Results," filed September 20, 2011, the entire disclosure of which is incorporated herein by reference.

In some examples, STT processing module 730 may include and/or access a vocabulary of recognizable words via phonetic alphabet conversion module 731. Each vocabulary word may be associated with one or more candidate pronunciations of the word represented in a speech recognition phonetic alphabet. In particular, the vocabulary of recognizable words may include a word that is associated with multiple candidate pronunciations. For example, the vocabulary may include the word "tomato" associated with the candidate pronunciations /təˈmeɪɾoʊ/ and /təˈmɑːtoʊ/. Further, vocabulary words may be associated with custom candidate pronunciations based on previous speech inputs from the user. Such custom candidate pronunciations may be stored in STT processing module 730 and may be associated with a particular user via a user profile on the device. In some examples, the candidate pronunciations for a word may be determined based on the spelling of the word and one or more linguistic and/or phonetic rules. In some examples, the candidate pronunciations may be manually generated, e.g., based on known canonical pronunciations.

In some examples, the candidate pronunciations may be ranked based on the commonality of the candidate pronunciation. For example, the candidate pronunciation /təˈmeɪɾoʊ/ may be ranked higher than /təˈmɑːtoʊ/, because the former is a more commonly used pronunciation (e.g., among all users, for users in a particular geographic region, or for any other appropriate subset of users). In some examples, candidate pronunciations may be ranked based on whether the candidate pronunciation is a custom candidate pronunciation associated with the user. For example, custom candidate pronunciations may be ranked higher than canonical candidate pronunciations. This can be useful for recognizing proper nouns having a unique pronunciation that deviates from the canonical pronunciation. In some examples, candidate pronunciations may be associated with one or more speech characteristics, such as geographic origin, nationality, or ethnicity. For example, the candidate pronunciation /təˈmeɪɾoʊ/ may be associated with the United States, whereas the candidate pronunciation /təˈmɑːtoʊ/ may be associated with the United Kingdom. Further, the rank of a candidate pronunciation may be based on one or more characteristics of the user (e.g., geographic origin, nationality, ethnicity, etc.) stored in a user profile on the device. For example, it may be determined from the user profile that the user is associated with the United States. Based on the user being associated with the United States, the candidate pronunciation /təˈmeɪɾoʊ/ (associated with the United States) may be ranked higher than the candidate pronunciation /təˈmɑːtoʊ/ (associated with the United Kingdom). In some examples, one of the ranked candidate pronunciations may be selected as a predicted pronunciation (e.g., the most likely pronunciation).
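
A simple sketch of such a ranking is shown below; the scoring weights, the phoneme spellings, and the profile traits are assumptions chosen only to show how custom status, commonality, and user characteristics could be combined.

```python
# Illustrative sketch of ranking candidate pronunciations for a vocabulary
# word. The weights and example attributes are assumptions, not the ranking
# actually used by STT processing module 730.

from dataclasses import dataclass, field

@dataclass
class CandidatePronunciation:
    phonemes: str
    commonality: float                 # e.g., fraction of users observed using it
    is_custom: bool = False            # learned from this user's previous speech
    traits: set = field(default_factory=set)   # e.g., {"US"} or {"GB"}

def rank_pronunciations(candidates, user_traits):
    def score(c):
        s = c.commonality
        if c.is_custom:
            s += 1.0                   # custom pronunciations outrank canonical ones
        if c.traits & user_traits:
            s += 0.5                   # boost pronunciations matching the user profile
        return s
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    tomato = [
        CandidatePronunciation("t ah m ey t ow", commonality=0.7, traits={"US"}),
        CandidatePronunciation("t ah m aa t ow", commonality=0.3, traits={"GB"}),
    ]
    ranked = rank_pronunciations(tomato, user_traits={"US"})
    print(ranked[0].phonemes)          # the US-associated pronunciation is predicted
```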

When a speech input is received, STT processing module 730 may be used to determine the phonemes corresponding to the speech input (e.g., using an acoustic model) and then attempt to determine words that match the phonemes (e.g., using a language model). For example, if STT processing module 730 first identifies the phoneme sequence /təˈmeɪɾoʊ/ corresponding to a portion of the speech input, it may then determine, based on vocabulary index 744, that this sequence corresponds to the word "tomato."

In some examples, STT processing module 730 may use fuzzy matching techniques to determine words in the speech input. Thus, for example, STT processing module 730 may determine that a particular phoneme sequence corresponds to the word "tomato," even if that phoneme sequence is not one of the candidate phoneme sequences for that word.

The natural language processing module 732 ("natural language processor") of the digital assistant may take a sequence of words or symbols ("symbol sequence") generated by the STT processing module 730 and attempt to associate the symbol sequence with one or more "actionable intents" identified by the digital assistant. An "actionable intent" may represent a task that may be performed by a digital assistant and that may have an associated task flow implemented in task flow model 754. The associated task stream may be a series of programmed actions and steps taken by the digital assistant to perform the task. The capability scope of the digital assistant may depend on the number and variety of task flows that have been implemented and stored in task flow model 754, or in other words, on the number and variety of "actionable intents" that the digital assistant recognizes. However, the effectiveness of a digital assistant may also depend on the assistant's ability to infer the correct "executable intent or intents" from a user request expressed in natural language.

In some examples, natural language processing module 732 may receive context information associated with the user request (e.g., from I/O processing module 728) in addition to the sequence of words or symbols obtained from STT processing module 730. The natural language processing module 732 may optionally use the context information to clarify, supplement, and/or further qualify the information contained in the symbol sequence received from the STT processing module 730. The context information may include, for example: user preferences, hardware and/or software state of the user device, sensor information collected before, during, or shortly after the user request, previous interactions (e.g., conversations) between the digital assistant and the user, and the like. As described herein, contextual information may be dynamic and may vary with time, location, content of a conversation, and other factors.

In some examples, the natural language processing may be based on, for example, ontology 760. Ontology 760 may be a hierarchical structure containing a number of nodes, each node representing an "actionable intent" or "attribute" related to one or more of the "actionable intents" or other "attributes". As described above, an "actionable intent" may represent a task that a digital assistant is capable of performing, i.e., that task is "actionable" or can be performed. An "attribute" may represent a parameter associated with a sub-aspect of an executable intent or another attribute. The link between the actionable intent node and the property node in the ontology 760 may define how the parameters represented by the property node relate to the task represented by the actionable intent node.

In some examples, ontology 760 may be composed of actionable intent nodes and property nodes. Within ontology 760, each actionable intent node may be linked directly to one or more property nodes or through one or more intermediate property nodes. Similarly, each property node may be linked directly to one or more actionable intent nodes or through one or more intermediate property nodes. For example, as shown in FIG. 7C, ontology 760 can include a "restaurant reservation" node (i.e., an actionable intent node). The property nodes "restaurant," "date/time" (for reservation), and "party size" may each be directly linked to the actionable intent node (i.e., "restaurant reservation" node).

Further, the property nodes "cuisine," "price range," "phone number," and "location" may be child nodes of the property node "restaurant," and each may be linked to the "restaurant reservation" node (i.e., the actionable intent node) through the intermediate property node "restaurant." As another example, as shown in fig. 7C, ontology 760 may also include a "set reminder" node (i.e., another actionable intent node). The property nodes "date/time" (for setting the reminder) and "subject" (for the reminder) may each be linked to the "set reminder" node. Because the property "date/time" may be relevant both to the task of making a restaurant reservation and to the task of setting a reminder, the property node "date/time" may be linked to both the "restaurant reservation" node and the "set reminder" node in ontology 760.

An actionable intent node, together with its linked property nodes, may be described as a "domain." In the present discussion, each domain may be associated with a respective actionable intent and refers to the group of nodes (and the relationships between them) associated with the particular actionable intent. For example, ontology 760 shown in fig. 7C may include an example of a restaurant reservation domain 762 and an example of a reminder domain 764 within ontology 760. The restaurant reservation domain includes the actionable intent node "restaurant reservation," the property nodes "restaurant," "date/time," and "party size," and the child property nodes "cuisine," "price range," "phone number," and "location." The reminder domain 764 may include the actionable intent node "set reminder" and the property nodes "subject" and "date/time." In some examples, ontology 760 may be made up of multiple domains. Each domain may share one or more property nodes with one or more other domains. For example, in addition to the restaurant reservation domain 762 and the reminder domain 764, the "date/time" property node may be associated with many different domains (e.g., a scheduling domain, a travel reservation domain, a movie ticket domain, etc.).
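
An informal sketch of such an ontology structure, using the restaurant reservation and reminder examples above, is shown below; the class layout is an assumption and is not intended to describe the actual data structures of ontology 760.

```python
# Illustrative sketch of an ontology made of actionable-intent nodes and
# property nodes, mirroring the restaurant reservation and reminder examples.

class Node:
    def __init__(self, name, is_actionable_intent=False):
        self.name = name
        self.is_actionable_intent = is_actionable_intent
        self.links = set()

    def link(self, other):
        self.links.add(other)
        other.links.add(self)

# Property nodes
restaurant = Node("restaurant")
date_time = Node("date/time")
party_size = Node("party size")
subject = Node("subject")
for sub in ("cuisine", "price range", "phone number", "location"):
    Node(sub).link(restaurant)          # child properties hang off "restaurant"

# Actionable intent nodes
restaurant_reservation = Node("restaurant reservation", is_actionable_intent=True)
set_reminder = Node("set reminder", is_actionable_intent=True)

# Both intents share the "date/time" property node.
for prop in (restaurant, date_time, party_size):
    restaurant_reservation.link(prop)
for prop in (date_time, subject):
    set_reminder.link(prop)

def domain(intent_node):
    """A domain here is the intent node plus its directly linked property nodes."""
    return {intent_node.name} | {n.name for n in intent_node.links}

print(domain(restaurant_reservation))
print(domain(set_reminder))             # both domains contain "date/time"
```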

Although fig. 7C shows two example domains within ontology 760, other domains may include, for example, "find a movie," "initiate a phone call," "find directions," "schedule a meeting," "send a message," "provide an answer to a question," "read a list," "provide navigation instructions," "provide instructions for a task," and so on. The "send a message" domain may be associated with a "send a message" actionable intent node and may further include property nodes such as "recipient(s)," "message type," and "message body." The property node "recipient" may be further defined, for example, by child property nodes such as "recipient name" and "message address."

In some examples, ontology 760 may include all domains (and thus executable intents) that a digital assistant is able to understand and act upon. In some examples, ontology 760 may be modified, such as by adding or removing entire domains or nodes, or by modifying relationships between nodes within ontology 760.

In some examples, nodes associated with multiple related actionable intents may be clustered under a "super domain" in ontology 760. For example, a "travel" super domain may include a cluster of property nodes and actionable intent nodes related to travel. The actionable intent nodes related to travel may include "flight reservation," "hotel reservation," "car rental," "route planning," "find points of interest," and so on. The actionable intent nodes under the same super domain (e.g., the "travel" super domain) may have many property nodes in common. For example, the actionable intent nodes for "flight reservation," "hotel reservation," "car rental," "route planning," and "find points of interest" may share one or more of the property nodes "start location," "destination," "departure date/time," "arrival date/time," and "party size."

In some examples, each node in ontology 760 may be associated with a set of words and/or phrases that are relevant to the property or actionable intent represented by the node. The respective set of words and/or phrases associated with each node may be the so-called "vocabulary" associated with the node. The respective set of words and/or phrases associated with each node may be stored in vocabulary index 744 in association with the property or actionable intent represented by the node. For example, returning to fig. 7B, the vocabulary associated with the node for the property "restaurant" may include words such as "food," "drinks," "cuisine," "hungry," "eat," "pizza," "fast food," "meal," and so on. As another example, the vocabulary associated with the node for the actionable intent of "initiate a phone call" may include words and phrases such as "call," "phone," "dial," "call the number," "make a call to …," and so on. Vocabulary index 744 may optionally include words and phrases in different languages.

Natural language processing module 732 may receive the symbol sequence (e.g., a text string) from STT processing module 730 and determine which nodes are implicated by the words in the symbol sequence. In some examples, if a word or phrase in the symbol sequence is found (via vocabulary index 744) to be associated with one or more nodes in ontology 760, the word or phrase may "trigger" or "activate" those nodes. Based on the quantity and/or relative importance of the activated nodes, natural language processing module 732 may select one of the actionable intents as the task that the user intended the digital assistant to perform. In some examples, the domain with the most "triggered" nodes may be selected. In some examples, the domain with the highest confidence value (e.g., based on the relative importance of its various triggered nodes) may be selected. In some examples, the domain may be selected based on a combination of the number and the importance of the triggered nodes. In some examples, additional factors are considered in selecting a node as well, such as whether the digital assistant has previously correctly interpreted a similar request from the user.
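
A minimal sketch of this selection step is shown below; the vocabulary entries, the extra weight given to actionable intent nodes, and the scoring rule are assumptions used only to illustrate counting and weighting triggered nodes.

```python
# Illustrative sketch of selecting an actionable intent from a symbol
# sequence: words trigger ontology nodes through a vocabulary index, and the
# domain with the highest weighted count of triggered nodes is selected.
# The vocabulary entries and node weights are assumed example values.

VOCAB_INDEX = {
    "hungry":  [("restaurant reservation", "restaurant")],
    "sushi":   [("restaurant reservation", "cuisine")],
    "7":       [("restaurant reservation", "date/time"),
                ("set reminder", "date/time")],
    "remind":  [("set reminder", "set reminder")],
}

NODE_WEIGHT = {"restaurant reservation": 2.0, "set reminder": 2.0}  # intent nodes count more

def select_domain(tokens):
    scores = {}
    for token in tokens:
        for domain, node in VOCAB_INDEX.get(token.lower(), []):
            weight = NODE_WEIGHT.get(node, 1.0)
            scores[domain] = scores.get(domain, 0.0) + weight
    if not scores:
        return None
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(select_domain("I am hungry for sushi at 7".split()))  # restaurant reservation
```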

The user data 748 may include user-specific information such as user-specific vocabulary, user preferences, user addresses, the user's default and second languages, the user's contact list, and other short-term or long-term information for each user. In some examples, natural language processing module 732 may use user-specific information to supplement information contained in the user input to further define the user intent. For example, for a user request to "invite my friends to my birthday party," natural language processing module 732 may be able to access user data 748 to determine which people the "friends" are and where and when the "birthday party" will be held without the user explicitly providing such information in their request.

Additional details of searching an ontology based on a symbol string are described in U.S. Utility Patent Application No. 12/341,743, entitled "Method and Apparatus for Searching Using An Active Ontology," filed December 22, 2008, the entire disclosure of which is incorporated herein by reference.

In some examples, once natural language processing module 732 identifies an actionable intent (or domain) based on the user request, natural language processing module 732 may generate a structured query to represent the identified actionable intent. In some examples, the structured query may include parameters for one or more nodes within the domain for the actionable intent, and at least some of the parameters are populated with the specific information and requirements specified in the user request. For example, the user may say "Help me reserve a seat at 7 PM at a sushi restaurant." In this case, natural language processing module 732 may be able to correctly identify the actionable intent as "restaurant reservation" based on the user input. According to the ontology, a structured query for the "restaurant reservation" domain may include parameters such as {cuisine}, {time}, {date}, {party size}, and the like. In some examples, based on the speech input and the text derived from the speech input using STT processing module 730, natural language processing module 732 may generate a partial structured query for the restaurant reservation domain, where the partial structured query includes the parameters {cuisine = "sushi"} and {time = "7 PM"}. However, in this example, the user's speech input contains insufficient information to complete the structured query associated with the domain. Thus, based on the currently available information, other necessary parameters, such as {party size} and {date}, may not be specified in the structured query. In some examples, natural language processing module 732 may populate some parameters of the structured query with the received contextual information. For example, in some examples, if the user requests a sushi restaurant "nearby," natural language processing module 732 may populate the {location} parameter in the structured query with the GPS coordinates from the user device.
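
The following sketch illustrates a partial structured query for the restaurant reservation example; the parameter names follow the example above, while the context handling (filling {location} from GPS coordinates) and the helper names are assumptions.

```python
# Illustrative sketch of a partially structured query for the "restaurant
# reservation" domain. The parsing itself is stubbed out, and the GPS-based
# location fill-in is an assumed example of using contextual information.

REQUIRED_PARAMS = ("cuisine", "time", "date", "party size", "location")

def build_structured_query(parsed_params, context=None):
    """Start from what the utterance provided; fill gaps from context if possible."""
    query = {"actionable intent": "restaurant reservation"}
    query.update({k: v for k, v in parsed_params.items() if k in REQUIRED_PARAMS})
    if context and "location" not in query and "gps" in context:
        query["location"] = context["gps"]   # e.g., "nearby" resolved via device GPS
    return query

def missing_parameters(query):
    return [p for p in REQUIRED_PARAMS if p not in query]

if __name__ == "__main__":
    # "Help me reserve a seat at 7 PM at a sushi restaurant" (nearby)
    query = build_structured_query({"cuisine": "sushi", "time": "7 PM"},
                                   context={"gps": (37.33, -122.01)})
    print(query)
    print(missing_parameters(query))   # ['date', 'party size'] still unspecified
```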

In some examples, the natural language processing module 732 may communicate the generated structured query (including any completed parameters) to the task flow processing module 736 ("task flow processor"). Task stream processing module 736 may be configured to receive the structured query from natural language processing module 732, complete the structured query (if necessary), and perform the actions required to "complete" the user's final request. In some examples, various processes necessary to accomplish these tasks may be provided in task flow model 754. In some examples, task flow model 754 may include procedures for obtaining additional information from a user, as well as task flows for performing actions associated with the executable intent.

As described above, in order to complete a structured query, task flow processing module 736 may need to initiate an additional dialog with the user in order to obtain additional information and/or disambiguate potentially ambiguous speech inputs. When such interaction is necessary, task flow processing module 736 may invoke dialog flow processing module 734 to engage in a dialog with the user. In some examples, dialog flow processing module 734 may determine how (and/or when) to ask the user for the additional information and receive and process the user responses. Questions may be provided to the user, and answers may be received from the user, through I/O processing module 728. In some examples, dialog flow processing module 734 may present dialog output to the user via audio and/or visual output and receive input from the user via spoken or physical (e.g., clicking) responses. Continuing the example above, when task flow processing module 736 invokes dialog flow processing module 734 to determine the "party size" and "date" information for the structured query associated with the domain "restaurant reservation," dialog flow processing module 734 may generate questions such as "How many people are in your party?" and "On which day?" to pass to the user. Once answers are received from the user, dialog flow processing module 734 may populate the structured query with the missing information or pass the information to task flow processing module 736 to complete the missing information in the structured query.
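
A small sketch of this completion step is shown below; the prompt wording and the canned answers are assumptions, and a real system would route the questions and answers through I/O processing module 728.

```python
# Illustrative sketch of a dialog step that asks for the parameters still
# missing from a structured query. Prompts and canned answers are assumptions.

PROMPTS = {
    "party size": "How many people are in your party?",
    "date": "On which day would you like the reservation?",
}

def complete_query(query, required, answer_source):
    """Ask for each missing parameter and fold the answer back into the query."""
    for param in required:
        if param not in query:
            question = PROMPTS.get(param, f"What is the {param}?")
            query[param] = answer_source(question)
    return query

if __name__ == "__main__":
    canned = {"How many people are in your party?": "5",
              "On which day would you like the reservation?": "3/12"}
    query = {"actionable intent": "restaurant reservation",
             "cuisine": "sushi", "time": "7 PM"}
    done = complete_query(query, ("cuisine", "time", "date", "party size"),
                          answer_source=lambda q: canned[q])
    print(done)
```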

Once task flow processing module 736 has completed the structured query for the actionable intent, task flow processing module 736 may proceed to perform the final task associated with the actionable intent. Accordingly, task flow processing module 736 may execute the steps and instructions in the task flow model according to the specific parameters contained in the structured query. For example, the task flow model for the actionable intent "restaurant reservation" may include steps and instructions for contacting a restaurant and actually requesting a reservation for a particular party size at a particular time. For example, using a structured query such as {restaurant reservation, restaurant = ABC Cafe, date = 3/12/2012, time = 7 PM, party size = 5}, task flow processing module 736 may perform the following steps: (1) logging onto a server of the ABC Cafe or a restaurant reservation system; (2) entering the date, time, and party size information in a form on the website; (3) submitting the form; and (4) making a calendar entry for the reservation in the user's calendar.

In some examples, the task flow processing module 736 may utilize the assistance of the service processing module 738 ("service processing module") to complete a task requested in the user input or to provide an informational answer requested in the user input. For example, the service processing module 738 may initiate phone calls, set calendar entries, invoke map searches, invoke or interact with other user applications installed on the user device, and invoke or interact with third-party services (e.g., restaurant reservation portals, social networking sites, bank portals, etc.) on behalf of the task flow processing module 736. In some examples, the protocols and Application Programming Interfaces (APIs) required for each service may be specified by respective ones of service models 756. The service handling module 738 may access the appropriate service model for the service and generate a request for the service according to the service model according to the protocols and APIs required by the service.

For example, if a restaurant has enabled an online reservation service, the restaurant may submit a service model that specifies the necessary parameters to make a reservation and an API to communicate the values of the necessary parameters to the online reservation service. The service processing module 738, when requested by the task flow processing module 736, can use the Web address stored in the service model to establish a network connection with the online booking service and send the necessary parameters for booking (e.g., time, date, party size) to the online booking interface in a format that conforms to the API of the online booking service.
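
The sketch below illustrates how a service model might drive the formatting of such a request; the endpoint URL, field names, and JSON payload format are hypothetical, and no network call is made.

```python
# Illustrative sketch of a service model and of a service-processing step that
# formats a reservation request according to that model. The URL, field names,
# and payload format are hypothetical examples only.

import json

SERVICE_MODEL = {
    "service": "online reservation",
    "endpoint": "https://reservations.example.com/api/book",   # hypothetical URL
    "required": ["restaurant", "date", "time", "party_size"],
    "format": "json",
}

def build_service_request(service_model, params):
    missing = [f for f in service_model["required"] if f not in params]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    body = {f: params[f] for f in service_model["required"]}
    return {
        "url": service_model["endpoint"],
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

if __name__ == "__main__":
    request = build_service_request(SERVICE_MODEL, {
        "restaurant": "ABC Cafe", "date": "3/12", "time": "7 PM", "party_size": 5,
    })
    print(request["url"])
    print(request["body"])
```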

In some examples, the natural language processing module 732, the dialog processing module 734, and the task flow processing module 736 may be used jointly and iteratively to infer and define the user's intent, obtain information, to further clarify and refine the user's intent, and ultimately generate a response (i.e., output to the user or complete a task) to satisfy the user's intent. The generated response may be a dialog response to the speech input that at least partially satisfies the user's intent. Further, in some examples, the generated response may be output as a speech output. In these examples, the generated response may be sent to a speech synthesis module 740 (e.g., a speech synthesizer) where it may be processed to synthesize the dialog response in speech form. In other examples, the generated response may be data content relevant to satisfying the user request in the voice input.

Speech synthesis module 740 may be configured to synthesize speech output for presentation to the user. Speech synthesis module 740 synthesizes the speech output based on text provided by the digital assistant. For example, the generated dialog response may be in the form of a text string. Speech synthesis module 740 may convert the text string into audible speech output. Speech synthesis module 740 may use any appropriate speech synthesis technique to generate speech output from text, including, but not limited to: concatenative synthesis, unit selection synthesis, diphone synthesis, domain-specific synthesis, formant synthesis, articulatory synthesis, hidden Markov model (HMM) based synthesis, and sine wave synthesis. In some examples, speech synthesis module 740 may be configured to synthesize individual words based on phoneme strings corresponding to the words. For example, a phoneme string may be associated with a word in the generated dialog response. The phoneme string may be stored in metadata associated with the word. Speech synthesis module 740 may be configured to directly process the phoneme string in the metadata to synthesize the word in speech form.
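
As an informal illustration of concatenative synthesis from a phoneme string, the sketch below concatenates one prerecorded unit per phoneme; the tone-based units and the phoneme inventory are synthetic stand-ins, not recorded speech.

```python
# Illustrative sketch of concatenative synthesis from a phoneme string: each
# phoneme is mapped to a stored audio unit and the units are concatenated.
# Here the "units" are short tones standing in for recorded speech.

import numpy as np

SAMPLE_RATE = 16000

def fake_unit(freq, ms=80):
    """Stand-in for a recorded phoneme unit: a short tone."""
    t = np.linspace(0, ms / 1000, int(SAMPLE_RATE * ms / 1000), endpoint=False)
    return 0.1 * np.sin(2 * np.pi * freq * t)

UNIT_INVENTORY = {"t": fake_unit(200), "ah": fake_unit(300),
                  "m": fake_unit(250), "ey": fake_unit(350), "ow": fake_unit(400)}

def synthesize(phoneme_string):
    units = [UNIT_INVENTORY[p] for p in phoneme_string.split()]
    return np.concatenate(units)

if __name__ == "__main__":
    # A phoneme string of the kind that might be stored in metadata for a word
    audio = synthesize("t ah m ey t ow")
    print(audio.shape)     # samples ready to hand to an audio output device
```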

In some examples, instead of (or in addition to) using speech synthesis module 740, speech synthesis may be performed on a remote device (e.g., server system 108), and the synthesized speech may be sent to the user device for output to the user. This may occur, for example, in some implementations in which the output for the digital assistant is generated at a server system. Because a server system generally has more processing power and resources than a user device, it may be possible to obtain higher quality speech output than client-side synthesis would achieve.

Additional details regarding digital assistants can be found in U.S. Utility Patent Application No. 12/987,982, entitled "Intelligent Automated Assistant," filed January 10, 2011, and U.S. Utility Patent Application No. 13/251,088, entitled "Generating and Processing Task Items That Represent Tasks to Perform," filed September 30, 2011, the entire disclosures of which are incorporated herein by reference.

4. Exemplary functionality of the digital assistant - intelligent search and object management

Fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C illustrate functions of performing tasks by a digital assistant using a search process or an object management process. In some examples, a digital assistant system (e.g., digital assistant system 700) is implemented by a user device according to various examples. In some examples, a user device, a server (e.g., server 108), or a combination thereof may implement a digital assistant system (e.g., digital assistant system 700). For example, the user device may be implemented using the device 104, 200, or 400. In some examples, the user device is a laptop computer, a desktop computer, or a tablet computer. The user device may operate in a multitasking environment, such as a desktop computer environment.

Referring to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C, in some examples, a user device provides various user interfaces (e.g., user interfaces 810, 910, 1010, 1110, 1210, and 1310). The user device displays the various user interfaces on a display (e.g., touch-sensitive display system 212, display 440) associated with the user device. The various user interfaces provide one or more affordances representing different processes (e.g., affordances 820, 920, 1020, 1120, 1220, and 1320 representing a search process, and affordances 830, 930, 1030, 1130, 1230, and 1330 representing an object management process). The one or more processes may be instantiated directly or indirectly by the user. For example, the user instantiates one or more processes by selecting an affordance using an input device such as a keyboard, a mouse, a joystick, a finger, or the like. The user may also instantiate one or more processes using speech input, as described in more detail below. Instantiating a process includes invoking a process that is not already executing. If at least one instance of the process is executing, instantiating the process includes executing an existing instance of the process or generating a new instance of the process. For example, instantiating an object management process includes invoking the object management process, using an existing object management process, or generating a new instance of the object management process.

As shown in fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C, the user device displays affordances (e.g., affordances 840, 940, 1040, 1140, 1240, and 1340) on a user interface (e.g., user interfaces 810, 910, 1010, 1110, 1210, and 1310) to instantiate a digital assistant service. For example, the affordance may be a microphone icon representing the digital assistant. The affordance may be displayed anywhere on the user interface. For example, the affordance may be displayed in a task bar at the bottom of the user interface (e.g., task bars 808, 908, 1008, 1108, 1208, and 1308), in a menu bar at the top of the user interface (e.g., menu bars 806, 906, 1006, 1106, 1206, and 1306), or in a notification center at the right side of the user interface. The affordance may also be displayed dynamically on the user interface. For example, the user device displays the affordance near an application user interface (e.g., an application window) to facilitate instantiation of the digital assistant service.

In some examples, the digital assistant is instantiated in response to receiving a predetermined phrase. For example, the digital assistant is invoked in response to receiving a phrase such as "Hey, Assistant," "Wake up, Assistant," "Listen up, Assistant," "OK, Assistant," or the like. In some examples, the digital assistant is instantiated in response to receiving a selection of an affordance. For example, the user selects affordance 840, 940, 1040, 1140, 1240, and/or 1340 using an input device such as a mouse, a stylus, a finger, or the like. Providing a digital assistant on a user device consumes computing resources (e.g., power, network bandwidth, memory, and processor cycles). In some examples, the digital assistant is suspended or shut down until the user invokes it. In some examples, the digital assistant is active during various periods of time. For example, the digital assistant may be active and monitoring the user's speech input while the various user interfaces are displayed, while the user device is turned on, while the user device is in a dormant or sleep state, while the user is logged off, or any combination thereof.

Referring to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C, the digital assistant receives one or more voice inputs from the user, such as voice inputs 852, 854, 855, 856, 952, 954, 1052, 1054, 1152, 1252, or 1352. The user provides various speech inputs, for example for the purpose of performing tasks using a search process or an object management process. In some examples, the digital assistant receives voice input from the user directly at the user device or indirectly through another electronic device communicatively connected to the user device. The digital assistant receives voice input directly from the user, for example, through a microphone of the user device (e.g., microphone 213). User devices include devices configured to operate in a multitasking environment, such as laptop computers, desktop computers, tablets, servers, and the like. The digital assistant may also receive voice input indirectly through one or more electronic devices, such as a headset, a smartphone, a tablet, and so forth. For example, the user may speak into a headset (not shown). The headset receives speech input from the user and transmits the speech input or a representation thereof to a digital assistant of the user device via, for example, a bluetooth connection between the headset and the user device.

With reference to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C, in some embodiments, a digital assistant (e.g., represented by affordances 840, 940, 1040, 1140, 1240, and 1340) identifies contextual information associated with a user device. The contextual information includes, for example, user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data. An object may be a target or component of a process (e.g., an object management process) associated with performing a task or a graphical element currently displayed on a screen, and the object or graphical element may or may not currently have focus (e.g., currently selected). For example, the objects may include files (e.g., photos, documents), folders, communication applications (e.g., emails, messages, notifications, or voicemails), contacts, calendars, applications, online resources, and so forth. In some examples, the user-specific data includes log information, user preferences, a history of user interactions with the user device, and the like. The log information indicates the most recently used object (e.g., presentation file) in the process. In some examples, metadata associated with one or more objects includes a title of the object, time information of the object, an author of the object, a summary of the object, and so forth. In some examples, the sensor data includes various data collected by sensors associated with the user device. For example, the sensor data includes location data indicative of a physical location of the user device. In some examples, the user device configuration data includes a current device configuration. For example, the device configuration indicates that the user device is communicatively connected to one or more electronic devices, such as a smart phone, a tablet, and the like. As described in more detail below, the user device may perform one or more processes using the context information.

Referring to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C, in response to receiving a speech input, the digital assistant determines a user intent based on the speech input. As described above, in some examples, the digital assistant processes the speech input through an I/O processing module (e.g., I/O processing module 728 shown in fig. 7B), an STT processing module (e.g., STT processing module 730 shown in fig. 7B), and a natural language processing module (e.g., natural language processing module 732 shown in fig. 7B). The I/O processing module forwards the speech input to the STT processing module (or speech recognizer) for speech-to-text conversion. The speech-to-text conversion generates text based on the speech input. As described above, the STT processing module generates a sequence of words or tokens ("token sequence") and provides the token sequence to the natural language processing module. The natural language processing module performs natural language processing on the text and determines a user intent based on a result of the natural language processing. For example, the natural language processing module may attempt to associate the token sequence with one or more actionable intents recognized by the digital assistant. As previously described, once the natural language processing module identifies an actionable intent based on the user input, it generates a structured query to represent the identified actionable intent. The structured query includes one or more parameters associated with the actionable intent. The one or more parameters are used to facilitate performance of a task based on the actionable intent.
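
By way of a non-limiting illustration, the following Python sketch shows one way a structured query pairing an actionable intent with its parameters could be represented and produced from a token sequence. The intent names, parameter keys, and hard-coded matching rules are assumptions made for this example only; they are not the natural language processing implementation described here.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredQuery:
    """An actionable intent plus the parameters needed to perform it."""
    actionable_intent: str
    parameters: dict = field(default_factory=dict)

def build_structured_query(token_sequence: list) -> StructuredQuery:
    """Toy intent matcher: maps a token sequence to an actionable intent.

    Real natural language processing would rely on ontologies and statistical
    models; a few hard-coded rules stand in for that step here.
    """
    text = " ".join(token_sequence).lower()
    if "stock price" in text:
        return StructuredQuery(
            actionable_intent="obtain_online_information",
            parameters={"topic": "AAPL stock price", "date": "today"},
        )
    if "photos" in text:
        return StructuredQuery(
            actionable_intent="display_photos",
            parameters={"scope": "all", "event": "Colorado trip"},
        )
    return StructuredQuery(actionable_intent="unknown")

if __name__ == "__main__":
    tokens = "open search process and find today's AAPL stock price".split()
    print(build_structured_query(tokens))
```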

In some embodiments, the digital assistant further determines whether the user intends to perform a task using a search process or an object management process. The search process is configured to search data stored internally or externally to the user device. An object management process is configured to manage objects associated with a user device. Various examples of determining user intent are provided in more detail below with respect to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, and 13A-13C.

Referring to fig. 8A, in some examples, a user device receives speech input 852 from a user to instantiate a digital assistant. For example, speech input 852 includes "hey, assistant." In response to the voice input, the user device instantiates the digital assistant represented by affordance 840 or 841, such that the digital assistant actively monitors subsequent voice inputs. In some examples, the digital assistant provides spoken output 872 indicating that it has been instantiated. For example, spoken output 872 includes "Go ahead, I am listening." In some examples, the user device receives a selection of affordance 840 or affordance 841 from the user to instantiate the digital assistant. Selection of the affordance is performed using an input device, such as a mouse, stylus, finger, or the like.

Referring to fig. 8B, in some examples, the digital assistant receives a speech input 854. For example, voice input 854 includes "open search process and look up today's AAPL stock price," or simply "show me today's AAPL stock price". Based on the speech input 854, the digital assistant determines a user intent. For example, to determine the user intent, the digital assistant determines that the actionable intent is to obtain online information, and the one or more parameters associated with the actionable intent include "AAPL stock price" and "today".

As previously described, in some examples, the digital assistant further determines whether the user intends to perform a task using a search process or an object management process. In some embodiments, to make the determination, the digital assistant determines whether the speech input includes one or more keywords that represent a search process or an object management process. For example, the digital assistant determines that the speech input 854 includes a keyword or phrase, such as "open search process," indicating that the user intends to perform a task using the search process. Thus, the digital assistant determines that the user's intent is to perform a task using a search process.
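
A minimal sketch of such a keyword check is shown below, assuming Python and illustrative keyword lists; an actual assistant would match far more variants and would rely on natural language processing rather than literal substring tests.

```python
from typing import Optional

# Illustrative keyword lists; not an exhaustive or authoritative set.
SEARCH_KEYWORDS = ("search process", "searching process")
OBJECT_MGMT_KEYWORDS = ("object management process", "object managing process")

def process_hint_from_keywords(utterance: str) -> Optional[str]:
    """Return which process the utterance explicitly names, if any."""
    text = utterance.lower()
    if any(keyword in text for keyword in SEARCH_KEYWORDS):
        return "search_process"
    if any(keyword in text for keyword in OBJECT_MGMT_KEYWORDS):
        return "object_management_process"
    return None  # no keyword found; further disambiguation is needed

print(process_hint_from_keywords("Open search process and find today's AAPL stock price"))
print(process_hint_from_keywords("What is the Warriors' game score today?"))
```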

As shown in fig. 8B, in accordance with a determination that the user intent is to perform a task using a search process, the digital assistant performs the task using the search process. As previously described, the natural language processing module of the digital assistant generates a structured query based on the user intent and passes the generated structured query to a task flow processing module (e.g., task flow processing module 736). The task flow processing module receives the structured query from the natural language processing module, completes the structured query if necessary, and performs the actions required to "complete" the user's final request. Performing a task using the search process includes, for example, searching for at least one object. In some embodiments, the at least one object includes a folder, a file (e.g., a photo, audio, video), a communication application (e.g., email, message, notification, or voicemail), a contact, a calendar, an application (e.g., Keynote, Numbers, iTunes, Safari), an online information source (e.g., Google, Yahoo, Bloomberg), or a combination thereof. In some examples, searching for an object is based on metadata associated with the object. For example, searching for a file or folder may use metadata such as a tag, date, time, author, title, file type, size, number of pages, and/or file location associated with the folder or file. In some examples, the files or folders are stored internally or externally to the user device. For example, the files or folders may be stored on a hard disk of the user device or on a cloud server. In some examples, searching for a communication application is based on metadata associated with the communication application. For example, searching for an email uses metadata such as the sender of the email, the recipient of the email, the date the email was sent or received, and the like.

As shown in fig. 8B, the digital assistant performs the search in accordance with a determination that the user's intent is to use the search process to obtain AAPL stock prices. For example, the digital assistant instantiates a search process represented by affordance 820 and causes the search process to search for today's AAPL stock prices. In some examples, the digital assistant further causes the search process to display a user interface 822 (e.g., a snippet or window) that provides text corresponding to the speech input 854 (e.g., "open search process and find today's AAPL stock price").

Referring to fig. 8C, in some embodiments, the digital assistant provides a response based on the results of performing a task using the search process. As shown in fig. 8C, as a result of searching for the AAPL stock price, the digital assistant displays a user interface 824 (e.g., a snippet or window) that provides results of performing the task using the search process. In some embodiments, user interface 824 is located within user interface 822 as a separate user interface. In some embodiments, user interfaces 824 and 822 are integrated together as a single user interface. On the user interface 824, the search results for the AAPL stock price are displayed. In some embodiments, user interface 824 further provides affordances 831 and 833. Affordance 831 enables user interface 824 to be closed. For example, if the digital assistant receives a user selection of affordance 831, the user interface 824 disappears or is closed from the display of the user device. Affordance 833 enables moving or sharing of search results displayed on the user interface 824. For example, if the digital assistant receives a user selection of affordance 833, it instantiates a process (e.g., an object management process) to move or share the user interface 824 (or its search results) with a notification application. As shown in fig. 8C, the digital assistant displays a user interface 826 associated with the notification application to provide the search results for the AAPL stock price. In some embodiments, user interface 826 displays affordance 827. Affordance 827 enables scrolling within user interface 826 so that a user can view the entire content (e.g., multiple notifications), and/or indicates the relative position of the displayed content with respect to its entire length and/or width within user interface 826. In some embodiments, the user interface 826 displays results and/or conversation history stored by the digital assistant (e.g., search results obtained from current and/or past search processes). Further, in some examples, the results of performing the task are dynamically updated over time. For example, the AAPL stock price may be dynamically updated over time and displayed on user interface 826.

In some embodiments, the digital assistant also provides spoken output corresponding to the search results. For example, a digital assistant (e.g., the digital assistant represented by affordance 840) provides spoken output 874, including "Today's AAPL price is $100.00." In some examples, the user interface 822 includes text corresponding to the spoken output 874.

Referring to fig. 8D, in some examples, the digital assistant instantiates a process (e.g., an object management process) to move or share search results displayed on the user interface 824 in response to a subsequent speech input. For example, the digital assistant receives a voice input 855 such as "Copy the AAPL stock price to my notepad." In response, the digital assistant instantiates a process to move or copy the search results (e.g., the AAPL stock price) to the user's notepad. As shown in fig. 8D, in some examples, the digital assistant further displays a user interface 825 that provides the search results copied or moved into the user's notepad. In some examples, the digital assistant further provides a spoken output 875, such as "OK, the AAPL stock price has been copied to your notepad." In some examples, user interface 822 includes text corresponding to spoken output 875.

Referring to fig. 8E, in some examples, the digital assistant determines that the user intends to perform a task using an object management process and then performs the task using the object management process. For example, the digital assistant receives voice input 856, such as "Open the object management process and show me all the photos from my Colorado trip," or simply "Show me all the photos from my Colorado trip." Based on the speech input 856 and the contextual information, the digital assistant determines the user intent. For example, the digital assistant determines that the actionable intent is to display photos and determines one or more parameters, such as "all" and "Colorado trip." The digital assistant further uses the contextual information to determine which photos correspond to the user's Colorado trip. As previously described, the contextual information includes user-specific data, metadata of one or more objects, sensor data, and/or device configuration data. For example, metadata associated with one or more files (e.g., file 1, file 2, and file 3 displayed in user interface 832) indicates that the file name includes the word "Colorado" or a city name in Colorado (e.g., "Denver"). The metadata may also indicate that a folder name includes the word "Colorado" or a city name in Colorado (e.g., "Denver"). As another example, the sensor data (e.g., GPS data) indicates that the user was traveling in Colorado during a certain time period. Thus, any photo taken by the user during that particular time period is a photo taken during the Colorado trip. Likewise, the photo itself may include geotag metadata that associates the photo with a location. For example, based on the contextual information, the digital assistant determines that the user's intent is to display photos stored in a folder with the folder name "Colorado trip," or to display photos taken by the user during the time period of the Colorado trip.
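
The following sketch illustrates, under simplifying assumptions, how sensor data and photo metadata could be combined to resolve "my Colorado trip" into a concrete set of photos: a hypothetical GPS log yields a time window, and photos whose creation times fall inside that window are selected. The field names, timestamps, and the state-level location granularity are illustrative only.

```python
from datetime import datetime

# Hypothetical GPS log entries: (timestamp, U.S. state inferred from coordinates)
gps_log = [
    (datetime(2016, 6, 10, 9, 0), "California"),
    (datetime(2016, 6, 12, 14, 0), "Colorado"),
    (datetime(2016, 6, 15, 18, 0), "Colorado"),
    (datetime(2016, 6, 16, 8, 0), "California"),
]

# Hypothetical photo metadata: file name and creation time
photos = [
    {"name": "IMG_001.jpg", "created": datetime(2016, 6, 12, 15, 30)},
    {"name": "IMG_002.jpg", "created": datetime(2016, 6, 14, 11, 5)},
    {"name": "IMG_003.jpg", "created": datetime(2016, 6, 20, 9, 45)},
]

def trip_window(log, place):
    """Return the first and last times the sensor data places the user at `place`."""
    times = [t for t, state in log if state == place]
    return (min(times), max(times)) if times else None

def photos_from_trip(photo_list, window):
    """Select photos whose creation time falls inside the trip window."""
    start, end = window
    return [p["name"] for p in photo_list if start <= p["created"] <= end]

window = trip_window(gps_log, "Colorado")
print(photos_from_trip(photos, window))  # -> ['IMG_001.jpg', 'IMG_002.jpg']
```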

As previously described, in some examples, the digital assistant determines whether the user intends to perform a task using a search process or an object management process. To make such a determination, the digital assistant determines whether the speech input includes one or more keywords that represent the search process or the object management process. For example, the digital assistant determines that the voice input 856 includes a keyword or phrase, such as "open the object management process," indicating that the user intends to perform the task using the object management process.

In accordance with a determination that the user intent is to perform a task using an object management process, the digital assistant performs the task using the object management process. For example, the digital assistant searches for at least one object using an object management process. In some examples, the at least one object includes at least one of a folder or a file. The file may include at least one of a photograph, audio (e.g., a song), or video (e.g., a movie). In some examples, searching for a file or folder is based on metadata associated with the folder or file. For example, searching for a file or folder uses metadata such as a flag, date, time, author, title, file type, size, number of pages, and/or file location associated with the folder or file. In some examples, the files or folders may be stored internally or externally to the user device. For example, the files or folders may be stored on a hard disk of the user device or on a cloud server.

As shown in fig. 8E, the digital assistant performs the task using the object management process, for example, in accordance with a determination that the user's intent is to display photos stored in a folder with the folder name "Colorado trip," or to display photos taken by the user during the time period of the Colorado trip. For example, the digital assistant instantiates the object management process represented by affordance 830 and causes the object management process to search for photos from the user's Colorado trip. In some examples, the digital assistant also causes the object management process to display a snippet or window (not shown) that provides text corresponding to the user's voice input 856.

Referring to fig. 8F, in some embodiments, the digital assistant further provides a response based on the results of performing the task using the object management process. As shown in fig. 8F, as a result of searching for the user's Colorado trip photos, the digital assistant displays a user interface 834 (e.g., a snippet or window) that provides the results of performing the task using the object management process. For example, on the user interface 834, a preview of the photos is displayed. In some examples, the digital assistant instantiates a process (e.g., an object management process) to perform additional tasks on the photos, such as inserting the photos into a document or attaching the photos to an email. As described in more detail below, the digital assistant can instantiate a process to perform additional tasks in response to additional speech input by the user. Likewise, the digital assistant may perform multiple tasks in response to a single voice input, such as "Send the photos from my Colorado trip to my mom via email." The digital assistant can also instantiate a process to perform such additional tasks in response to input provided by the user using an input device (e.g., selecting one or more affordances via mouse input or performing a drag-and-drop operation). In some embodiments, the digital assistant further provides a spoken output corresponding to the result. For example, the digital assistant provides spoken output 876 including "Here are the photos from your Colorado trip."

Referring to fig. 9A, in some examples, the user's voice input may not include one or more keywords indicating whether the user intends to use a search process or an object management process. For example, the user provides a voice input 952, such as "What is the Warriors' game score today?" The voice input 952 does not include a keyword indicating "search process" or "object management process." Thus, the digital assistant may not be able to use keywords to determine whether the user intends to perform the task using a search process or an object management process.

In some embodiments, to determine whether the user intends to perform a task using a search process or an object management process, the digital assistant determines whether the task is associated with a search based on the speech input. In some examples, tasks associated with a search may be performed by a search process or an object management process. For example, both the search process and the object management process may search for folders and files. In some examples, the search process may further search various objects, including online information sources (e.g., websites), communications (e.g., emails), contacts, calendars, and so forth. In some examples, the object management process may not be configured to search for certain objects, such as online information sources.

In accordance with a determination that the task is associated with a search, the digital assistant further determines whether a search process is required to perform the task. As previously described, if a task is associated with a search, either a search process or an object management process may be used to perform the task. However, the object management process may not be configured to search for certain objects. Thus, to determine whether the user intends to use the search process or the object management process, the digital assistant further determines whether the task requires a search process. For example, as shown in fig. 9A, based on the voice input 952, the digital assistant determines that the user's intent is to obtain the Warriors' game score today. Based on the user intent, the digital assistant further determines that performing the task requires searching online information sources, and thus the task is associated with a search. The digital assistant further determines whether a search process is required to perform the task. As previously described, in some examples, the search process is configured to search online information sources, such as websites, while the object management process may not be configured to search such online information sources. Thus, the digital assistant determines that a search process is required to search the online information sources (e.g., to search the Warriors' website for the score).
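
One way to capture this capability check is sketched below: each process advertises the object types it can search, and a search process is required exactly when the requested type falls outside what the object management process supports. The type names and capability sets are assumptions made for illustration.

```python
# Illustrative capability sets; real processes would expose far richer metadata.
SEARCH_PROCESS_TYPES = {
    "file", "folder", "online_information", "email", "contact", "calendar",
}
OBJECT_MANAGEMENT_TYPES = {"file", "folder"}

def requires_search_process(object_type: str) -> bool:
    """True when only the search process can handle the requested object type."""
    return (object_type in SEARCH_PROCESS_TYPES
            and object_type not in OBJECT_MANAGEMENT_TYPES)

print(requires_search_process("online_information"))  # True: game score from a website
print(requires_search_process("file"))                # False: either process can search files
```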

Referring to fig. 9B, in some embodiments, in accordance with a determination that a search process is required to perform the task, the digital assistant performs the task using the search process. For example, in accordance with a determination that searching for the Warriors' game score today requires a search process, the digital assistant instantiates the search process represented by affordance 920 and causes the search process to search for the Warriors' game score today. In some examples, the digital assistant further causes the search process to display a user interface 922 (e.g., a snippet or window) in which text corresponding to the user speech input 952 is provided (e.g., "What is the Warriors' game score today?"). The user interface 922 includes one or more affordances 921 and 927. Similar to the above, affordance 921 (e.g., a close button) enables closing of user interface 922, and affordance 927 (e.g., a scroll bar) enables scrolling within user interface 922 so that a user can view the entire content within user interface 922.

Referring to fig. 9B, in some examples, the digital assistant further provides one or more responses based on the search results. As shown in fig. 9B, as a result of searching for the Warriors' game score today, the digital assistant displays a user interface 924 (e.g., a snippet or window) that provides results of performing the task using the search process. In some embodiments, the user interface 924 is located within the user interface 922 as a separate user interface. In some embodiments, user interfaces 924 and 922 are integrated together as a single user interface. In some examples, the digital assistant displays the user interface 924 that provides the current search results (e.g., the Warriors' game score) and another user interface (e.g., user interface 824 shown in fig. 8C) that provides previous search results (e.g., the AAPL stock price). In some embodiments, the digital assistant only displays the user interface 924 that provides the current search results and does not display another user interface that provides previous search results. As shown in fig. 9B, the digital assistant only displays the user interface 924 to provide the current search results (e.g., the Warriors' game score). In some examples, the affordance 927 (e.g., a scroll bar) enables scrolling within the user interface 922 so that the user can view previous search results. Further, in some examples, previous search results (e.g., stock prices, sports scores, weather forecasts, etc.) are dynamically updated or refreshed over time.

As shown in fig. 9B, on the user interface 924, the search result for the Warriors' game score today (e.g., Warriors 104-89 Cavaliers) is displayed. In some embodiments, user interface 924 further provides affordances 923 and 925. Affordance 923 enables closing of the user interface 924. For example, if the digital assistant receives a user selection of affordance 923, user interface 924 disappears or is closed from the display of the user device. Affordance 925 enables moving or sharing of search results displayed on the user interface 924. For example, if the digital assistant receives a user selection of affordance 925, it may move or share the user interface 924 (or its search results) with the notification application. As shown in fig. 9B, the digital assistant displays a user interface 926 associated with the notification application to provide the search results for the Warriors' game score. As previously described, the results of performing the task are dynamically updated over time. For example, the Warriors' game score may be dynamically updated over time while the game is in progress and displayed on the user interface 924 (e.g., a snippet or window) and/or the user interface 926 (e.g., the notification application user interface). In some embodiments, the digital assistant further provides spoken output corresponding to the search results. For example, the digital assistant represented by affordance 940 or 941 provides a spoken output 972, such as "The Warriors beat the Cavaliers, 104-89." In some examples, the user interface 922 (e.g., a snippet or window) provides text corresponding to the spoken output 972.

As described above, in some embodiments, the digital assistant determines whether a task is associated with a search, and based on such a determination, the digital assistant determines whether a search process is required to perform the task. Referring to fig. 9C, in some embodiments, the digital assistant determines that a search process is not required to perform the task. For example, as shown in fig. 9C, the digital assistant receives a voice input 954, such as "Show me all the files named 'Expenses'." Based on the voice input 954 and the context information, the digital assistant determines that the user's intent is to display all files containing the word "Expenses" (or a portion, variation, or paraphrase thereof) in their file names, metadata, file content, and the like. The digital assistant determines, based on the user's intent, that the task to be performed includes searching all files associated with the word "Expenses." Thus, the digital assistant determines that performing the task is associated with a search. As described above, in some examples, both the search process and the object management process can perform a search for files. Thus, the digital assistant determines that a search process is not required to perform the task of searching all files associated with the word "Expenses."

Referring to fig. 9D, in some examples, in accordance with a determination that a search process is not required to perform the task, the digital assistant determines whether to perform the task using the search process or the object management process based on a predetermined configuration. For example, if both the search process and the object management process can perform tasks, the predetermined configuration may indicate that the tasks are to be performed using the search process. The predetermined configuration may be generated and updated using contextual information such as user preferences or user-specific data. For example, the digital assistant determines that the search process was previously selected more frequently than the object management process when performing a file search for a particular user. Thus, the digital assistant generates or updates a predetermined configuration to indicate that the search process is the default process for searching for files. In some examples, the digital assistant generates or updates a predetermined configuration to indicate that the object management process is a default process.
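
A rough sketch of such a predetermined configuration, assuming it is derived from a per-user log of past process selections, might look as follows; the history format, the tie-breaking rule, and the fallback value are illustrative assumptions.

```python
from collections import Counter

def default_process(selection_history, fallback="search_process"):
    """Pick the process the user has chosen most often for past file searches.

    `selection_history` is a hypothetical per-user log recording which process
    handled previous searches; an empty history falls back to a default choice.
    """
    if not selection_history:
        return fallback
    process, _ = Counter(selection_history).most_common(1)[0]
    return process

history = ["search_process", "search_process", "object_management_process"]
print(default_process(history))  # -> 'search_process'
```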

As shown in fig. 9D, based on the predetermined configuration, the digital assistant determines that the task of searching all files associated with the word "Expenses" is to be performed using a search process. Thus, the digital assistant performs the search of all files associated with the word "Expenses" using the search process. For example, the digital assistant instantiates the search process represented by the affordance 920 displayed on the user interface 910 and causes the search process to search all files associated with the word "Expenses." In some examples, the digital assistant further provides a spoken output 974 informing the user that the task is being performed. Spoken output 974 includes, for example, "OK, searching all files named 'Expenses'." In some examples, the digital assistant further causes the search process to display a user interface 928 (e.g., a snippet or window) that provides text corresponding to the speech input 954 and the spoken output 974.

Referring to fig. 9E, in some embodiments, the digital assistant further provides one or more responses based on results of performing the task using the search process. As shown in fig. 9E, as a result of searching all files associated with the word "Expenses," the digital assistant displays a user interface 947 (e.g., a snippet or window) that provides the search results. In some embodiments, the user interface 947 is located within the user interface 928 as a separate user interface. In some embodiments, the user interfaces 947 and 928 are integrated together as a single user interface. On the user interface 947, a list of files associated with the word "Expenses" is displayed. In some embodiments, the digital assistant further provides spoken output corresponding to the search results. For example, the digital assistant represented by affordance 940 or 941 provides spoken output 976, such as "Here are all the files named 'Expenses'." In some examples, the digital assistant further provides text on the user interface 928 that corresponds to the spoken output 976.

In some embodiments, the digital assistant provides one or more links associated with the results of performing a task using the search process. The links enable instantiation of processes (e.g., opening a file, invoking an object management process) using the search results. As shown in fig. 9E, on the user interface 947, the files in the list, represented by their file names (e.g., Expenses File 1, Expenses File 2, Expenses File 3), can be associated with links. For example, a link is displayed to one side of each file name. As another example, a file name is displayed in a particular color (e.g., blue) indicating that the file name is associated with a link. In some examples, the file name associated with the link is displayed in the same color as the other items displayed on the user interface 947.

As previously described, the links enable a process to be instantiated using the search results. Instantiating a process includes invoking a process that is not already running. If at least one instance of the process is running, instantiating the process includes executing an existing instance of the process or generating a new instance of the process. For example, instantiating an object management process includes invoking the object management process, using an existing object management process, or generating a new instance of the object management process. As shown in fig. 9E and 9F, the link displayed on the user interface 947 enables management of an object (e.g., a file) associated with the link. For example, user interface 947 receives a user selection (e.g., selection via cursor 934) of a link associated with a file (e.g., "Expenses File 3"). In response, the digital assistant instantiates the object management process represented by the affordance 930 to enable management of the file. As shown in fig. 9F, the digital assistant displays a user interface 936 (e.g., a snippet or window) that provides the folder containing the file associated with the link (e.g., "Expenses File 3"). Using the user interface 936, the digital assistant instantiates the object management process to perform one or more additional tasks (e.g., copy, edit, view, move, compress, etc.) with respect to the file.

Referring again to fig. 9E, in some examples, the links displayed on the user interface 947 enable direct viewing and/or editing of an object. For example, the digital assistant receives a selection (e.g., selection via cursor 934) of a link associated with a file (e.g., "Expenses File 3") via user interface 947. In response, the digital assistant instantiates a process (e.g., a document viewing/editing process) to view and/or edit the file. In some examples, the digital assistant instantiates the process to view and/or edit the file without instantiating the object management process. For example, the digital assistant directly instantiates a Numbers process or an Excel process to view and/or edit Expenses File 3.

Referring to fig. 9E and 9G, in some examples, the digital assistant instantiates a process (e.g., a search process) to refine the search results. As shown in fig. 9E and 9G, the user may wish to refine the search results displayed on the user interface 947. For example, the user may desire to select one or more files from the search results. In some examples, the digital assistant receives a voice input 977 from the user, such as "Show only the ones that Kevin sent to me and that I tagged as drafts." Based on the voice input 977 and the contextual information, the digital assistant determines that the user intends to display only the Expenses files that were sent by Kevin and that are associated with a draft tag. Based on the user intent, the digital assistant instantiates a process (e.g., a search process) to refine the search results. For example, as shown in fig. 9G, based on the search results, the digital assistant determines that Expenses File 1 and Expenses File 2 were sent to the user by Kevin and are tagged. Thus, the digital assistant continues to display both files on the user interface 947 and removes Expenses File 3 from the user interface 947. In some examples, the digital assistant provides spoken output 978, such as "These are the files that Kevin sent to you and that you tagged as drafts." The digital assistant can further provide text on the user interface 928 that corresponds to the spoken output 978.

Referring to fig. 9H, in some examples, the digital assistant instantiates a process (e.g., an object management process) to perform an object management task (e.g., copy, move, share, etc.). For example, as shown in fig. 9H, the digital assistant receives a voice input 984 from the user, such as "Move Expenses File 1 to the 'Documents' folder." Based on the voice input 984 and the context information, the digital assistant determines that the user's intent is to copy or move Expenses File 1 from its current folder to the "Documents" folder. In accordance with the user's intent, the digital assistant instantiates a process (e.g., an object management process) to copy or move Expenses File 1 from its current folder to the "Documents" folder. In some examples, the digital assistant provides spoken output 982, such as "OK, moving Expenses File 1 to the 'Documents' folder." In some examples, the digital assistant further provides text corresponding to spoken output 982 on user interface 928.

As previously described, in some examples, the user's voice input may not include a keyword indicating whether the user intends to perform the task using a search process or an object management process. Referring to fig. 10A-10B, in some embodiments, the digital assistant determines that a search process is not required to perform the task. Based on the determination, the digital assistant provides a spoken output asking the user to select a search process or an object management process. For example, as shown in fig. 10A, the digital assistant receives a voice input 1052 from the user, such as "Show me all the files named 'Expenses'." Based on the speech input 1052 and the context information, the digital assistant determines that the user's intent is to display all files associated with the word "Expenses." The digital assistant further determines, in accordance with the user's intent, that the task can be performed by either a search process or an object management process, and thus a search process is not required. In some examples, the digital assistant provides spoken output 1072, such as "Do you want to search using the search process or the object management process?" In some examples, the digital assistant receives a voice input 1054 from the user, such as "the object management process." The voice input 1054 thus indicates that the user's intent is to perform the task using the object management process. In accordance with this selection, the digital assistant instantiates the object management process represented by the affordance 1030 to search all files associated with the word "Expenses." As shown in fig. 10B, similar to those described above, as a result of the search, the digital assistant displays a user interface 1032 (e.g., a snippet or window) that provides a folder containing files associated with the word "Expenses." Similar to those described above, using the user interface 1032, the digital assistant instantiates the object management process to perform one or more additional tasks (e.g., copy, edit, view, move, compress, etc.) with respect to the files.

Referring to fig. 11A and 11B, in some embodiments, the digital assistant identifies contextual information and determines a user intent based on the contextual information and the user's speech input. As shown in fig. 11A, a digital assistant represented by affordance 1140 or 1141 receives a voice input 1152, such as "Open the Keynote presentation file I created last night." In response to receiving the voice input 1152, the digital assistant identifies contextual information, such as a history of user interactions with the user device, metadata associated with files recently processed by the user, and so forth. For example, the digital assistant identifies metadata such as the date, time, and type of files that the user worked on from 6 p.m. to 2 a.m. yesterday. Based on the identified contextual information and the voice input 1152, the digital assistant determines that the user intent includes searching for a Keynote presentation file associated with metadata indicating that the file was edited between approximately 6 p.m. and 12 a.m. yesterday, and instantiating a process (e.g., a Keynote process) to open the presentation file.
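
A simplified sketch of this step is shown below: the phrase "last night" is mapped to an approximate time window (assumed here to be 6 p.m. to midnight of the previous day), and a file's metadata is tested against that window and the requested file type. The bounds and the metadata field names are assumptions made for the example.

```python
from datetime import datetime, timedelta

def last_night_window(now: datetime):
    """Map the phrase 'last night' to an approximate time range.

    The 6 p.m.-to-midnight interpretation is an assumption for illustration;
    a deployed assistant might learn or configure these bounds differently.
    """
    yesterday = (now - timedelta(days=1)).date()
    start = datetime.combine(yesterday, datetime.min.time()).replace(hour=18)
    end = datetime.combine(now.date(), datetime.min.time())  # midnight today
    return start, end

def matches(file_meta, file_type, window):
    """Check file type and whether the modification time falls inside the window."""
    start, end = window
    return file_meta["type"] == file_type and start <= file_meta["modified"] < end

now = datetime(2016, 7, 2, 9, 0)
window = last_night_window(now)
keynote_file = {"name": "Q3 review.key", "type": "keynote",
                "modified": datetime(2016, 7, 1, 21, 15)}
print(matches(keynote_file, "keynote", window))  # True
```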

In some examples, the context information includes an application name or Identifier (ID). For example, the user's voice input provides "open a Keynote presentation file," find My Pages document, "or" find My HotNewApp document. The context information includes an application name (e.g., Keynote, Pages, HotNewApp) or an application ID. In some examples, the context information is dynamically updated or synchronized. For example, the context information is updated in real-time after the user installs a new application named HotNewApp. In some examples, the digital assistant identifies dynamically updated contextual information and determines user intent. For example, the digital assistant recognizes the application names Keynote, Pages, HotNewApp, or their IDs and determines the user intent from the application name/ID and the speech input.

Based on the user intent, the digital assistant further determines whether the user intent is to perform the task using a search process or an object management process. As previously described, the digital assistant makes such a determination based on one or more keywords included in the speech input, based on whether the task requires a search process, based on a predetermined configuration, and/or based on a user's selection. As shown in fig. 11A, the voice input 1152 does not include a keyword indicating whether the user intends to use a search process or an object management process. Thus, the digital assistant determines that the user intent is to use the object management process based on the predetermined configuration. Upon making the determination, the digital assistant instantiates an object management process to search for a Keynote presentation file associated with metadata indicating that the file was edited between approximately 6 p.m. and 12 a.m. yesterday. In some embodiments, the digital assistant further provides spoken output 1172, such as "OK, looking for the Keynote presentation file you created last night."

In some embodiments, context information is used to perform a task. For example, the application name and/or ID may be used to form a query to search for applications and/or objects (e.g., files) associated with the application name/ID. In some examples, a server (e.g., server 108) forms a query using an application name (e.g., Keynote, Pages, HotNewApp) and/or an ID and sends the query to a digital assistant of the user device. Based on the query, the digital assistant instantiates a search process or object management process to search for one or more applications and/or objects. In some examples, the digital assistant searches only for objects (e.g., files) that correspond to the application name/ID. For example, if the query includes the application name "Pages," the digital assistant searches only the Pages file and does not search for other files (e.g., Word files) that may be opened by the Pages application. In some examples, the digital assistant searches for all objects associated with the application name/ID in the query.
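
The sketch below illustrates, under an assumed application-to-extension mapping, how a query built from an application name could restrict a file search to that application's own file type, matching the narrower "Pages files only" behavior described above; the extension table, query format, and file names are hypothetical.

```python
# Hypothetical mapping from application names to the file extensions they own.
APP_FILE_EXTENSIONS = {
    "Pages": {".pages"},
    "Keynote": {".key"},
    "HotNewApp": {".hna"},  # made-up extension for the example application
}

def build_query(app_name: str, keywords: list) -> dict:
    """Form a simple query dict from an application name and search keywords."""
    return {"app": app_name,
            "extensions": APP_FILE_EXTENSIONS.get(app_name, set()),
            "keywords": [k.lower() for k in keywords]}

def search(files: list, query: dict) -> list:
    """Return files whose extension belongs to the named application and whose
    name contains every keyword; other openable-but-foreign types (e.g., .docx
    for Pages) are deliberately excluded, as in the narrower behavior above."""
    results = []
    for name in files:
        ext = name[name.rfind("."):].lower()
        if ext in query["extensions"] and all(k in name.lower() for k in query["keywords"]):
            results.append(name)
    return results

files = ["Budget.pages", "Budget.docx", "Trip notes.pages"]
print(search(files, build_query("Pages", ["budget"])))  # -> ['Budget.pages']
```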

Referring to fig. 11B and 11C, in some embodiments, the digital assistant provides one or more responses according to a confidence level associated with the results of performing the task. Inaccuracies may exist or arise during the determination of user intent, the determination of whether the user intent is to perform a task using a search process or an object management process, and/or the performance of a task. In some examples, the digital assistant determines a confidence level, such confidence level representing an accuracy of determining the user intent based on the speech input and the contextual information, an accuracy of determining whether the user intent is to perform a task using a search process or an object management process, an accuracy of performing a task using a search process or an object management process, or a combination thereof.

Continuing with the above example shown in fig. 11A, based on a voice input 1152 (such as "Open the Keynote presentation file I created last night"), the digital assistant instantiates an object management process to perform a search for a Keynote presentation file associated with metadata indicating that the file was edited between approximately 6 p.m. and 12 a.m. yesterday. The search results may include a single file that exactly matches the search criteria. That is, the single file is a presentation file that was edited between approximately 6 p.m. and 12 a.m. yesterday. Thus, the digital assistant determines that the accuracy of the search is high, and therefore determines that the confidence level is high. As another example, the search results may include multiple files that partially match the search criteria. For example, no file is a presentation file that was edited between approximately 6 p.m. and 12 a.m. yesterday, or multiple files are presentation files that were edited between approximately 6 p.m. and 12 a.m. yesterday. Thus, the digital assistant determines that the accuracy of the search is medium or low, and therefore determines that the confidence level is medium or low.

As shown in fig. 11B and 11C, the digital assistant provides a response according to the determined confidence level. In some examples, the digital assistant determines whether the confidence level is greater than or equal to a threshold confidence level. In accordance with a determination that the confidence level is greater than or equal to the threshold confidence level, the digital assistant provides a first response. In accordance with a determination that the confidence level is less than the threshold confidence level, the digital assistant provides a second response. In some examples, the second response is different from the first response. As shown in fig. 11B, if the digital assistant determines that the confidence level is greater than or equal to the threshold confidence level, the digital assistant instantiates a process (e.g., a Keynote process represented by the user interface 1142) to enable viewing and editing of the file. In some examples, the digital assistant provides spoken output (such as "Here is the presentation file you created last night"), and the text of the spoken output is displayed in the user interface 1143. As shown in fig. 11C, if the digital assistant determines that the confidence level is less than the threshold confidence level, the digital assistant displays a user interface 1122 (e.g., a snippet or window) that provides a list of candidate files. Each candidate file may partially satisfy the search criteria. In some embodiments, the threshold confidence level may be predetermined and/or dynamically updated based on user preferences, historical accuracy, and the like. In some examples, the digital assistant further provides spoken output 1174 (such as "Here are all the presentation files created last night"), and displays text corresponding to spoken output 1174 on user interface 1122.
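
A toy illustration of confidence-dependent responses is given below: a single exact match yields a score above an assumed threshold and the file is opened directly, while weaker matches fall below it and a candidate list is shown instead. The scoring values and the 0.75 threshold are arbitrary stand-ins for whatever a real assistant would compute across intent inference and task execution.

```python
def confidence_from_matches(num_exact: int, num_partial: int) -> float:
    """Toy scoring: one exact match gives high confidence, otherwise lower."""
    if num_exact == 1:
        return 0.9
    if num_exact > 1 or num_partial > 0:
        return 0.5
    return 0.2

def respond(candidates_exact, candidates_partial, threshold=0.75):
    """Return a first response (open the file) above the threshold,
    or a second response (show candidates) below it."""
    level = confidence_from_matches(len(candidates_exact), len(candidates_partial))
    if level >= threshold:
        return ("open", candidates_exact[0])
    return ("show_candidates", candidates_exact + candidates_partial)

print(respond(["Q3 review.key"], []))              # ('open', 'Q3 review.key')
print(respond([], ["Draft.key", "Old deck.key"]))  # ('show_candidates', [...])
```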

Referring to fig. 11D, in some embodiments, the digital assistant instantiates a process (e.g., a Keynote presentation process) to perform additional tasks. Continuing with the above example, as shown in fig. 11B and 11D, the user may wish to display the presentation file in full-screen mode. The digital assistant receives a voice input 1154 from the user, such as "Set it to full screen." Based on the speech input 1154 and the context information, the digital assistant determines that the user intends to display the presentation file in full-screen mode. In accordance with the user's intent, the digital assistant causes the Keynote presentation process to display the slides in full-screen mode. In some examples, the digital assistant provides spoken output 1176, such as "OK, showing your presentation file in full-screen mode."

Referring to fig. 12A-12C, in some embodiments, the digital assistant determines that the user's intent is to perform multiple tasks based on a single voice input or utterance. The digital assistant further instantiates one or more processes to perform the multiple tasks in accordance with the user intent. For example, as shown in fig. 12A, a digital assistant represented by affordance 1240 or 1241 receives a single speech input 1252, such as "Show me all the photos from my Colorado trip and then send them to my mom." Based on the speech input 1252 and the contextual information, the digital assistant determines that the user intends to perform a first task and a second task. Similar to those described above, the first task is to display photos stored in a folder named "Colorado trip" or to display photos taken by the user during the time period of the Colorado trip. With respect to the second task, the contextual information may indicate that a particular email address stored in the user's contacts is marked as the user's mother. Thus, the second task is to send an email containing the photos associated with the Colorado trip to that particular email address.
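
As a toy illustration of deriving multiple tasks from one utterance, the sketch below simply splits on an explicit connective; genuine multi-command handling is substantially more involved (see the related applications cited below), and the regular expression here is an assumption made only to show that one speech input can yield several task clauses.

```python
import re

def split_into_tasks(utterance: str) -> list:
    """Naively split a single utterance into task clauses on common connectives.

    Robust multi-command parsing requires real linguistic analysis; this is a
    deliberately simplistic stand-in.
    """
    clauses = re.split(r"\s*(?:and then|, then|; )\s*", utterance, flags=re.IGNORECASE)
    return [clause.strip() for clause in clauses if clause.strip()]

utterance = "Show me all the photos from my Colorado trip and then send them to my mom"
print(split_into_tasks(utterance))
# ['Show me all the photos from my Colorado trip', 'send them to my mom']
```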

In some examples, with respect to each task, the digital assistant determines whether the user intends to perform the task using a search process or an object management process. For example, the digital assistant determines that the first task is associated with a search and that the user intent is to perform the first task using an object management process. As shown in fig. 12B, in accordance with a determination that the user's intent is to perform the first task using an object management process, the digital assistant instantiates an object management process to search for photos associated with the user's Colorado trip. In some examples, the digital assistant displays a user interface 1232 (e.g., a snippet or window) that provides a folder including the search results (e.g., photos 1, 2, and 3). As another example, the digital assistant determines that the first task is associated with a search and that the user intent is to perform the first task using a search process. As shown in fig. 12C, in accordance with a determination that the user's intent is to perform the first task using a search process, the digital assistant instantiates a search process to search for photos associated with the user's Colorado trip. In some examples, the digital assistant displays a user interface 1234 (e.g., a snippet or window) that provides the photos and/or links associated with the search results (e.g., photos 1, 2, and 3).

As another example, the digital assistant determines that the second task (e.g., sending an email containing the photos associated with the Colorado trip to a particular email address) is not associated with searching or managing an object. Based on the determination, the digital assistant determines whether the task can be performed using a process available to the user device. For example, the digital assistant determines that the second task can be performed at the user device using an email process. In accordance with the determination, the digital assistant instantiates the process to perform the second task. As shown in fig. 12B and 12C, the digital assistant instantiates an email process and displays user interfaces 1242 and 1244 associated with the email process. The email process attaches the photos associated with the user's Colorado trip to an email message. As shown in fig. 12B and 12C, in some embodiments, the digital assistant further provides spoken outputs 1272 and 1274, such as "Here are the photos from your Colorado trip. I am ready to send the photos to your mom. Proceed?" In some examples, the digital assistant displays text corresponding to spoken output 1274 on user interface 1244. In response to spoken outputs 1272 and 1274, the user provides a speech input, such as "OK." Upon receiving the voice input from the user, the digital assistant causes the email process to send the email message.

Techniques for performing multiple tasks based on multiple commands contained in a single voice input or utterance may be found in related patent applications, such as: U.S. patent application No. 14/724,623, entitled "MULTI-COMMAND SINGLE UTTERANCE INPUT METHOD," filed on May 28, 2015, which claims the benefit of priority of the following applications: U.S. provisional patent application No. 62/005,556, entitled "MULTI-COMMAND SINGLE UTTERANCE INPUT METHOD," filed on May 30, 2014, and U.S. provisional patent application No. 62/129,851, entitled "MULTI-COMMAND SINGLE UTTERANCE INPUT METHOD," filed on March 8, 2015. The contents of each of these applications are hereby incorporated by reference in their entirety.

As shown in fig. 12C and 12D, in some examples, the digital assistant causes the process to perform additional tasks based on additional speech input by the user. For example, the user may wish to send only some of the photos, rather than all of them, in view of the search results displayed in the user interface 1234. The user provides a voice input 1254, such as "Send only photo 1 and photo 2." In some examples, after the user selects the affordance 1235 (e.g., a microphone icon displayed on the user interface 1234), the digital assistant receives the voice input 1254. Based on the voice input 1254 and the contextual information, the digital assistant determines that the user's intent is to send an email with only photo 1 and photo 2 attached. In accordance with the user's intent, the digital assistant causes the email process to remove photo 3 from the email message. In some examples, the digital assistant provides spoken output 1276 (such as "OK, attaching photos 1 and 2 to the email"), and displays text corresponding to spoken output 1276 on user interface 1234.

Referring to fig. 13A, in some embodiments, in accordance with a determination that the task is not associated with a search, the digital assistant determines whether the task is associated with managing at least one object. As shown in fig. 13A, for example, the digital assistant receives a voice input 1352, such as "Create a new folder on the desktop named 'Projects'." Based on the voice input 1352 and the context information, the digital assistant determines that the user's intent is to generate a new folder on the desktop with the folder name "Projects." The digital assistant further determines that the user intent is not associated with a search, but is associated with managing an object (e.g., a folder). Thus, the digital assistant determines that the user intends to perform the task using the object management process.

In some examples, the digital assistant performs the task using the object management process in accordance with a determination that the user intent is to perform the task using the object management process. Performing a task using an object management process may include, for example, creating at least one object (e.g., creating a folder or file), storing at least one object (e.g., storing a folder, file, or communication), and compressing at least one object (e.g., compressing a folder and a file). Performing a task using an object management process may further include, for example, copying or moving at least one object from a first physical or virtual storage device to a second physical or virtual storage device. For example, a digital assistant instantiates an object management process that cuts and pastes files from a user device to a flash drive or cloud drive.

Performing the task using the object management process may further include, for example, deleting at least one object stored in a physical or virtual storage device (e.g., deleting a folder or file) and/or restoring at least one object stored in a physical or virtual storage device (e.g., restoring a deleted folder or deleted file). Performing the task using the object management process may further include, for example, tagging at least one object. In some examples, the tag of an object may be visible or invisible. For example, the digital assistant may cause the object management process to generate a "like" tag for a post on social media, tag an email, tag a file, and so forth. The tag may be made visible by displaying, for example, a mark, a logo, or the like. Tagging may also be performed with respect to the metadata of the object, causing a change in the stored (e.g., in-memory) content of the metadata. The metadata may or may not be visible.

Performing the task using the object management process may further include backing up at least one object, for example, according to a predetermined time period for the backup or at the request of the user. For example, the digital assistant may cause the object management process to instantiate a backup program (e.g., a Time Machine program) to back up folders and files. Backups may be performed automatically according to a predetermined schedule (e.g., once a day, once a week, once a month, etc.) or in response to a user request.

Performing the task using the object management process may also include, for example, sharing the at least one object between one or more electronic devices communicatively connected to the user device. For example, the digital assistant may cause the object management process to share a photo stored on the user device with another electronic device (e.g., the user's smartphone or tablet).
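
The sketch below gathers a few of the object management operations described above into a small dispatch table, using Python's standard library; the action names and the idea of routing actionable intents through such a table are illustrative assumptions, not the object management process itself.

```python
import shutil
from pathlib import Path

def create_folder(path: str) -> Path:
    """Create a folder (and any missing parents) if it does not already exist."""
    p = Path(path)
    p.mkdir(parents=True, exist_ok=True)
    return p

def move_object(src: str, dst_dir: str) -> Path:
    """Move a file or folder into a destination directory."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(src), str(dst / Path(src).name)))

def compress_objects(folder: str, archive_name: str) -> str:
    """Compress a folder into a .zip archive and return the archive path."""
    return shutil.make_archive(archive_name, "zip", folder)

# A dispatch table resembling how an object management process might route
# actionable intents to concrete operations (names are hypothetical).
OBJECT_MANAGEMENT_ACTIONS = {
    "create_folder": create_folder,
    "move_object": move_object,
    "compress_objects": compress_objects,
}

if __name__ == "__main__":
    new_folder = OBJECT_MANAGEMENT_ACTIONS["create_folder"]("Desktop_demo/Projects")
    print(f"Created {new_folder}")
```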

As shown in fig. 13B, in accordance with a determination that the user intent is to perform a task using an object management process, the digital assistant performs the task using the object management process. For example, the digital assistant instantiates an object management process to generate a folder named "Projects" on the desktop of the user interface 1310. In some examples, the digital assistant may cause the object management process to further open the folder, either automatically or in response to additional user input. For example, the digital assistant provides spoken output 1372, such as "OK, I have created a folder named 'Projects' on the desktop. Do you want to open it?" The user provides a voice input 1374, such as "Yes." In response to the user's voice input 1374, the digital assistant causes the object management process to open the "Projects" folder and display the user interface 1332 corresponding to the "Projects" folder.

Referring to fig. 13C, in some embodiments, the digital assistant provides one or more affordances that enable a user to manipulate the results of performing a task using a search process or an object management process. The one or more affordances include, for example, an edit button, a cancel button, a redo button, an undo button, and so forth. For example, as shown in fig. 13C, after generating a folder named "Projects" on the desktop, the digital assistant provides a user interface 1334 that displays an edit button 1336A, an undo button 1336B, and a redo button 1336C. In some examples, edit button 1336A enables the user to edit one or more aspects of the object (e.g., edit the name of the "Projects" folder); undo button 1336B enables the user to reverse the last task performed by the object management process (e.g., delete the "Projects" folder); and redo button 1336C enables the user to repeat the last task performed by the object management process (e.g., create another folder using the object management process). It should be appreciated that the digital assistant may provide any desired affordance to enable a user to perform any manipulation of the results of performing a task using a search process or an object management process.

As previously described, the digital assistant can determine whether the user intends to perform a task using a search process or an object management process. In some examples, the digital assistant determines that the user intent is not associated with either the search process or the object management process. For example, the user provides a voice input such as "Begin dictation." The digital assistant determines that the dictation task is not associated with searching. In some examples, in accordance with a determination that the task is not associated with searching, the digital assistant further determines whether the task is associated with managing at least one object. For example, the digital assistant determines that the dictation task is also not associated with managing an object, such as copying, moving, or deleting a file, folder, or email. In some examples, in accordance with a determination that the task is not associated with managing an object, the digital assistant determines whether the task can be performed using a process available to the user device. For example, the digital assistant determines that the dictation task can be performed using a dictation process available to the user device. In some examples, the digital assistant initiates a conversation with the user regarding performing the task using a process available to the user device. For example, the digital assistant provides spoken output such as "OK, starting dictation" or "Do you want to dictate into the presentation file you are currently working on?" After providing the spoken output, the digital assistant receives a response from the user, e.g., confirming that the user intent is to dictate into the presentation file the user is currently working on.

5. Exemplary functionality of the digital assistant - continuity

Fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E illustrate functionality for performing, through a digital assistant, a task at a user device or a first electronic device using remotely located content. In some examples, a digital assistant system (e.g., digital assistant system 700) is implemented by a user device (e.g., devices 1400, 1500, 1600, and 1700) according to various examples. In some examples, a user device, a server (e.g., server 108), or a combination thereof may implement the digital assistant system (e.g., digital assistant system 700). For example, the user device may be implemented using the device 104, 200, or 400. In some examples, the user device may be a laptop computer, a desktop computer, or a tablet computer. The user device may operate in a multitasking environment, such as a desktop computer environment.

Referring to fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E, in some examples, a user device (e.g., devices 1400, 1500, 1600, and 1700) provides various user interfaces (e.g., user interfaces 1410, 1510, 1610, and 1710). Similar to those described above, the user device displays the various user interfaces on the display, and the various user interfaces enable a user to instantiate one or more processes (e.g., a movie process, a photo process, a Web browsing process).

As shown in fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E, similar to those described above, a user device (e.g., devices 1400, 1500, 1600, and 1700) displays affordances (e.g., affordances 1440, 1540, 1640, and 1740) on a user interface (e.g., user interfaces 1410, 1510, 1610, and 1710) to instantiate a digital assistant service. Similar to those described above, in some examples, the digital assistant is instantiated in response to receiving a predetermined phrase. In some examples, a digital assistant is instantiated in response to receiving a selection of an affordance.

Referring to fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E, in some embodiments, the digital assistant receives one or more speech inputs from a user, such as speech inputs 1452, 1454, 1456, 1458, 1552, 1554, 1556, 1652, 1654, 1656, 1752, and 1756. A user can provide various speech inputs for the purpose of performing tasks using remotely located content, e.g., at a user device (e.g., devices 1400, 1500, 1600, and 1700) or at a first electronic device (e.g., electronic devices 1420, 1520, 1530, 1522, 1532, 1620, 1622, 1630, 1720, and 1730). Similar to those described above, in some examples, the digital assistant can receive voice input directly from the user at the user device or indirectly through another electronic device communicatively connected to the user device.

Referring to fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E, in some embodiments, a digital assistant identifies contextual information associated with the user device. The contextual information includes, for example, user-specific data, sensor data, and user device configuration data. In some examples, the user-specific data includes log information indicating user preferences, a history of user interactions with the user devices (e.g., devices 1400, 1500, 1600, and 1700) and/or electronic devices communicatively connected to the user devices, and the like. For example, the user-specific data indicates that the user recently took a self-portrait photograph (a "selfie") using the electronic device 1420 (e.g., a smartphone), or that the user has recently accessed podcasts, webcasts, movies, songs, audiobooks, and the like. In some examples, the sensor data includes various data collected by sensors associated with the user device or other electronic devices. For example, the sensor data includes GPS location data that indicates the physical location of the user device, or of an electronic device communicatively connected to the user device, at any point in time or during any period of time. For example, the sensor data indicates that a photograph stored in the electronic device 1420 was taken in Hawaii. In some examples, the user device configuration data includes a current or historical device configuration. For example, the user device configuration data indicates that the user device is currently communicatively connected to some electronic devices but disconnected from other electronic devices. The electronic devices include, for example, smartphones, set top boxes, tablets, and the like. As described in more detail below, the contextual information may be used to determine user intent and/or perform one or more tasks.

Referring to fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E, similar to those described above, in response to receiving a speech input, the digital assistant determines a user intent based on the speech input. The digital assistant determines the user intent based on the results of natural language processing. For example, the digital assistant identifies an actionable intent based on the user input and generates a structured query to represent the identified actionable intent. The structured query includes one or more parameters associated with the actionable intent. The one or more parameters may be used to facilitate performance of a task based on the actionable intent. For example, based on a voice input such as "Show the selfie I just took," the digital assistant determines that the actionable intent is to display a photo, and the parameters include selfies that the user has taken within the past few days. In some implementations, the digital assistant further determines the user intent based on the speech input and the contextual information. For example, the contextual information indicates that the user device is communicatively connected to the user's phone using a Bluetooth connection and that a selfie was added to the user's phone two days ago. Accordingly, the digital assistant determines that the user intent is to display a photo, namely the selfie that was added to the user's phone two days ago. Determining user intent based on speech input and contextual information is described in more detail in various examples below.
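The structured-query representation described above can be pictured with a minimal sketch. This is purely illustrative and not the disclosed implementation; the class name, field names, and parameter keys are assumptions introduced for explanation only.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class StructuredQuery:
    """Hypothetical representation of an actionable intent and its parameters."""
    actionable_intent: str
    parameters: dict = field(default_factory=dict)


# Speech input: "Show the selfie I just took"
# Contextual information: a selfie was added to the user's phone two days ago.
query = StructuredQuery(
    actionable_intent="display_photo",
    parameters={
        "photo_type": "selfie",
        "added_after": date.today() - timedelta(days=2),
        "source_device": "users_phone",  # inferred from context (Bluetooth-connected phone)
    },
)
print(query)
```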

In some embodiments, the digital assistant further determines whether the task is performed at the user device or at a first electronic device communicatively connected to the user device, in accordance with the user intent. Various embodiments of the determination are provided in more detail below with respect to fig. 14A-14D, 15A-15D, 16A-16C, and 17A-17E.

Referring to fig. 14A, in some examples, a user device 1400 receives a voice input 1452 from a user to invoke a digital assistant. As shown in fig. 14A, in some examples, the digital assistant is represented by an affordance 1440 or 1441 displayed on the user interface 1410. The speech input 1452 includes, for example, "Hey, Assistant." In response to the voice input 1452, the user device 1400 invokes the digital assistant, causing the digital assistant to actively monitor for subsequent voice inputs. In some examples, the digital assistant provides a spoken output 1472 indicating that it has been invoked. For example, spoken output 1472 includes "Go ahead, I am listening." As shown in fig. 14A, in some examples, user device 1400 is communicatively connected to one or more electronic devices (such as electronic device 1420). The electronic device 1420 may communicate with the user device 1400 using a wired or wireless network. For example, the electronic device 1420 communicates with the user device 1400 using a Bluetooth connection so that voice and data (e.g., audio files and video files) may be exchanged between the two devices.

Referring to fig. 14B, in some examples, the digital assistant receives a voice input 1454, such as "Show me, on this device, the selfie I just took with my phone." Based on the speech input 1454 and/or the contextual information, the digital assistant determines the user intent. For example, as shown in fig. 14B, the contextual information indicates that the user device 1400 is communicatively connected to the electronic device 1420 using a wired or wireless network (e.g., a Bluetooth connection, a Wi-Fi connection, etc.). The contextual information also indicates that the user recently took a selfie, which is stored in the electronic device 1420 as a photo named "selfie0001". Thus, the digital assistant determines that the user intent is to display the photo named selfie0001 stored in the electronic device 1420. Alternatively, the photograph may have been tagged by photo recognition software as containing the user's face and recognized accordingly.

As previously described, the digital assistant further determines, in accordance with the user intent, whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device. In some embodiments, determining whether to perform the task at the user device or the first electronic device is based on one or more keywords included in the speech input. For example, the digital assistant determines that the speech input 1454 includes a keyword or phrase (such as "on this device") indicating that the task is to be performed on the user device 1400. Thus, the digital assistant determines that displaying the photo named selfie0001 stored in the electronic device 1420 is to be performed on the user device 1400. The user device 1400 and the electronic device 1420 are different devices. For example, user device 1400 may be a laptop computer, while electronic device 1420 may be a phone.
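A keyword check of this kind could be sketched as follows. The phrase lists and device labels are illustrative assumptions, not the actual vocabulary used by the assistant.

```python
# Hypothetical keyword-based routing: decide where a task should be performed
# based on phrases found in the speech input.
LOCAL_PHRASES = ("on this device", "at this device", "here")
REMOTE_PHRASES = {
    "on my television": "television",
    "on my tv": "television",
    "on my phone": "phone",
    "on my tablet": "tablet",
}


def determine_target_device(utterance: str, default_device: str = "user_device") -> str:
    text = utterance.lower()
    if any(phrase in text for phrase in LOCAL_PHRASES):
        return "user_device"
    for phrase, device in REMOTE_PHRASES.items():
        if phrase in text:
            return device
    return default_device  # fall back to contextual information elsewhere


print(determine_target_device("Show me, on this device, the selfie I just took with my phone"))
# -> "user_device"
```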

In some embodiments, the digital assistant further determines whether the content associated with performing the task is located remotely. Content is located remotely if, at or around the time the digital assistant determines which device is to perform the task, at least a portion of the content for performing the task is not stored on the device determined to perform the task. For example, as shown in fig. 14B, when the digital assistant of the user device 1400 determines, or is about to determine, that the user intent is to display the photo named selfie0001 on the user device 1400, the photo named selfie0001 is not stored on the user device 1400 but is instead stored on the electronic device 1420 (e.g., a smartphone). Thus, the digital assistant determines that the photograph is located remotely from the user device 1400.

As shown in fig. 14B, in some embodiments, in accordance with a determination that the task is to be performed at the user device and that the content for performing the task is located remotely, the digital assistant of the user device receives the content for performing the task. In some examples, the digital assistant of the user device 1400 receives at least a portion of the content stored in the electronic device 1420. For example, to display the photo named selfie0001, the digital assistant of user device 1400 sends a request to electronic device 1420 to obtain the photo named selfie0001. The electronic device 1420 receives the request and, in response, transmits the photo named selfie0001 to the user device 1400. The digital assistant of user device 1400 then receives the photo named selfie0001.
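The request/response exchange described above might be sketched as below. The transport, message format, hostname, and port are assumptions for illustration only; the actual devices could use any suitable communication channel.

```python
import json
import socket


def request_remote_content(host: str, port: int, object_name: str) -> bytes:
    """Hypothetical fetch of remotely located content (e.g., 'selfie0001')
    from a communicatively connected device over a socket."""
    request = json.dumps({"action": "get_object", "name": object_name}).encode()
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(request)
        chunks = []
        while chunk := conn.recv(4096):  # read until the peer closes the stream
            chunks.append(chunk)
    return b"".join(chunks)


# Example (assumes the connected device exposes such a service):
# photo_bytes = request_remote_content("electronic-device-1420.local", 9000, "selfie0001")
```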

As shown in fig. 14B, in some embodiments, the digital assistant provides a response at the user device after receiving the remotely located content. In some examples, providing the response includes performing the task using the received content. For example, the digital assistant of the user device 1400 displays a user interface 1442 (e.g., a snippet or window) that provides a view 1443 of the photo named selfie0001. View 1443 may be a preview (e.g., a thumbnail), an icon, or a full view of the photo named selfie0001.

In some examples, providing the response includes providing a link associated with the task to be performed at the user device. The link enables instantiation of a process. As previously described, instantiating a process includes invoking a process that is not already running. If at least one instance of the process is running, instantiating the process includes executing an existing instance of the process or generating a new instance of the process. As shown in FIG. 14B, user interface 1442 may provide a link 1444 associated with the view 1443 of the photo named selfie0001. For example, link 1444 enables instantiation of a photo process to view a full representation of the photo or to edit the photo. For example, link 1444 is displayed alongside view 1443. As another example, view 1443 itself may include or incorporate link 1444 such that selecting view 1443 instantiates the photo process.

In some embodiments, providing a response includes providing one or more affordances that enable a user to further manipulate the results of performing the task. As shown in fig. 14B, in some examples, the digital assistant provides affordances 1445 and 1446 on a user interface 1442 (e.g., a snippet or window). The affordance 1445 may include a button for adding a photo to the album, and the affordance 1446 may include a button for canceling the view 1443 of the photo. The user may select one or both of the affordances 1445 and 1446. For example, in response to selecting affordance 1445, the photo process adds a photo associated with view 1443 to the album. For example, in response to selecting the affordance 1446, the photo process removes the view 1443 from the user interface 1442.

In some embodiments, providing the response includes providing spoken output in accordance with the task to be performed at the user device. As shown in FIG. 14B, the digital assistant represented by affordance 1440 or 1441 provides spoken output 1474, such as "This is the last selfie on your phone."

Referring to fig. 14C, in some examples, based on a single speech input and the contextual information, the digital assistant determines that the user intends to perform multiple tasks. As shown in fig. 14C, the digital assistant receives a voice input 1456 such as "Show me, on this device, the selfie I just took with my phone, and set it as my wallpaper." Based on the voice input 1456 and the contextual information, the digital assistant determines that the user intent is to perform a first task of displaying the photo named selfie0001 stored on the electronic device 1420, and to perform a second task of setting the photo named selfie0001 as wallpaper. Thus, based on the single speech input 1456, the digital assistant determines that the user intends to perform multiple tasks.

In some embodiments, the digital assistant determines whether the plurality of tasks are to be performed at the user device or at an electronic device communicatively connected to the user device. For example, using the keyword phrase "this device" included in the speech input 1456, the digital assistant determines that the multiple tasks are to be performed on the user device 1400. Similar to those described above, the digital assistant further determines whether the content for performing at least one of the tasks is located remotely. For example, the digital assistant determines that the content for performing at least the first task (e.g., displaying the photo named selfie0001) is located remotely. In some embodiments, in accordance with a determination that the multiple tasks are to be performed at the user device and that the content for performing at least one task is located remotely, the digital assistant requests the content from another electronic device (e.g., electronic device 1420), receives the content for performing the respective task, and provides a response at the user device.

In some embodiments, providing the response includes performing the plurality of tasks. For example, as shown in FIG. 14C, providing the response includes performing the first task of displaying a view 1449 of the photo named selfie0001, and performing the second task of setting the photo named selfie0001 as wallpaper. In some examples, the digital assistant automatically sets the wallpaper to the photo named selfie0001 using a desktop settings configuration process. In some examples, the digital assistant provides a desktop settings link 1450 enabling the user to manually configure the wallpaper using the photo named selfie0001. For example, the user may select the link to desktop settings 1450 using an input device such as a mouse, stylus, or finger. Upon receiving a selection of the link to desktop settings 1450, the digital assistant initiates the desktop settings configuration process, enabling the user to select the photo named selfie0001 and set it as the wallpaper for user device 1400.

As shown in fig. 14C, in some examples, the digital assistant initiates a dialog with the user and facilitates configuring the wallpaper in response to receiving voice input from the user. For example, the digital assistant provides spoken output 1476, such as "This is the last selfie from your phone. Set it as your wallpaper?" The user provides a voice input, such as "OK." Upon receiving the voice input, the digital assistant instantiates the desktop settings configuration process to set the wallpaper to the photo named selfie0001.

As previously described, in some examples, the digital assistant determines the user intent based on the speech input and the contextual information. Referring to fig. 14D, in some examples, the speech input may not include information sufficient to determine the user intent. For example, the voice input may not indicate the location of the content for performing the task. As shown in fig. 14D, the digital assistant receives a voice input 1458, such as "Show me the selfie I just took." The voice input 1458 does not include a keyword indicating where the photo to be displayed is located. Thus, the user intent may not be determinable based solely on the speech input 1458. In some examples, the digital assistant determines the user intent based on the speech input 1458 and the contextual information. For example, based on the contextual information, the digital assistant determines that the user device 1400 is communicatively connected to the electronic device 1420. In some examples, the digital assistant instantiates a search process to search for photos recently taken by the user on the user device 1400 and on the electronic device 1420. Based on the search results, the digital assistant determines that the photo named selfie0001 is stored in the electronic device 1420. Thus, the digital assistant determines that the user intent is to display the photo named selfie0001 located on the electronic device 1420. In some examples, if the user intent cannot be determined based on the speech input and the contextual information, the digital assistant may initiate a dialog with the user to further clarify the user intent.

As shown in fig. 14D, in some examples, the voice input may not include a keyword indicating whether the task is to be performed at the user device or at an electronic device communicatively connected to the user device. For example, the voice input 1458 does not indicate whether the task of displaying the selfie is to be performed on the user device 1400 or on the electronic device 1420. In some examples, the digital assistant determines whether the task is to be performed at the user device or at the electronic device based on the contextual information. As one example, the contextual information indicates that the digital assistant received the speech input 1458 on the user device 1400 rather than on the electronic device 1420. Thus, the digital assistant determines that the task of displaying the selfie is to be performed on the user device 1400. As another example, the contextual information indicates that, in accordance with user preferences, photos are to be displayed on the electronic device 1420. Thus, the digital assistant determines that the task of displaying the selfie is to be performed on the electronic device 1420. It should be appreciated that the digital assistant can determine whether a task is to be performed at the user device or at the electronic device based on any contextual information.
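When the speech input carries no device keyword, a context-based fallback such as the following sketch could apply. The field names and precedence order are illustrative assumptions, not the disclosed logic.

```python
def resolve_target_device(context: dict) -> str:
    """Hypothetical fallback when the utterance names no device:
    prefer an explicit user preference, else the device that heard the input."""
    if preferred := context.get("preferred_display_device"):
        return preferred
    return context.get("input_received_on", "user_device")


print(resolve_target_device({"input_received_on": "user_device_1400"}))
# -> "user_device_1400"
print(resolve_target_device({"input_received_on": "user_device_1400",
                             "preferred_display_device": "electronic_device_1420"}))
# -> "electronic_device_1420"
```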

Referring to fig. 15A, in some embodiments, the digital assistant determines that the task is to be performed on an electronic device (e.g., electronic device 1520 and/or 1530) that is communicatively connected to the user device (e.g., user device 1500) and determines that the content is located remotely from that electronic device. As shown in fig. 15A, in some examples, the digital assistant receives a voice input 1552, such as "Play this movie on my television." As previously described, the digital assistant can determine the user intent based on the speech input 1552 and the contextual information. For example, the contextual information indicates that the user interface 1542 is displaying a movie named abc.mov. Thus, the digital assistant determines that the user intent is to play the movie named abc.mov.

The digital assistant further determines, in accordance with the user intent, whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device. In some embodiments, determining whether to perform the task at the user device or the first electronic device is based on one or more keywords included in the speech input. For example, speech input 1552 includes the phrase "on my television." In some examples, the contextual information indicates that the user device 1500 is connected to the set top box 1520 and/or the television 1530 using, for example, a wired connection, a Bluetooth connection, or a Wi-Fi connection. Thus, the digital assistant determines that the task of playing the movie named abc.mov is to be performed on set top box 1520 and/or television 1530.

In some embodiments, the digital assistant further determines whether the content associated with performing the task is located remotely. As previously described, content is located remotely if, at or around the time the digital assistant determines which device is to perform the task, at least a portion of the content for performing the task is not stored on the device determined to perform the task. For example, as shown in fig. 15A, when the digital assistant of user device 1500 determines, or is about to determine, that the movie abc.mov is to play on set top box 1520 and/or television 1530, at least a portion of the movie abc.mov is stored on user device 1500 (e.g., a laptop computer) and/or a server (not shown) and is not stored on set top box 1520 and/or television 1530. Thus, the digital assistant determines that the movie abc.mov is located remotely from set top box 1520 and/or television 1530.

Referring to fig. 15B, in accordance with a determination that the task is to be performed on the first electronic device (e.g., set top box 1520 and/or television 1530) and that the content for performing the task is located remotely from the first electronic device, the digital assistant of the user device provides the content to the first electronic device to perform the task. For example, to play the movie abc.mov on set top box 1520 and/or television 1530, the digital assistant of user device 1500 transmits at least a portion of the movie abc.mov to set top box 1520 and/or television 1530.

In some examples, the digital assistant of the user device causes at least a portion of the content to be provided to the first electronic device from another electronic device (e.g., a server) to perform the task, rather than providing such content from the user device. For example, the movie abc.mov is stored on a server (not shown) rather than on the user device 1500. In that case, the digital assistant of user device 1500 causes at least a portion of the movie named abc.mov to be transmitted from the server to set top box 1520 and/or television 1530. In some examples, the content for performing the task is provided to the set top box 1520, which then transmits the content to the television 1530. In some examples, the content for performing the task is provided directly to the television 1530.

As shown in fig. 15B, in some examples, after the content is provided to the first electronic device (e.g., set top box 1520 and/or television 1530), the digital assistant of user device 1500 provides a response on user device 1500. In some examples, providing the response includes causing the task to be performed at the set top box 1520 and/or television 1530 using the content. For example, the digital assistant of user device 1500 sends a request to set top box 1520 and/or television 1530 to initiate a multimedia process to play the movie abc.mov. In response to the request, set top box 1520 and/or television 1530 initiates a multimedia process to play the movie abc.mov.

In some examples, the task to be performed on the first electronic device (e.g., set top box 1520 and/or television 1530) is a continuation, at the first electronic device, of a task partially performed at the user device. For example, as shown in fig. 15A and 15B, the digital assistant of user device 1500 has initiated a multimedia process of user device 1500 to play a portion of the movie abc.mov on user device 1500. In accordance with a determination that the user intent is to play the movie abc.mov on the first electronic device (e.g., the set top box 1520 and/or the television 1530), the digital assistant of the user device 1500 causes the first electronic device to continue playing the remainder of the movie abc.mov rather than starting playback from the beginning. Thus, the digital assistant of user device 1500 enables the user to watch the movie continuously.
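One way to picture this hand-off is a sketch that transfers the current playback position so the first electronic device resumes rather than restarts. Every name below is an assumption introduced for illustration, not the actual protocol.

```python
from dataclasses import dataclass


@dataclass
class PlaybackState:
    media_name: str
    position_seconds: float  # where playback stopped on the user device


def hand_off_playback(state: PlaybackState, send_to_device) -> None:
    """Hypothetical continuity hand-off: ask the target device to resume
    the same media from the current position instead of from the start."""
    send_to_device({
        "action": "play",
        "media": state.media_name,
        "start_at": state.position_seconds,
    })


# Example: the laptop stopped abc.mov at 00:12:34 and the TV continues from there.
hand_off_playback(PlaybackState("abc.mov", 754.0), send_to_device=print)
```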

As shown in FIG. 15B, in some embodiments, providing the response includes providing one or more affordances that enable the user to further manipulate the results of performing the task. As shown in fig. 15B, in some examples, the digital assistant provides affordances 1547 and 1548 on a user interface 1544 (e.g., a snippet or window). The affordance 1547 may be a button for canceling playback of the movie abc.mov on the first electronic device (e.g., the set top box 1520 and/or the television 1530). The affordance 1548 may be a button for pausing or resuming playback of the movie abc.mov on the first electronic device. The user may select the affordance 1547 or 1548 using an input device such as a mouse, stylus, or finger. For example, upon receiving a selection of the affordance 1547, the digital assistant causes playback of the movie abc.mov to stop on the first electronic device. In some examples, the digital assistant also causes playback of the movie abc.mov to resume on the user device 1500 after playback is stopped at the first electronic device. For example, upon receiving a selection of the affordance 1548, the digital assistant causes playback of the movie abc.mov to pause or resume on the first electronic device.

In some embodiments, providing the response includes providing a spoken output in accordance with the task to be performed at the first electronic device. As shown in FIG. 15B, the digital assistant represented by affordance 1540 or 1541 provides spoken output 1572, such as "Your movie is playing on your television."

As previously described, in accordance with a determination that a task is to be performed at a first electronic device and that the content for performing the task is located remotely from the first electronic device, the digital assistant provides the content for performing the task to the first electronic device. Referring to fig. 15C, the content for performing the task may include, for example, a document (e.g., document 1560) or location information. For example, the digital assistant of user device 1500 receives a voice input 1556, such as "Open this PDF on my tablet." The digital assistant determines that the user intent is to perform the task of displaying the document 1560 and that the task is to be performed on a tablet 1532 communicatively connected to the user device 1500. Thus, the digital assistant provides the document 1560 to the tablet 1532 for display. As another example, the digital assistant of user device 1500 receives a voice input 1554, such as "Send this location to my phone." The digital assistant determines that the user intent is to perform a navigation task using the location information and that the task is to be performed on a phone 1522 (e.g., a smartphone) communicatively connected to the user device 1500. Thus, the digital assistant provides the location information (e.g., 1234 Main Street) to the phone 1522 to perform the navigation task.

As previously described, in some examples, the digital assistant provides a response at the user device after providing the content to the first electronic device for performing the task. In some embodiments, providing the response includes causing the task to be performed at the first electronic device. For example, as shown in fig. 15D, the digital assistant of user device 1500 transmits a request to phone 1522 to perform the task of navigating to 1234 Main Street. The digital assistant of user device 1500 further transmits a request to tablet 1532 to perform the task of displaying document 1560. In some examples, providing the response at the user device includes providing spoken output in accordance with the task to be performed at the first electronic device. As shown in fig. 15D, the digital assistant provides spoken output 1574 such as "Displaying the PDF on your tablet" and spoken output 1576 such as "Navigating to 1234 Main Street on your phone."

As previously described, in some examples, the voice input may not include a keyword indicating whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device. Referring to fig. 16A, for example, the digital assistant receives a speech input 1652, such as "Play this movie." Voice input 1652 does not indicate whether the task of playing the movie is to be performed on user device 1600 or on a first electronic device (e.g., set top box 1620 and/or television 1630, phone 1622, or tablet 1632).

In some embodiments, to determine whether to perform the task at the user device or at the first electronic device, the digital assistant of the user device determines whether performing the task at the user device satisfies performance criteria. Performance criteria help evaluate how well a task would be performed. For example, as shown in fig. 16A, the digital assistant determines that the user intent is to perform the task of playing the movie abc.mov. Performance criteria for playing a movie include, for example, quality criteria (e.g., 480p, 720p, 1080p), smoothness criteria (e.g., no lag or buffering), screen size criteria (e.g., a minimum screen size of 48 inches), sound criteria (e.g., stereo sound, number of speakers), and so forth. The performance criteria may be preconfigured and/or dynamically updated. In some examples, the performance criteria are determined based on contextual information such as user-specific data (e.g., user preferences), device configuration data (e.g., the screen resolution and size of the electronic device), and the like.
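Performance criteria of this kind could be represented as a simple threshold check, as in the sketch below. The thresholds echo the examples in the text, but the data structures and field names are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class DeviceCapabilities:
    resolution_p: int      # vertical resolution, e.g. 720 or 1080
    screen_inches: float
    speaker_count: int


@dataclass
class PerformanceCriteria:
    min_resolution_p: int = 1080
    min_screen_inches: float = 48.0
    min_speakers: int = 2


def meets_criteria(device: DeviceCapabilities, criteria: PerformanceCriteria) -> bool:
    """Return True if performing the task on this device satisfies the criteria."""
    return (device.resolution_p >= criteria.min_resolution_p
            and device.screen_inches >= criteria.min_screen_inches
            and device.speaker_count >= criteria.min_speakers)


tv = DeviceCapabilities(resolution_p=1080, screen_inches=52, speaker_count=8)
laptop = DeviceCapabilities(resolution_p=1080, screen_inches=15.4, speaker_count=2)
print(meets_criteria(tv, PerformanceCriteria()))      # True
print(meets_criteria(laptop, PerformanceCriteria()))  # False (screen too small)
```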

In some examples, the digital assistant of user device 1600 determines that performing the task at the user device satisfies the performance criteria. For example, as shown in fig. 16A, user device 1600 may have a screen resolution, screen size, and sound configuration that meet the performance criteria for playing the movie abc.mov. In accordance with a determination that performing the task at the user device 1600 satisfies the performance criteria, the digital assistant determines that the task is to be performed at the user device 1600.

In some examples, the digital assistant of user device 1600 determines that performing the task at the user device does not meet the performance criteria. For example, user device 1600 may not have a screen size, resolution, and/or sound configuration that meets the performance criteria for playing the movie abc.mov. In some examples, in accordance with a determination that performing the task at the user device does not meet the performance criteria, the digital assistant of the user device 1600 determines whether performing the task at the first electronic device meets the performance criteria. As shown in fig. 16B, the digital assistant of user device 1600 determines that performing the task of playing the movie abc.mov on set top box 1620 and/or television 1630 meets the performance criteria. For example, set top box 1620 and/or television 1630 may have a screen size of 52 inches, a 1080p resolution, and eight connected speakers. Thus, the digital assistant determines that the task is to be performed on set top box 1620 and/or television 1630.

In some examples, the digital assistant of user device 1600 determines that performing the task at the first electronic device does not meet the performance criteria. In accordance with that determination, the digital assistant determines whether performing the task at a second electronic device satisfies the performance criteria. For example, as shown in FIG. 16B, television 1630 may have a screen resolution (e.g., 720p) that does not meet a performance criterion (e.g., 1080p). Thus, the digital assistant determines whether either the phone 1622 (e.g., a smartphone) or the tablet 1632 meets the performance criteria.

In some examples, the digital assistant determines which device provides the best performance for the task. For example, as shown in fig. 16B, the digital assistant evaluates or estimates the performance of playing the movie abc.mov on user device 1600, set top box 1620 and television 1630, phone 1622, and tablet 1632, respectively. Based on the evaluation or estimate, the digital assistant determines whether performing the task on one device (e.g., user device 1600) is better than performing it on another device (e.g., phone 1622) and identifies the device that can achieve the best performance.
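Selecting the device with the best estimated performance could be sketched as a scoring pass over the candidate devices. The weights, capability fields, and candidate names below are purely illustrative assumptions.

```python
# Hypothetical scoring pass: estimate playback quality on each candidate device
# and pick the best one.
CANDIDATES = {
    "user_device_1600": {"resolution_p": 1080, "screen_inches": 15.4, "speakers": 2},
    "television_1630":  {"resolution_p": 720,  "screen_inches": 52.0, "speakers": 8},
    "tablet_1632":      {"resolution_p": 1080, "screen_inches": 12.9, "speakers": 4},
}


def estimated_score(caps: dict) -> float:
    # Weighted mix of resolution, screen size, and speaker count (weights are assumed).
    return (caps["resolution_p"] / 1080) * 0.5 \
         + (caps["screen_inches"] / 52.0) * 0.3 \
         + (caps["speakers"] / 8) * 0.2


best_device = max(CANDIDATES, key=lambda name: estimated_score(CANDIDATES[name]))
print(best_device)  # the candidate with the highest estimated score
```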

As previously described, in some examples, the digital assistant provides a response on user device 1600 in accordance with the device determined to perform the task. In some embodiments, providing the response includes providing a spoken output in accordance with the task to be performed at that device. As shown in fig. 16B, the digital assistant represented by affordance 1640 or 1641 provides spoken output 1672, such as "I will play this movie on your television. Continue?" In some examples, the digital assistant receives a speech input 1654 from the user, such as "OK." In response, the digital assistant causes the movie abc.mov to be played on, for example, set top box 1620 and television 1630, and provides spoken output 1674, such as "Your movie is playing on your television."

In some examples, providing the response includes providing one or more affordances that enable the user to select another electronic device for performing the task. As shown in FIG. 16B, for example, the digital assistant provides affordances 1655A-B (e.g., a cancel button and a tablet button). Affordance 1655A enables the user to cancel playing the movie abc.mov on set top box 1620 and television 1630. Affordance 1655B enables the user to select tablet 1632 to continue playing the movie abc.mov.

Referring to fig. 16C, in some embodiments, to determine the device to be used to perform the task, the digital assistant of user device 1600 initiates a conversation with the user. For example, the digital assistant provides spoken output 1676, such as "Should I play your movie on the television or on the tablet?" The user provides a voice input 1656, such as "On my tablet." Upon receiving the speech input 1656, the digital assistant determines that the task of playing the movie is to be performed on the tablet 1632, which is communicatively connected to the user device 1600. In some examples, the digital assistant further provides spoken output 1678, such as "Your movie is playing on your tablet."

Referring to fig. 17A, in some embodiments, the digital assistant of user device 1700 continues performing a task that was partially performed remotely at a first electronic device. In some embodiments, the digital assistant of the user device continues performing the task using content received from a third electronic device. As shown in fig. 17A, in some examples, the phone 1720 may already be performing a flight booking task using content from a third electronic device (such as a server 1730). For example, the user may already be using the phone 1720 to book a flight on kayak.com, in which case the phone 1720 receives content transmitted from the server 1730 associated with kayak.com. In some examples, the user may be interrupted while booking the flight on the phone 1720 and may wish to continue the booking using the user device 1700. In some examples, the user may wish to continue booking the flight simply because the user device 1700 is more convenient to use. Thus, the user may provide a voice input 1752, such as "Continue the flight booking on Kayak that I started on my phone."

Referring to fig. 17B, upon receiving the voice input 1752, the digital assistant determines that the user intent is to perform a flight booking task. In some examples, the digital assistant further determines, based on the contextual information, that the task is to be performed on the user device 1700. For example, the digital assistant determines that speech input 1752 was received on user device 1700 and thus determines that the task is to be performed on user device 1700. In some examples, the digital assistant further uses contextual information such as user preferences (e.g., the user device 1700 was frequently used for flight bookings in the past) to determine that the task is to be performed on the user device 1700.

As shown in fig. 17B, in accordance with a determination that the task is to be performed at the user device 1700 and that the content for performing the task is located remotely, the digital assistant receives the content for performing the task. In some examples, the digital assistant receives at least a portion of the content from the phone 1720 (e.g., a smartphone) and/or at least a portion of the content from the server 1730. For example, the digital assistant receives data from the phone 1720 indicating the status of the flight booking so that the user device 1700 can continue the booking. In some examples, the data representing the flight booking status is stored on a server 1730 (such as a server associated with kayak.com). In that case, the digital assistant receives the data for continuing the flight booking from the server 1730.

As shown in fig. 17B, after receiving the content from the phone 1720 and/or the server 1730, the digital assistant provides a response on the user device 1700. In some examples, providing the response includes continuing the flight booking task that was partially performed remotely on the phone 1720. For example, the digital assistant displays a user interface 1742 that enables the user to continue booking the flight on kayak.com. In some examples, providing the response includes providing a link associated with the task to be performed on user device 1700. For example, the digital assistant displays a user interface 1742 (e.g., a snippet or window) that provides the current flight booking status (e.g., displays bookable flights). The user interface 1742 also provides a link 1744 (e.g., a link to a Web browser) for continuing the flight booking task. In some embodiments, the digital assistant also provides spoken output 1772, such as "Here is your booking on Kayak. Do you want to continue in your Web browser?"

As shown in fig. 17B and 17C, for example, if the user selects link 1744, the digital assistant instantiates a Web browsing process and displays a user interface 1746 (e.g., a snippet or window) for continuing the flight booking task. In some examples, in response to spoken output 1772, the user provides a voice input 1756 (such as "OK") confirming that the user wishes to continue the flight booking using the Web browser of the user device 1700. Upon receiving the voice input 1756, the digital assistant instantiates the Web browsing process and displays the user interface 1746 (e.g., a snippet or window) for continuing the flight booking task.

Referring to fig. 17D, in some embodiments, the digital assistant of user device 1700 continues performing a task that was partially performed remotely at the first electronic device. In some embodiments, the digital assistant of the user device continues performing the task using content received from the first electronic device rather than from a third electronic device (such as a server). As shown in fig. 17D, in some examples, the first electronic device (e.g., the phone 1720 or the tablet 1732) may already be performing a task. For example, the user may already be using the phone 1720 to compose an email, or may already be using the tablet 1732 to edit a document (e.g., a photo). In some examples, the user is interrupted while using the phone 1720 or the tablet 1732, and/or wishes to continue performing the task using the user device 1700. In some examples, the user may wish to continue performing the task simply because the user device 1700 is more convenient (e.g., it has a larger screen). Thus, the user may provide voice input 1758 (such as "Open the document I was just editing") or voice input 1759 (such as "Open the email draft I was just writing").

Referring to fig. 17D, upon receiving the voice input 1758 or 1759, the digital assistant determines that the user intent is to perform the task of editing the document or composing the email. Similar to those described above, in some examples, the digital assistant further determines, based on the contextual information, that the task is to be performed on the user device 1700 and that the content for performing the task is located remotely. Similar to those described above, in some examples, the digital assistant determines, based on the contextual information (e.g., user-specific data), that the content is located on the remote first electronic device (e.g., on the phone 1720 or the tablet 1732) rather than on a server. As shown in fig. 17D, in accordance with a determination that the task is to be performed at the user device 1700 and that the content for performing the task is located remotely, the digital assistant receives the content for performing the task. In some examples, the digital assistant receives at least a portion of the content from the phone 1720 (e.g., a smartphone) and/or at least a portion of the content from the tablet 1732. After receiving the content from the phone 1720 and/or the tablet 1732, the digital assistant provides a response at the user device 1700, such as displaying a user interface 1748 for the user to continue editing the document, and/or displaying a user interface 1749 for the user to continue composing the email. It should be appreciated that the digital assistant of user device 1700 may also cause the first electronic device to continue performing a task that was partially performed remotely on the user device 1700. For example, a user may be composing an email on user device 1700 and may need to leave. The user provides a voice input such as "Open on my phone the draft email I was just writing." Based on the voice input, the digital assistant determines that the user intent is to continue performing the task on the phone 1720 and that the content is located on the remote user device 1700. In some examples, the digital assistant provides the content to the first electronic device for performing the task and causes the first electronic device to continue performing the task, similar to those described above.

Referring to fig. 17E, in some embodiments, continued performance of a task is based on contextual information shared or synchronized between multiple devices, for example between the user device 1700 and the first electronic device (e.g., phone 1720). As previously described, in some examples, the digital assistant determines the user intent based on the speech input and the contextual information. The contextual information may be stored locally or remotely. For example, as shown in fig. 17E, the user provides a voice input 1760 to the phone 1720, such as "How is the weather in New York?" The digital assistant of phone 1720 determines the user intent, performs the task of obtaining weather information for New York, and displays the weather information for New York on the user interface of phone 1720. The user then provides a voice input 1761 to the user device 1700, such as "How about Los Angeles?" In some examples, the digital assistant of user device 1700 determines the user intent, directly or through a server, using contextual information stored on or shared by the phone 1720. The contextual information includes, for example, user history data associated with the phone 1720, session state, system state, and the like. Both the user history data and the session state indicate that the user is querying weather information. Thus, the digital assistant of user device 1700 determines that the user intent is to obtain weather information for Los Angeles. Based on the user intent, the digital assistant of the user device 1700 receives weather information from, for example, a server, and provides a user interface 1751 on the user device 1700 that displays the weather information.
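The shared-context behavior in this example can be sketched as a small session state carried across devices. The dictionary keys, the parsing of the follow-up utterance, and the intent names are assumptions for illustration only.

```python
# Hypothetical shared session state synchronized between the phone and the laptop.
shared_context = {
    "last_domain": "weather",    # set when the phone answered "How is the weather in New York?"
    "last_intent": "get_weather",
    "last_location": "New York",
}


def resolve_follow_up(utterance: str, context: dict) -> dict:
    """Interpret an elliptical follow-up ("How about Los Angeles?") using shared context."""
    if context.get("last_domain") == "weather":
        location = utterance.strip(" ?").replace("How about ", "")
        return {"intent": "get_weather", "location": location}
    return {"intent": "unknown", "utterance": utterance}


print(resolve_follow_up("How about Los Angeles?", shared_context))
# -> {'intent': 'get_weather', 'location': 'Los Angeles'}
```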

6. Exemplary functionality of the digital assistant - voice-enabled system configuration management

Fig. 18A-18F and 19A-19D illustrate functionality of a digital assistant to provide system configuration information or perform tasks in response to a user request. In some examples, a digital assistant system (e.g., digital assistant system 700) may be implemented by a user device according to various examples. In some examples, a user device, a server (e.g., server 108), or a combination thereof may implement a digital assistant system (e.g., digital assistant system 700). For example, the user device may be implemented using the device 104, 200, or 400. In some examples, the user device is a laptop computer, a desktop computer, or a tablet computer. The user device may operate in a multitasking environment, such as a desktop computer environment.

Referring to fig. 18A-18F and 19A-19D, in some examples, the user device provides various user interfaces (e.g., user interfaces 1810 and 1910). Similar to those described above, the user device displays various user interfaces on the display, and the various user interfaces enable a user to instantiate one or more processes (e.g., system configuration processes).

As shown in fig. 18A-18F and 19A-19D, similar to those described above, the user device displays affordances (e.g., affordances 1840 and 1940) on user interfaces (e.g., user interfaces 1810 and 1910) to facilitate instantiation of a digital assistant service.

Similar to those described above, in some examples, the digital assistant is instantiated in response to receiving a predetermined phrase. In some examples, a digital assistant is instantiated in response to receiving a selection of an affordance.

Referring to fig. 18A-18F and 19A-19D, in some embodiments, the digital assistant receives one or more speech inputs from a user, such as speech inputs 1852, 1854, 1856, 1858, 1860, 1862, 1952, 1954, 1956, and 1958. The user provides various speech inputs for the purpose of managing one or more system configurations of the user device. The system configurations may include audio configurations, date and time configurations, dictation configurations, display configurations, input device configurations, notification configurations, printing configurations, security configurations, backup configurations, application configurations, user interface configurations, and the like. To manage an audio configuration, the voice input may include "Mute my microphone," "Turn the volume up," "Increase the volume by 10%," and so on. To manage a date and time configuration, the voice input may include "What is my time zone?", "Change my time zone to Cupertino time," "Add a clock for the London time zone," and so on. To manage a dictation configuration, the speech input may include "Turn on dictation," "Turn off dictation," "Dictate in Chinese," "Enable advanced commands," and so on. To manage a display configuration, the voice input may include "Brighten my screen," "Increase the contrast by 20%," "Extend my screen to the second monitor," "Mirror my display," and so on. To manage an input device configuration, the voice input may include "Connect my Bluetooth keyboard," "Enlarge my mouse pointer," and so on. To manage a network configuration, the voice input may include "Turn Wi-Fi on," "Turn Wi-Fi off," "Which Wi-Fi network am I connected to?", "Am I connected to my phone?", and so on. To manage a notification configuration, the voice input may include "Turn on 'Do Not Disturb'," "Stop showing me these notifications," "Show only new emails," "No alerts for text messages," and so on. To manage a printing configuration, the voice input may include "Does my printer have enough ink?", "Is my printer connected?", and so on. To manage a security configuration, the voice input may include "Change the password for John's account," "Turn on the firewall," "Disable cookies," and so on. To manage a backup configuration, the voice input may include "Run a backup now," "Set the backup interval to once a month," "Restore the backup from July 4 of last year," and so on. To manage an application configuration, the voice input may include "Change my default Web browser to Safari," "Automatically log in to the Messages application each time I log in," and so on. To manage a user interface configuration, the voice input may include "Change my desktop wallpaper," "Hide the taskbar," "Add Evernote to the taskbar," and so on. Various examples of managing system configurations using voice input are described in more detail below.
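A coarse mapping from configuration-related utterances to configuration domains might look like the sketch below. The keyword lists and domain labels are illustrative assumptions, not the actual vocabulary or routing logic of the assistant.

```python
# Hypothetical keyword-to-domain routing for system-configuration requests.
CONFIG_DOMAINS = {
    "audio":        ["volume", "mute", "microphone"],
    "date_time":    ["time zone", "clock"],
    "dictation":    ["dictation", "dictate"],
    "display":      ["screen", "brightness", "contrast", "monitor"],
    "network":      ["wi-fi", "bluetooth", "connected"],
    "notification": ["notification", "do not disturb", "alerts"],
    "printing":     ["printer", "ink"],
    "security":     ["password", "firewall", "cookie"],
    "backup":       ["backup", "restore"],
}


def classify_configuration_domain(utterance: str) -> str:
    text = utterance.lower()
    for domain, keywords in CONFIG_DOMAINS.items():
        if any(keyword in text for keyword in keywords):
            return domain
    return "unknown"


print(classify_configuration_domain("Turn the volume up by 10%"))  # audio
print(classify_configuration_domain("What is my time zone?"))      # date_time
```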

Similar to those described above, in some examples, the digital assistant receives voice input at the user device directly from the user or indirectly through another electronic device communicatively connected to the user device.

Referring to fig. 18A-18F and 19A-19D, in some embodiments, the digital assistant identifies contextual information associated with the user device. The contextual information includes, for example, user-specific data, sensor data, and user device configuration data. In some examples, the user-specific data includes log information indicating user preferences, a history of user interactions with the user device, and the like. For example, the user-specific data indicates when the user's system was last backed up, or a user preference for a particular Wi-Fi network when multiple Wi-Fi networks are available. In some examples, the sensor data includes various data collected by sensors. For example, the sensor data indicates a printer ink level collected by a printer ink level sensor. In some examples, the user device configuration data includes current and historical device configurations. For example, the user device configuration data indicates that the user device is currently communicatively connected to one or more electronic devices using Bluetooth connections. The electronic devices may include, for example, a smartphone, a set top box, a tablet, and the like. As described in more detail below, the user device may use the contextual information to determine user intent and/or perform one or more processes.

Referring to fig. 18A-18F and 19A-19D, similar to those described above, in response to receiving a speech input, the digital assistant determines a user intent based on the speech input. The digital assistant determines the user intent based on the results of natural language processing. For example, the digital assistant identifies an actionable intent based on the user input and generates a structured query to represent the identified actionable intent. The structured query includes one or more parameters associated with the actionable intent. The one or more parameters may be used to facilitate performance of a task based on the actionable intent. For example, based on a voice input such as "Turn the volume up by 10%," the digital assistant determines that the actionable intent is to adjust the system volume, with a parameter specifying that the volume be set 10% above the current volume. In some embodiments, the digital assistant also determines the user intent based on the speech input and the contextual information. For example, the contextual information may indicate that the current volume of the user device is 50%. Thus, upon receiving the voice input "Turn the volume up by 10%," the digital assistant determines that the user intent is to increase the volume to 60%. Determining user intent based on speech input and contextual information is described in more detail in various examples below.
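The volume example can be expressed as a short parameter-resolution sketch that combines the utterance with the current system state. The function name, regular expression, and state representation are assumptions for illustration only.

```python
import re


def resolve_volume_target(utterance: str, current_volume_pct: int) -> int:
    """Hypothetical resolution of "turn the volume up by 10%" against context
    (the current volume), yielding an absolute target volume."""
    match = re.search(r"(up|down)\s+(?:by\s+)?(\d+)\s*%", utterance.lower())
    if not match:
        return current_volume_pct  # nothing to change
    direction, amount = match.group(1), int(match.group(2))
    delta = amount if direction == "up" else -amount
    return max(0, min(100, current_volume_pct + delta))


print(resolve_volume_target("Turn the volume up by 10%", current_volume_pct=50))  # 60
```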

In some implementations, the digital assistant further determines whether the user intent indicates a request for information or a request to perform a task. Various examples of the determination are provided in more detail below with respect to fig. 18A-18F and 19A-19D.

Referring to fig. 18A, in some examples, the user device displays a user interface 1832 associated with performing a task. For example, the task includes composing a meeting invitation. While composing the meeting invitation, the user may wish to know the time zone of the user device so that the invitation can be composed correctly. In some examples, the user provides speech input 1852 to invoke the digital assistant represented by the affordance 1840 or 1841. Speech input 1852 includes, for example, "Hey, Assistant." The user device receives speech input 1852 and, in response, invokes the digital assistant such that the digital assistant actively monitors for subsequent speech inputs. In some examples, the digital assistant provides a spoken output 1872 indicating that it has been invoked. For example, spoken output 1872 includes "Go ahead, I am listening."

Referring to fig. 18B, in some examples, the user provides a voice input 1854, such as "What is my time zone?" The digital assistant determines that the user intent is to obtain the time zone of the user device. The digital assistant further determines whether the user intent indicates an information request or a request to perform a task. In some examples, determining whether the user intent indicates an information request or a request to perform a task includes determining whether the user intent is to change a system configuration. For example, based on determining that the user intent is to obtain the time zone of the user device, the digital assistant determines that no system configuration is to be changed. Thus, the digital assistant determines that the user intent indicates an information request.
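The decision between an information request and a task request reduces, in this description, to whether the intent would change a system configuration. A sketch of that check follows; the intent names and sets are illustrative assumptions.

```python
# Hypothetical split of configuration-related intents into read-only queries
# (information requests) and state-changing operations (task requests).
READ_ONLY_INTENTS = {"get_time_zone", "get_bluetooth_status", "get_printer_ink_level"}
STATE_CHANGING_INTENTS = {"set_volume", "set_time_zone", "enable_do_not_disturb"}


def classify_request(intent: str) -> str:
    if intent in STATE_CHANGING_INTENTS:
        return "task_request"          # will modify the system configuration
    if intent in READ_ONLY_INTENTS:
        return "information_request"   # only reports the current state
    return "unknown"


print(classify_request("get_time_zone"))  # information_request
print(classify_request("set_volume"))     # task_request
```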

In some embodiments, in accordance with a determination that the user intent indicates an information request, the digital assistant provides a spoken response to the information request. In some examples, the digital assistant obtains the status of one or more system configurations according to the information request and provides a spoken response based on that status. As shown in fig. 18B, the digital assistant determines that the user intent is to obtain the time zone of the user device and that the user intent indicates an information request. Accordingly, the digital assistant obtains the time zone status from the date and time configuration of the user device. The time zone status indicates, for example, that the user device is set to the Pacific time zone. Based on the time zone status, the digital assistant provides spoken output 1874, such as "Your computer is set to Pacific Standard Time." In some examples, the digital assistant further provides a link associated with the information request. As shown in fig. 18B, the digital assistant provides a link 1834 to enable the user to further manage the date and time configuration. In some examples, the user selects link 1834 using an input device (e.g., a mouse). Upon receiving the user selection of link 1834, the digital assistant instantiates a date and time configuration process and displays the associated date and time configuration user interface. Thus, the user may further manage the date and time configuration using that user interface.
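Answering an information request from configuration state, and pairing the spoken reply with a link into the relevant settings pane, could be sketched like this. The state dictionary, the link target, and the function name are assumptions for illustration only.

```python
def answer_time_zone_request(system_config: dict) -> dict:
    """Hypothetical response builder for "What is my time zone?":
    read the current state and pair a spoken reply with a settings link."""
    tz = system_config.get("date_time", {}).get("time_zone", "an unknown time zone")
    return {
        "spoken_output": f"Your computer is set to {tz}.",
        "link": "settings://date-and-time",  # opens the date and time configuration UI
    }


print(answer_time_zone_request({"date_time": {"time_zone": "Pacific Standard Time"}}))
```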

Referring to fig. 18C, in some examples, the user device displays a user interface 1836 associated with performing the task. For example, the task includes playing a video (e.g., abc.). To enhance the experience of watching the video, the user may wish to use a speaker and may want to know whether a Bluetooth speaker is connected. In some examples, the user provides a voice input 1856, such as "is my Bluetooth speaker connected?" The digital assistant determines that the user's intent is to obtain the connection status of the Bluetooth speaker 1820. The digital assistant further determines that obtaining the connection status of the Bluetooth speaker 1820 does not change any system configuration and is therefore an information request.

In some embodiments, in accordance with a determination that the user intent indicates an information request, the digital assistant obtains a state of the system configuration from the information request and provides a spoken response in accordance with the state of the system configuration. As shown in fig. 18C, the digital assistant obtains the connection status from the network configuration of the user device. The connection status indicates, for example, that the user device 1800 is not connected to the Bluetooth speaker 1820. Based on the connection status, the digital assistant provides spoken output 1876, such as "No, it is not connected. You can check for Bluetooth devices in the network configuration." In some examples, the digital assistant further provides a link associated with the information request. As shown in fig. 18C, the digital assistant provides a link 1838 that enables the user to further manage the network configuration. In some examples, the user uses an input device (e.g., a mouse) to select link 1838. Upon receiving a user selection of link 1838, the digital assistant instantiates a network configuration process and displays an associated network configuration user interface. Thus, the user may further manage the network configuration using the network configuration user interface.

Referring to FIG. 18D, in some examples, the user device displays a user interface 1842 associated with performing the task. For example, the task includes viewing and/or editing a document. The user may wish to print the document and may want to know whether the printer 1830 has enough ink for the print job. In some examples, the user provides speech input 1858, such as "does my printer have enough ink?" The digital assistant determines that the user's intent is to obtain the ink level status of the printer 1830. The digital assistant further determines that obtaining the printer ink level status does not change any system configuration and is therefore a request for information.

In some embodiments, in accordance with a determination that the user intent indicates an information request, the digital assistant obtains a state of the system configuration from the information request and provides a spoken response in accordance with the state of the system configuration. As shown in fig. 18D, the digital assistant obtains the printer ink level status from the printing configuration of the user device. The printer ink level status indicates, for example, that the ink level of the printer 1830 is 50%. Based on the ink level status, the digital assistant provides spoken output 1878, such as "Yes, your printer has enough ink. You can also check the printer ink supplies in the printer configuration." In some examples, the digital assistant further provides a link associated with the information request. As shown in FIG. 18D, the digital assistant provides a link 1844 to enable the user to further manage the printer configuration. In some examples, the user selects link 1844 using an input device (e.g., a mouse). Upon receiving a user selection of the link, the digital assistant instantiates a printer configuration process and displays an associated printer configuration user interface. Thus, the user can further manage the printer configuration using the printer configuration user interface.

Referring to FIG. 18E, in some examples, the user device displays a user interface 1846 associated with performing the task. For example, a task includes browsing the internet using a Web browser (e.g., Safari). To browse the internet, a user may wish to learn about the available Wi-Fi networks and select one to connect to. In some examples, the user provides a voice input 1860, such as "what Wi-Fi networks are available?" The digital assistant determines that the user's intent is to obtain a list of available Wi-Fi networks. The digital assistant further determines that obtaining the list of available Wi-Fi networks does not change any system configuration and is therefore an information request.

In some embodiments, in accordance with a determination that the user intent indicates an information request, the digital assistant obtains a state of the system configuration from the information request and provides a spoken response in accordance with the state of the system configuration. As shown in fig. 18E, the digital assistant obtains the status of the currently available Wi-Fi networks from the network configuration of the user device. The status of the currently available Wi-Fi networks indicates that, for example, Wi-Fi network 1, Wi-Fi network 2, and Wi-Fi network 3 are available. In some examples, the status further indicates a signal strength of each of the Wi-Fi networks. The digital assistant displays a user interface 1845 that provides information according to the status. For example, the user interface 1845 provides a list of the available Wi-Fi networks. The digital assistant also provides spoken output 1880, such as "Here is a list of available Wi-Fi networks." In some examples, the digital assistant further provides a link associated with the information request. As shown in fig. 18E, the digital assistant provides a link 1847 to enable the user to further manage the network configuration. In some examples, the user selects link 1847 using an input device (e.g., a mouse). Upon receiving a user selection of link 1847, the digital assistant instantiates a network configuration process and displays an associated network configuration user interface. Thus, the user may further manage the configuration using the network configuration user interface.

Referring to FIG. 18F, in some examples, the user device displays a user interface 1890 associated with performing the task. For example, the task includes preparing a meeting agenda. In preparing the meeting agenda, the user may wish to find a date and time for the meeting. In some examples, the user provides a voice input 1862, such as "find a time on my calendar to hold a meeting next Tuesday morning." The digital assistant determines that the user's intent is to look up an available time slot on the user's calendar on the next Tuesday morning. The digital assistant further determines that looking up the time slot does not change any system configuration and is therefore a request for information.

In some embodiments, in accordance with a determination that the user intent indicates an information request, the digital assistant obtains a state of the system configuration from the information request and provides a spoken response in accordance with the state of the system configuration. As shown in fig. 18F, the digital assistant obtains the status of the user's calendar from the calendar configuration. The status of the user's calendar indicates that, for example, 9 a.m. or 11 a.m. on the next Tuesday is still available. The digital assistant displays a user interface 1891 that provides information according to the status. For example, the user interface 1891 provides the user's calendar around the date and time requested by the user. In some examples, the digital assistant also provides spoken output 1882, such as "It looks like 9 a.m. or 11 a.m. next Tuesday is available." In some examples, the digital assistant further provides a link associated with the information request. As shown in FIG. 18F, the digital assistant provides a link 1849 to enable the user to further manage the calendar configuration. In some examples, the user selects link 1849 using an input device (e.g., a mouse). Upon receiving a user selection of link 1849, the digital assistant instantiates a calendar configuration process and displays an associated calendar configuration user interface. Thus, the user may further manage the configuration using a calendar configuration user interface.

Referring to fig. 19A, a user device displays a user interface 1932 associated with performing a task. For example, the task includes playing a video (e.g., abc.). While playing the video, the user may wish to turn the volume up. In some examples, the user provides a voice input 1952, such as "turn the volume up to maximum." The digital assistant determines that the user's intent is to increase the volume to its maximum level. The digital assistant further determines whether the user intent indicates a request for information or a request to perform a task. For example, based on determining that the user intent is to increase the volume of the user device, the digital assistant determines that some audio configuration is to be changed, and thus the user intent indicates a request to perform a task.

In some embodiments, in accordance with a determination that the user intent indicates a request to perform a task, the digital assistant instantiates a process associated with the user device to perform the task. Instantiating a process includes calling the process if it is not already running. If at least one instance of the process is running, instantiating the process includes executing an existing instance of the process or generating a new instance of the process. For example, instantiating the audio configuration process includes calling the audio configuration process, executing an existing instance of the audio configuration process, or generating a new instance of the audio configuration process. In some examples, instantiating the process includes performing the task using the process. For example, as shown in fig. 19A, in accordance with the user intent to increase the volume to its maximum level, the digital assistant instantiates an audio configuration process to set the volume to its maximum level. In some examples, the digital assistant further provides spoken output 1972, such as "OK, I turned the volume up to the maximum."
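As a rough illustration of what "instantiating" a process can mean here, the sketch below keeps a registry of running instances; it reuses an existing instance when one is running and otherwise creates a new one. This is an illustrative sketch only; the registry, class, and method names are hypothetical and not taken from the patent.

    running_processes: dict = {}

    class AudioConfigurationProcess:
        def set_volume(self, level: int) -> None:
            print(f"Volume set to {level}%")

    def instantiate(name: str, factory, reuse_existing: bool = True):
        instances = running_processes.setdefault(name, [])
        if instances and reuse_existing:
            return instances[0]      # execute an existing instance of the process
        instance = factory()         # call the process / generate a new instance
        instances.append(instance)
        return instance

    # "Turn the volume up to maximum."
    audio = instantiate("audio_configuration", AudioConfigurationProcess)
    audio.set_volume(100)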

Referring to fig. 19B, the user device displays a user interface 1934 associated with performing the task. For example, the task includes viewing or editing a document. The user may wish to reduce the screen brightness to protect his or her eyes. In some examples, the user provides a voice input 1954, such as "set the screen brightness 10% lower." The digital assistant determines the user intent based on the speech input 1954 and the contextual information. For example, the contextual information indicates that the current brightness is set to 90%. Thus, the digital assistant determines that the user's intent is to reduce the brightness level from 90% to 80%. The digital assistant further determines whether the user intent indicates a request for information or a request to perform a task. For example, based on determining that the user intent is to change the screen brightness to 80%, the digital assistant determines that some display configuration is to be changed, and thus the user intent indicates a request to perform a task.

In some embodiments, in accordance with a determination that the user intent indicates a request to perform a task, the digital assistant instantiates a process to perform the task. For example, as shown in fig. 19B, in accordance with the user intent to change the brightness level, the digital assistant instantiates a display configuration process to reduce the brightness level to 80%. In some examples, the digital assistant further provides spoken output 1974, such as "OK, I adjusted the screen brightness to 80%." In some examples, as shown in fig. 19B, the digital assistant provides an affordance 1936 that enables the user to manipulate the result of performing the task. For example, affordance 1936 may be a slider that allows the user to further change the brightness level.
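The relative request in this example ("10% lower") has to be resolved against contextual information. A minimal sketch, assuming the current brightness is available in a context dictionary (all names here are illustrative):

    def resolve_relative_brightness(delta_percent: int, context: dict) -> int:
        current = context["display"]["brightness_percent"]       # e.g., 90
        return max(0, min(100, current - delta_percent))         # clamp to 0-100

    context = {"display": {"brightness_percent": 90}}
    target = resolve_relative_brightness(10, context)
    print(f"OK, I adjusted the screen brightness to {target}%.")  # 80%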

Referring to fig. 19C, the user device displays a user interface 1938 associated with performing the task. For example, the task includes providing one or more notifications. A notification may include an email alert, a message, a reminder, or the like. In some examples, the notifications are provided in user interface 1938. A notification may be displayed or provided to the user in real time or immediately after it becomes available at the user device. For example, after the user device receives a notification, the notification appears immediately on user interface 1938 and/or user interface 1910. Sometimes the user may be performing an important task (e.g., editing a document) and may not want to be distracted by notifications. In some examples, the user provides a voice input 1956, such as "do not notify me when email is incoming." The digital assistant determines that the user intent is to turn off notifications for incoming email. Based on determining that the user intent is to turn off notifications for incoming email, the digital assistant determines that the notification configuration is to be changed, and therefore the user intent indicates a request to perform a task.

In some embodiments, in accordance with a determination that the user intent indicates a request to perform a task, the digital assistant instantiates a process to perform the task. For example, as shown in FIG. 19C, in accordance with the user intent, the digital assistant instantiates a notification configuration process to turn off notifications for email. In some examples, the digital assistant further provides spoken output 1976, such as "OK, I turned off notifications for email." In some examples, as shown in fig. 19C, the digital assistant provides a user interface 1942 (e.g., a snippet or window) that enables the user to manipulate the result of performing the task. For example, user interface 1942 provides an affordance 1943 (e.g., a cancel button). If the user wishes to continue receiving email notifications, for example, the user may select affordance 1943 to turn notifications for email back on. In some examples, the user may also provide another voice input, such as "notify me when email is incoming," to turn notifications for email back on.

Referring to fig. 19D, in some embodiments, the digital assistant may not be able to complete the task based on the user's speech input, and thus a user interface may be provided to enable the user to perform the task. As shown in fig. 19D, in some examples, the user provides a voice input 1958, such as "display a custom message on my screen saver." The digital assistant determines that the user's intent is to change the screen saver setting to display a custom message. The digital assistant further determines that the user intent is to change the display configuration, and therefore the user intent indicates a request to perform a task.

In some embodiments, in accordance with a determination that the user intent indicates a request to perform a task, the digital assistant instantiates a process associated with the user device to perform the task. In some examples, if the digital assistant is unable to complete the task based on the user intent, it provides a user interface that enables the user to perform the task. For example, based on the voice input 1958, the digital assistant may not be able to determine the content of the custom message to be displayed on the screen saver and thus may not be able to complete the task of displaying the custom message. As shown in fig. 19D, in some examples, the digital assistant instantiates a display configuration process and provides a user interface 1946 (e.g., a snippet or window) to enable the user to manually alter the screen saver settings. As another example, the digital assistant provides a link 1944 (e.g., a link to the display configuration) that enables the user to perform the task. The user selects link 1944 by using an input device such as a mouse, finger, or stylus. Upon receiving the user's selection, the digital assistant instantiates a display configuration process and displays the user interface 1946 to enable the user to alter the screen saver settings. In some examples, the digital assistant further provides spoken output 1978, such as "You can browse screen saver options in the screen saver configuration."
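A sketch of this fallback path, with hypothetical helper names (none of them are from the patent): when a required parameter (the custom message) cannot be resolved from the speech input, the assistant opens the relevant configuration user interface and offers a link instead of completing the task itself.

    def handle_screen_saver_request(parameters: dict) -> str:
        message = parameters.get("custom_message")
        if message is None:
            # The task cannot be completed; hand control back to the user.
            open_configuration_panel("display/screen_saver")   # hypothetical UI call
            provide_link("Screen Saver settings")              # hypothetical UI call
            return "You can browse screen saver options in the screen saver configuration."
        apply_screen_saver_message(message)
        return f"OK, your screen saver will show: {message}"

    def open_configuration_panel(panel: str) -> None:
        print(f"[UI] opening {panel}")

    def provide_link(label: str) -> None:
        print(f"[UI] link: {label}")

    def apply_screen_saver_message(message: str) -> None:
        print(f"[config] screen saver message set to {message!r}")

    # "Display a custom message on my screen saver" with no message resolved:
    print(handle_screen_saver_request({}))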

7. Procedure for operating a digital assistant - intelligent search and object management

Fig. 20A-20G illustrate a flow diagram of an exemplary process 2000 for operating a digital assistant, according to some embodiments. Process 2000 may be performed using one or more devices 104, 108, 200, 400, or 600 (fig. 1, 2A, 4, or 6A-6B). The operations in process 2000 are optionally combined or split, and/or the order of some operations is optionally altered.

Referring to FIG. 20A, at block 2002, an affordance that will invoke a digital assistant service is displayed on a display associated with a user device prior to receiving a first speech input. At block 2003, the digital assistant is invoked in response to receiving the predetermined phrase. At block 2004, the digital assistant is invoked in response to receiving a selection of the affordance.

At block 2006, a first speech input is received from a user. At block 2008, contextual information associated with the user device is identified. At block 2009, the context information includes at least one of: user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data.

At block 2010, a user intent is determined based on the first speech input and the contextual information. At block 2012, to determine the user intent, one or more executable intents are determined. At block 2013, one or more parameters associated with the actionable intent are determined.

Referring to fig. 20B, at block 2015, it is determined whether the user intends to perform a task using a search process or an object management process. The search process is configured to search data stored inside or outside the user device, and the object management process is configured to manage objects associated with the user device. At block 2016, it is determined whether the voice input includes one or more keywords that represent a search process or an object management process. At block 2018, a determination is made as to whether the task is associated with a search. At block 2020, in accordance with a determination that the task is associated with a search, it is determined whether a search process is required to perform the task. At block 2021, in accordance with a determination that a search process is not required to perform the task, a voice request is output to select the search process or the object management process, and a second voice input is received from the user. The second speech input indicates a selection of a search process or an object management process.

At block 2022, in accordance with a determination that a search process is not required to perform the task, it is determined whether the task is to be performed using the search process or the object management process based on a predetermined configuration.
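A compact sketch of the routing described in blocks 2015 through 2022 follows. The keyword sets and the predetermined default are illustrative assumptions, not values from the patent, and a real implementation would rely on the natural-language processing described earlier rather than simple keyword matching.

    SEARCH_KEYWORDS = {"search", "find", "look for"}
    OBJECT_KEYWORDS = {"open", "copy", "move", "delete", "folder"}

    def choose_process(utterance: str, task_is_search: bool,
                       predetermined_default: str = "search_process") -> str:
        text = utterance.lower()
        if any(keyword in text for keyword in SEARCH_KEYWORDS):
            return "search_process"
        if any(keyword in text for keyword in OBJECT_KEYWORDS):
            return "object_management_process"
        if task_is_search:
            # Either process could perform the search; fall back to the predetermined
            # configuration (block 2022) or ask the user to choose (block 2021).
            return predetermined_default
        return "object_management_process"

    print(choose_process("find the photos I took last week", task_is_search=True))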

Referring to FIG. 20C, at block 2024, in accordance with a determination that the task is not associated with a search, it is determined whether the task is associated with managing at least one object. At block 2025, in accordance with a determination that the task is not associated with managing the at least one object, at least one of the following is performed: determining whether the task can be performed using a fourth process available to the user device, and initiating a dialog with the user.

At block 2026, in accordance with a determination that the user intent is to perform the task using a search process, the task is performed using the search process. At block 2028, at least one object is searched using a search process. At block 2029, the at least one object includes at least one of a folder or a file. At block 2030, the file includes at least one of a photograph, audio, or video. At block 2031, the file is stored inside or outside the user device. At block 2032, the search for at least one of the folders or files is based on metadata associated with the folder or file. At block 2034, at least one object includes a communication. At block 2035, the communication includes at least one of an email, a message, a notification, or a voicemail. At block 2036, metadata associated with the communication is searched.

Referring to fig. 20D, at block 2037, the at least one object includes at least one of a contact or a calendar. At block 2038, at least one object includes an application. At block 2039, the at least one object includes an online information source.

At block 2040, in accordance with a determination that the user intent is to perform a task using an object management process, the task is performed using the object management process. At block 2042, the task is associated with a search and at least one object is searched using an object management process. At block 2043, the at least one object includes at least one of a folder or a file. At block 2044, the file includes at least one of a photograph, audio, or video. At block 2045, the file is stored internally or externally to the user device. At block 2046, the search for at least one of the folders or files is based on metadata associated with the folder or file.

At block 2048, the object management process is instantiated. Instantiating the object management process includes invoking the object management process, generating a new instance of the object management process, or executing an existing instance of the object management process.

Referring to FIG. 20E, at block 2049, at least one object is created. At block 2050, at least one object is stored. At block 2051, at least one object is compressed. At block 2052, at least one object is moved from the first physical storage device or virtual storage device to the second physical storage device or virtual storage device. At block 2053, at least one object is copied from the first physical or virtual storage to the second physical or virtual storage. At block 2054, at least one object stored in the physical storage or the virtual storage is deleted. At block 2055, at least one object stored in the physical storage or the virtual storage is restored. At block 2056, at least one object is marked. The marking of the at least one object is at least one of visible or associated with metadata of the at least one object. At block 2057, at least one object is backed up according to a predetermined time period for backup. At block 2058, at least one object is shared among one or more electronic devices communicatively connected to the user device.
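The object management operations in blocks 2049 through 2058 can be pictured as methods on a single process object. The sketch below uses the Python standard library for a few of them; it is an illustration under assumed names, not the patent's object management process, and a real assistant would, for example, move deleted objects to the trash rather than removing them outright.

    import gzip
    import os
    import shutil

    class ObjectManagementProcess:
        def create(self, path: str) -> None:
            open(path, "a").close()                 # create an empty file

        def copy(self, src: str, dst: str) -> None:
            shutil.copy2(src, dst)                  # copy between storages

        def move(self, src: str, dst: str) -> None:
            shutil.move(src, dst)                   # move between storages

        def compress(self, path: str) -> str:
            with open(path, "rb") as f_in, gzip.open(path + ".gz", "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)     # compress the object
            return path + ".gz"

        def delete(self, path: str) -> None:
            os.remove(path)                         # a real assistant would use the trash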

Referring to FIG. 20F, at block 2060, a response is provided based on the results of performing the task using the search process or the object management process. At block 2061, a first user interface is displayed that provides results of performing a task using a search process or an object management process. At block 2062, links associated with results of performing tasks using the search process are displayed. At block 2063, spoken output is provided as a function of results of performing the task using the search process or the object management process.

At block 2064, an affordance is provided that enables a user to manipulate results of performing a task using a search process or an object management process. At block 2065, a third process that operates using the results of performing the task is instantiated.

Referring to fig. 20F, at block 2066, a confidence level is determined. At block 2067, the confidence level represents the accuracy of determining the user's intent based on the first speech input and the contextual information associated with the user device. At block 2068, the confidence level represents the accuracy of determining whether the user intends to perform a task using a search process or an object management process.

Referring to FIG. 20G, at block 2069, the confidence level represents the accuracy of performing a task using a search process or an object management process.

At block 2070, a response is provided according to the determined confidence level. At block 2071, it is determined whether the confidence level is greater than or equal to a threshold confidence level. At block 2072, in accordance with a determination that the confidence level is greater than or equal to the threshold confidence level, a first response is provided. At block 2073, in accordance with a determination that the confidence level is less than the threshold confidence level, a second response is provided.
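A minimal sketch of the confidence-gated response in blocks 2066 through 2073 is shown below; the threshold value and the two response texts are illustrative assumptions, not values given by the patent.

    CONFIDENCE_THRESHOLD = 0.8

    def provide_response(results: list, confidence: float) -> str:
        if confidence >= CONFIDENCE_THRESHOLD:
            # First response: present the results directly.
            return "Here is what I found: " + ", ".join(results)
        # Second response: hedge and ask the user to confirm the interpretation.
        return "I may have misunderstood. Did you want me to search your files?"

    print(provide_response(["Expense Report.pdf"], confidence=0.92))
    print(provide_response(["Expense Report.pdf"], confidence=0.55))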

8. Procedure for operating a digital assistant - continuity

Fig. 21A-21E illustrate a flow diagram of an exemplary process 2100 for operating a digital assistant, according to some embodiments. Process 2100 may be performed using one or more devices 104, 108, 200, 400, 600, 1400, 1500, 1600, or 1700 (fig. 1, fig. 2A, fig. 4, fig. 6A-6B, fig. 14A-14D, fig. 15A-15D, fig. 16A-16C, and fig. 17A-17E). Some operations in process 2100 are optionally combined or split, and/or the order of some operations is optionally changed.

Referring to FIG. 21A, at block 2102, an affordance that will invoke a digital assistant service is displayed on a display associated with a user device prior to receiving a first voice input. At block 2103, the digital assistant is invoked in response to receiving the predetermined phrase. At block 2104, the digital assistant is invoked in response to receiving a selection of the affordance.

At block 2106, a first speech input is received from a user to perform a task. At block 2108, contextual information associated with the user device is identified. At block 2109, a user device is configured to provide a plurality of user interfaces. At block 2110, the user device includes a laptop computer, a desktop computer, or a server. At block 2112, the context information includes at least one of: user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data.

At block 2114, a user intent is determined based on the speech input and the contextual information. At block 2115, to determine the user intent, one or more executable intents are determined. At block 2116, one or more parameters associated with the actionable intent are determined.

Referring to FIG. 21B, at block 2118, it is determined whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device, as a function of the user intent. At block 2120, the first electronic device comprises a laptop computer, a desktop computer, a server, a smartphone, a tablet, a set-top box, or a watch. At block 2121, a determination is made whether the task is to be performed at the user device or the first electronic device based on one or more keywords included in the speech input. At block 2122, it is determined whether performing the task at the user device satisfies a performance criterion. At block 2123, performance criteria are determined based on one or more user preferences. At block 2124, performance criteria are determined based on the device configuration data. At block 2125, performance criteria are dynamically updated. At block 2126, in accordance with a determination that performing the task at the user device satisfies the performance criteria, it is determined that the task is to be performed at the user device.

Referring to FIG. 21C, at block 2128, in accordance with a determination that performing the task at the user device does not meet the performance criteria, a determination is made whether performing the task at the first electronic device meets the performance criteria. At block 2130, in accordance with a determination that performing the task at the first electronic device satisfies the performance criteria, it is determined that the task will be performed at the first electronic device. At block 2132, in accordance with a determination that performance criteria are not met for performing the task at the first electronic device, a determination is made whether performance criteria are met for performing the task at the second electronic device.
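The cascade in blocks 2122 through 2132 can be sketched as trying the user device first and then each connected device in turn. The specific criteria checked below (screen size and battery level) are illustrative assumptions; per blocks 2123 through 2125, actual criteria would come from user preferences and device configuration data and may be updated dynamically.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Device:
        name: str
        screen_inches: float
        battery_percent: int

    def meets_performance_criteria(device: Device, needs_large_screen: bool) -> bool:
        if needs_large_screen and device.screen_inches < 10:
            return False
        return device.battery_percent > 20

    def pick_device(user_device: Device, connected: list,
                    needs_large_screen: bool) -> Optional[Device]:
        if meets_performance_criteria(user_device, needs_large_screen):
            return user_device
        for device in connected:          # first, then second electronic device, ...
            if meets_performance_criteria(device, needs_large_screen):
                return device
        return None

    phone = Device("phone", 5.5, 80)
    laptop = Device("laptop", 13.3, 60)
    print(pick_device(phone, [laptop], needs_large_screen=True).name)   # laptop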

At block 2134, in accordance with a determination that the task is to be performed at the user device and that content for performing the task is located at a remote location, content for performing the task is received. At block 2135, at least a portion of the content is received from the first electronic device. At least a portion of the content is stored in the first electronic device. At block 2136, at least a portion of the content is received from a third electronic device.

Referring to fig. 21D, at block 2138, in accordance with a determination that the task is to be performed at the first electronic device and that content for performing the task is located remotely from the first electronic device, the content for performing the task is provided to the first electronic device. At block 2139, at least a portion of the content is provided from the user device to the first electronic device. At least a portion of the content is stored in the user device. At block 2140, at least a portion of the content is caused to be provided from the fourth electronic device to the first electronic device. At least a portion of the content is stored in the fourth electronic device.
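Blocks 2134 through 2140 reduce to a small routing decision once the target device and the content location are known. The sketch below is illustrative only; the device labels and the transfer callbacks are hypothetical.

    def route_content(task_device: str, content_location: str,
                      receive, provide, cause_to_provide) -> None:
        if task_device == "user_device" and content_location != "user_device":
            receive(src=content_location)                                   # blocks 2134-2136
        elif task_device == "first_device":
            if content_location == "user_device":
                provide(dst="first_device")                                 # block 2139
            elif content_location != "first_device":
                cause_to_provide(src=content_location, dst="first_device")  # block 2140

    route_content(
        "first_device", "fourth_device",
        receive=lambda **kw: print("receive", kw),
        provide=lambda **kw: print("provide", kw),
        cause_to_provide=lambda **kw: print("cause to provide", kw),
    )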

At block 2142, the task is to be performed at the user device, and a first response is provided at the user device using the received content. At block 2144, the task is performed at the user device. At block 2145, performing the task at the user device is a continuation of a task partially performed remotely from the user device. At block 2146, a first user interface associated with the task to be performed at the user device is displayed. At block 2148, a link associated with the task to be performed at the user device is provided. At block 2150, spoken output is provided according to the task to be performed at the user device.

Referring to fig. 21E, at block 2152, the task is to be performed at the first electronic device, and a second response is provided at the user device. At block 2154, the task is caused to be performed at the first electronic device. At block 2156, the task performed at the first electronic device is a continuation of a task partially performed remotely from the first electronic device. At block 2158, spoken output is provided according to the task to be performed at the first electronic device. At block 2160, an affordance is provided that enables the user to select another electronic device for performing the task.

9. Procedure for operating a digital assistant - system configuration management

Fig. 22A-22D illustrate a flow diagram of an exemplary process 2200 for operating a digital assistant, according to some embodiments. Process 2200 may be performed using one or more devices 104, 108, 200, 400, 600, or 1800 (fig. 1, fig. 2A, fig. 4, fig. 6A-6B, and fig. 18C-18D). The operations in process 2200 are optionally combined or split, and/or the order of some operations is optionally altered.

Referring to FIG. 22A, at block 2202, an affordance that will invoke a digital assistant service is displayed on a display associated with a user device prior to receiving a voice input. At block 2203, the digital assistant is invoked in response to receiving the predetermined phrase. At block 2204, the digital assistant is invoked in response to receiving a selection of the affordance.

At block 2206, voice input is received from a user to manage one or more system configurations of the user device. The user device is configured to provide multiple user interfaces simultaneously. At block 2207, one or more system configurations of the user device include an audio configuration. At block 2208, the one or more system configurations of the user device include a date and time configuration. At block 2209, the one or more system configurations of the user device include a spoken configuration. At block 2210, the one or more system configurations of the user device include a display configuration. At block 2211, the one or more system configurations of the user device include an input device configuration. At block 2212, the one or more system configurations of the user device include a network configuration. At block 2213, the one or more system configurations of the user device include a notification configuration.

Referring to fig. 22B, at block 2214, the one or more system configurations of the user device include a printer configuration. At block 2215, the one or more system configurations of the user device include a security configuration. At block 2216, the one or more system configurations of the user device comprise a backup configuration. At block 2217, the one or more system configurations of the user device comprise an application configuration. At block 2218, the one or more system configurations of the user device comprise a user interface configuration.

At block 2220, context information associated with the user device is identified. At block 2223, the context information includes at least one of: user-specific data, device configuration data, and sensor data. At block 2224, a user intent is determined based on the speech input and the contextual information. At block 2225, one or more actionable intents are determined. At block 2226, one or more parameters associated with the actionable intent are determined.

Referring to FIG. 22C, at block 2228, it is determined whether the user intent indicates an information request or a request to perform a task. At block 2229, it is determined whether the user's intent will change the system configuration.

At block 2230, in accordance with a determination that the user intent indicates an information request, a spoken response to the information request is provided. At block 2231, a status of one or more system configurations is obtained from the information request. At block 2232, a spoken response is provided according to the state of the one or more system configurations.

At block 2234, in addition to providing a spoken response to the information request, a first user interface is displayed that provides information according to a state of the one or more system configurations. At block 2236, in addition to providing the spoken response to the information request, a link associated with the information request is provided.

At block 2238, in accordance with a determination that the user intent indicates a request to perform a task, a process associated with the user device is instantiated to perform the task. At block 2239, the task is performed using the process. At block 2240, a first spoken output is provided based on the results of performing the task.

Referring to FIG. 22D, at block 2242, a second user interface is provided to enable a user to manipulate the results of performing the task. At block 2244, the second user interface includes a link associated with a result of performing the task.

At block 2246, a third user interface is provided to enable the user to perform the task. At block 2248, the third user interface includes a link that enables the user to perform the task. At block 2250, a second spoken output associated with the third user interface is provided.

10. Electronic device - intelligent search and object management

Fig. 23 shows a functional block diagram of an electronic device 2300 configured according to the principles of various described examples, including those described with reference to fig. 8A-8F, 9A-9H, 10A-10B, 11A-11D, 12A-12D, 13A-13C, 14A-14D, 15A-15D, 16A-16C, 17A-17E, 18A-18F, and 19A-19D. The functional blocks of the device can optionally be implemented by hardware, software, or a combination of hardware and software that perform the principles of the various described examples. Those skilled in the art will appreciate that the functional blocks described in fig. 23 may optionally be combined or separated into sub-blocks in order to implement the principles of the various examples. Thus, the description herein optionally supports any possible combination, separation, or further definition of the functional blocks described herein.

As shown in fig. 23, electronic device 2300 may include a microphone 2302 and a processing unit 2308. In some examples, the processing unit 2308 includes a receiving unit 2310, an identifying unit 2312, a determining unit 2314, an executing unit 2316, a providing unit 2318, an invoking unit 2320, a displaying unit 2322, an outputting unit 2324, an initiating unit 2326, a searching unit 2328, a generating unit 2330, an executing unit 2332, a creating unit 2334, an instantiating unit 2335, a storing unit 2336, a compressing unit 2338, a moving unit 2339, a copying unit 2340, a deleting unit 2342, a restoring unit 2344, a marking unit 2346, a backup unit 2348, a sharing unit 2350, a causing unit 2352, and an obtaining unit 2354.

In some examples, processing unit 2308 is configured to receive (e.g., with receiving unit 2310) a first voice input from a user; identifying (e.g., with the identification unit 2312) contextual information associated with the user device; and determining (e.g., with the determining unit 2314) the user intent based on the first speech input and the context information.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the user intends to perform a task using a search process or an object management process. The search process is configured to search data stored inside or outside the user device, and the object management process is configured to manage objects associated with the user device.

In some examples, in accordance with a determination that the user intent is to perform a task using a search process, the processing unit 2308 is configured to perform (e.g., with the execution unit 2316) the task using the search process. In some examples, in accordance with a determination that the user intent is to perform a task using an object management process, the processing unit 2308 is configured to perform (e.g., with the execution unit 2316) the task using the object management process.

In some examples, prior to receiving the first speech input, processing unit 2308 is configured to display (e.g., with display unit 2322) the affordance on a display associated with the user device to invoke the digital assistant service.

In some examples, processing unit 2308 is configured to invoke (e.g., with invoking unit 2320) the digital assistant in response to receiving the predetermined phrase.

In some examples, the processing unit 2308 is configured to invoke (e.g., with the invoking unit 2320) the digital assistant in response to receiving the selection of the affordance.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) one or more actionable intents; and determining (e.g., with determining unit 2314) one or more parameters associated with the actionable intent.

In some examples, the context information includes at least one of: user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data.

In some examples, the processing unit 2308 is configured to determine (e.g., using the determining unit 2314) whether the voice input includes one or more keywords representing a search process or an object management process.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the task is associated with a search. In accordance with a determination that the task is associated with a search, the processing unit 2308 is configured to determine (e.g., with the determination unit 2314) whether performing the task requires a search process; and in accordance with a determination that the task is not associated with the search, determining (e.g., with determining unit 2314) whether the task is associated with managing at least one object.

In some examples, the task is associated with a search, and in accordance with a determination that a search process is not required to perform the task, the processing unit 2308 is configured to output (e.g., with the output unit 2324) a spoken request indicating a selection of the search process or the object management process, or receive (e.g., with the receiving unit 2310) a second voice input from the user indicating a selection of the search process or the object management process.

In some examples, the task is associated with a search, and in accordance with a determination that a search process is not required to perform the task, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the task is to be performed using the search process or an object management process based on a predetermined configuration.

In some examples, the task is not associated with a search, and in accordance with a determination that the task is not associated with managing the at least one object, the processing unit 2308 is configured to perform (e.g., with the execution unit 2316) at least one of: determining (e.g., with the determining unit 2314) whether the task can be performed using a fourth process available to the user device; and initiating (e.g., with the initiating unit 2326) a dialog with the user.

In some examples, the processing unit 2308 is configured to search (e.g., with the search unit 2328) for at least one object using a search process.

In some examples, the at least one object includes at least one of a folder or a file. The file includes at least one of a photograph, audio, or video. The file is stored inside or outside the user device.

In some examples, searching for at least one of the folders or files is based on metadata associated with the folders or files.

In some examples, the at least one object includes a communication. The communication includes at least one of an email, a message, a notification, or a voicemail.

In some examples, the processing unit 2308 is configured to search (e.g., with the search unit 2328) for metadata associated with the communication.

In some examples, the at least one object includes at least one of a contact or a calendar.

In some examples, the at least one object includes an application.

In some examples, the at least one object includes an online information source.

In some examples, the task is associated with a search, and the processing unit 2308 is configured to search (e.g., with the search unit 2328) for at least one object using an object management process.

In some examples, the at least one object includes at least one of a folder or a file. The file includes at least one of a photograph, audio, or video. The file is stored inside or outside the user device.

In some examples, searching for at least one of the folders or files is based on metadata associated with the folders or files.

In some examples, processing unit 2308 is configured to instantiate (e.g., with instantiation unit 2335) an object management process. Instantiation of an object management process includes calling the object management process, generating a new instance of the object management process, or executing an existing instance of the object management process.

In some examples, processing unit 2308 is configured to create (e.g., with creating unit 2334) at least one object.

In some examples, processing unit 2308 is configured to store (e.g., with storage unit 2336) at least one object.

In some examples, processing unit 2308 is configured to compress (e.g., with compression unit 2338) at least one object.

In some examples, processing unit 2308 is configured to move (e.g., with moving unit 2339) at least one object from a first physical or virtual storage to a second physical or virtual storage.

In some examples, the processing unit 2308 is configured to copy (e.g., using the copy unit 2340) at least one object from a first physical or virtual storage to a second physical or virtual storage.

In some examples, processing unit 2308 is configured to delete (e.g., with deletion unit 2342) at least one object stored in physical storage or virtual storage.

In some examples, processing unit 2308 is configured to restore (e.g., with restoring unit 2344) at least one object stored in physical storage or virtual storage.

In some examples, processing unit 2308 is configured to mark (e.g., with marking unit 2346) at least one object. The marking of the at least one object is at least one of visible or associated with metadata of the at least one object.

In some examples, processing unit 2308 is configured to backup (e.g., with backup unit 2348) at least one object according to a predetermined time period for backup.

In some examples, the processing unit 2308 is configured to share (e.g., with the sharing unit 2350) at least one object among one or more electronic devices communicatively connected to the user device.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a response based on results of performing a task using a search process or an object management process.

In some examples, the processing unit 2308 is configured to display (e.g., with the display unit 2322) a first user interface that provides results of performing tasks using a search process or an object management process.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a link associated with a result of performing a task using a search process.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) spoken output based on results of performing tasks using a search process or an object management process.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) an affordance that enables a user to manipulate results of performing a task using a search process or an object management process.

In some examples, processing unit 2308 is configured to instantiate (e.g., with instantiation unit 2335) a third process that operates using results of executing the task.

In some examples, processing unit 2308 is configured to determine (e.g., with determining unit 2314) a confidence level; and providing (e.g., with the providing unit 2318) a response according to the determined confidence level.

In some examples, the confidence level represents an accuracy of determining the user intent based on the first speech input and the contextual information associated with the user device.

In some examples, the confidence level represents the accuracy of determining whether the user intends to perform a task using a search process or an object management process.

In some examples, the confidence level represents the accuracy of performing a task using a search process or an object management process.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the confidence level is greater than or equal to a threshold confidence level. In accordance with a determination that the confidence level is greater than or equal to the threshold confidence level, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a first response; and in accordance with a determination that the confidence level is less than the threshold confidence level, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a second response.

11. Electronic device - continuity

In some examples, processing unit 2308 is configured to receive (e.g., with receiving unit 2310) voice input from a user to perform a task; identifying (e.g., with the identification unit 2312) contextual information associated with the user device; and determining (e.g., with the determining unit 2314) the user intent based on the voice input and the contextual information associated with the user device.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the task is to be performed at the user device or at a first electronic device communicatively connected to the user device, depending on the user intent.

In some examples, in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, the processing unit 2308 is configured to receive (e.g., with the receiving unit 2310) the content for performing the task.

In some examples, in accordance with a determination that the task is to be performed at the first electronic device and that content for performing the task is located remotely from the first electronic device, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) the content for performing the task to the first electronic device.

In some examples, a user device is configured to provide multiple user interfaces.

In some examples, the user device comprises a laptop computer, a desktop computer, or a server.

In some examples, the first electronic device comprises a laptop computer, a desktop computer, a server, a smartphone, a tablet, a set-top box, or a watch.

In some examples, processing unit 2308 is configured to display (e.g., with display unit 2322) an affordance for invoking a digital assistant on a display of a user device prior to receiving the voice input.

In some examples, processing unit 2308 is configured to invoke (e.g., with invoking unit 2320) the digital assistant in response to receiving the predetermined phrase.

In some examples, the processing unit 2308 is configured to invoke (e.g., with the invoking unit 2320) the digital assistant in response to receiving the selection of the affordance.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) one or more actionable intents; and determining (e.g., with determining unit 2314) one or more parameters associated with the actionable intent.

In some examples, the context information includes at least one of: user-specific data, sensor data, and user device configuration data.

In some examples, determining whether the task is to be performed at the user device or the first electronic device is based on one or more keywords included in the speech input.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether performing the task at the user device meets performance criteria.

In some examples, in accordance with a determination that performing the task at the user device satisfies the performance criteria, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) that the task is to be performed at the user device.

In some examples, in accordance with a determination that performing the task at the user device does not meet the performance criteria, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether performing the task at the first electronic device meets the performance criteria.

In some examples, in accordance with a determination that performing the task at the first electronic device satisfies the performance criteria, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) that the task is to be performed at the first electronic device.

In some examples, in accordance with a determination that performing the task at the first electronic device does not satisfy the performance criteria, the processing unit 2308 is configured to determine (e.g., with the determination unit 2314) whether performing the task at the second electronic device satisfies the performance criteria.

In some examples, the performance criteria are determined based on one or more user preferences.

In some examples, the performance criteria are determined based on device configuration data.

In some examples, the performance criteria are dynamically updated.

In some examples, in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, the processing unit 2308 is configured to receive (e.g., with the receiving unit 2310) at least a portion of the content from the first electronic device, where the at least a portion of the content is stored in the first electronic device.

In some examples, in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, the processing unit 2308 is configured to receive (e.g., with the receiving unit 2310) at least a portion of the content from a third electronic device.

In some examples, in accordance with a determination that the task is to be performed at the first electronic device and that content for performing the task is located remotely from the first electronic device, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) at least a portion of the content from the user device to the first electronic device, wherein the at least a portion of the content is stored in the user device.

In some examples, in accordance with a determination that the task is to be performed at the first electronic device and that content for performing the task is located remotely from the first electronic device, the processing unit 2308 is configured to cause (e.g., with the causing unit 2352) at least a portion of the content to be provided from the fourth electronic device to the first electronic device, wherein the at least a portion of the content is stored in the fourth electronic device.

In some examples, the task is to be performed at the user device, and the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) the first response at the user device using the received content.

In some examples, the processing unit 2308 is configured to perform (e.g., with the execution unit 2316) the task at the user device.

In some examples, performing the task at the user device is a continuation of a task partially performed remotely from the user device.

In some examples, the processing unit 2308 is configured to display (e.g., with the display unit 2322) a first user interface associated with a task to be performed at a user device.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a link associated with a task to be performed at a user device.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) spoken output in accordance with a task to be performed at the user device.

In some examples, the task is to be performed at the first electronic device, and the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) the second response at the user device.

In some examples, the processing unit 2308 is configured to cause (e.g., with the causing unit 2352) a task to be performed at the first electronic device.

In some examples, the task performed at the first electronic device is a continuation of a task partially performed remotely from the first electronic device.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) spoken output according to a task to be performed at the first electronic device.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) an affordance that enables a user to select another electronic device for performing a task.

12. Electronic device - system configuration management

In some examples, the processing unit 2308 is configured to receive (e.g., with the receiving unit 2310) voice input from a user to manage one or more system configurations of the user device. The user device is configured to provide multiple user interfaces simultaneously.

In some examples, the processing unit 2308 is configured to identify (e.g., with the identification unit 2312) context information associated with the user device; and determining (e.g., with the determining unit 2314) the user intent based on the voice input and the context information.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the user intent is to indicate an information request or to indicate a request to perform a task.

In some examples, in accordance with a determination that the user intent indicates an information request, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a spoken response to the information request.

In some examples, in accordance with a determination that the user intent indicates a request to perform a task, processing unit 2308 is configured to instantiate (e.g., with instantiation unit 2335) a process associated with the user device to perform the task.
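
As an informal illustration of the branching described above, the following Python sketch dispatches a classified user intent either to a spoken status reply or to a (hypothetical) process that changes the configuration. All names in the sketch, including ConfigIntent and handle_intent, are illustrative assumptions and are not part of the described embodiments.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class ConfigIntent:
        # Hypothetical classification result, not the embodiment's data model.
        is_information_request: bool
        configuration: str            # e.g. "volume", "wifi", "brightness"
        desired_state: Optional[str]  # only meaningful for a task request

    def handle_intent(intent: ConfigIntent, state: Dict[str, str]) -> str:
        if intent.is_information_request:
            # Information request: speak the current state of the configuration.
            return f"Your {intent.configuration} is currently {state[intent.configuration]}."
        # Task request: stand-in for instantiating a process that changes the state.
        state[intent.configuration] = intent.desired_state or ""
        return f"OK, {intent.configuration} is now {intent.desired_state}."

    state = {"volume": "40%"}
    print(handle_intent(ConfigIntent(True, "volume", None), state))
    print(handle_intent(ConfigIntent(False, "volume", "70%"), state))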

In some examples, processing unit 2308 is configured to display (e.g., with display unit 2322) the affordance on a display of the user device to invoke the digital assistant prior to receiving the voice input.

In some examples, processing unit 2308 is configured to invoke (e.g., with invoking unit 2320) a digital assistant service in response to receiving the predetermined phrase.

In some examples, the processing unit 2308 is configured to invoke (e.g., with the invoking unit 2320) a digital assistant service in response to receiving the selection of the affordance.

In some examples, the one or more system configurations of the user device include an audio configuration.

In some examples, the one or more system configurations of the user device include a date and time configuration.

In some examples, the one or more system configurations of the user device include a spoken configuration.

In some examples, the one or more system configurations of the user device include a display configuration.

In some examples, the one or more system configurations of the user device include an input device configuration.

In some examples, the one or more system configurations of the user device include a network configuration.

In some examples, the one or more system configurations of the user device include a notification configuration.

In some examples, the one or more system configurations of the user device include a printer configuration.

In some examples, the one or more system configurations of the user device include a security configuration.

In some examples, the one or more system configurations of the user device include a backup configuration.

In some examples, the one or more system configurations of the user device include an application configuration.

In some examples, the one or more system configurations of the user device include a user interface configuration.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) one or more actionable intents, and determine (e.g., with the determining unit 2314) one or more parameters associated with the actionable intent.
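
As a rough picture of what an actionable intent and its parameters might look like in code, consider the following Python sketch; the ActionableIntent record and the example parameter names are hypothetical and chosen only for illustration.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class ActionableIntent:
        # Hypothetical record: the resolved intent name plus the parameters
        # filled in from the speech input and the context information.
        name: str
        parameters: Dict[str, str] = field(default_factory=dict)

    # "Turn the screen brightness up to 80 percent" might resolve to:
    intent = ActionableIntent("set_configuration",
                              {"configuration": "display.brightness", "value": "80%"})
    print(intent)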

In some examples, the context information includes at least one of: user-specific data, device configuration data, and sensor data.

In some examples, the processing unit 2308 is configured to determine (e.g., with the determining unit 2314) whether the user intends to change the system configuration.

In some examples, the processing unit 2308 is configured to obtain (e.g., with the obtaining unit 2354) a status of one or more system configurations according to the information request, and provide (e.g., with the providing unit 2318) a spoken response according to the status of the one or more system configurations.

In some examples, in accordance with a determination that the user intent indicates an information request, the processing unit 2308 is configured to display (e.g., with the display unit 2322) a first user interface that provides information in accordance with a state of one or more system configurations, in addition to providing a spoken response to the information request.

In some examples, in accordance with a determination that the user intent indicates an information request, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a link associated with the information request in addition to providing a spoken response to the information request.

In some examples, in accordance with a determination that the user intent indicates a request to perform a task, the processing unit 2308 is configured to perform (e.g., with the execution unit 2316) the task using the process.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a first spoken output based on a result of performing the task.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a second user interface that enables a user to manipulate results of performing the task.

In some examples, the second user interface includes a link associated with a result of performing the task.

In some examples, in accordance with a determination that the user intent indicates a request to perform a task, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a third user interface that enables the user to perform the task.

In some examples, the third user interface includes a link that enables the user to perform the task.

In some examples, the processing unit 2308 is configured to provide (e.g., with the providing unit 2318) a second spoken output associated with the third user interface.

The operations described above with respect to fig. 23 are optionally implemented by components depicted in fig. 1, 2A, 4, 6A-6B, or 7A-7B. For example, receiving operation 2310, identifying operation 2312, determining operation 2314, performing operation 2316, and providing operation 2318 are optionally implemented by processor 220. It would be clear to a person of ordinary skill in the art how other processes can be implemented based on the components depicted in fig. 1, 2A, 4, 6A-6B, or 7A-7B.

Those skilled in the art will understand that the functional blocks described in fig. 23 are optionally combined or separated into sub-blocks in order to implement the principles of the various described embodiments. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein. For example, processing unit 2308 may have an associated "controller" unit operatively coupled with processing unit 2308 to enable operation. This controller unit is not separately shown in fig. 23, but is understood to be within the grasp of one of ordinary skill in the art designing devices, such as device 2300, having the processing unit 2308. As another example, in some embodiments, one or more units (such as the receiving unit 2310) may be hardware units other than the processing unit 2308. Accordingly, the description herein optionally supports combinations, subcombinations, and/or further definitions of the functional blocks described herein.

Exemplary methods, non-transitory computer-readable storage media, systems, and electronic devices are set forth in the following items:

Operating a digital assistant - intelligent search and object management

1. A method for providing digital assistant services, the method comprising:

at a user device having memory and one or more processors:

receiving a first speech input from a user;

identifying context information associated with the user device;

determining a user intent based on the first speech input and the contextual information;

determining whether the user intent is to perform a task using a search process or an object management process, wherein the search process is configured to search data stored inside or outside the user device and the object management process is configured to manage objects associated with the user device;

in accordance with a determination that the user intent is to perform the task using the search process, performing the task using the search process; and

in accordance with the determination that the user intent is to perform the task using the object management process, performing the task using the object management process.

2. The method of item 1, further comprising, prior to receiving the first speech input:

displaying, on a display associated with the user device, an affordance for invoking the digital assistant service.

3. The method of item 2, further comprising:

invoking the digital assistant in response to receiving a predetermined phrase.

4. The method of any of items 2 to 3, further comprising:

invoking the digital assistant in response to receiving a selection of the affordance.

5. The method of any of items 1-4, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.

6. The method of any of items 1-5, wherein the context information comprises at least one of: user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data.

7. The method of any of items 1-6, wherein determining whether the user intent is to perform the task using the search process or the object management process comprises:

determining whether the speech input includes one or more keywords representing the search process or the object management process.
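
A minimal sketch of the keyword heuristic in item 7 follows; the keyword lists and the function name select_process are illustrative assumptions rather than an exhaustive vocabulary.

    # Illustrative keyword lists; a real vocabulary would be far larger.
    SEARCH_KEYWORDS = {"search", "find", "look up", "show me"}
    OBJECT_KEYWORDS = {"copy", "move", "delete", "create", "back up", "compress"}

    def select_process(utterance: str) -> str:
        text = utterance.lower()
        if any(keyword in text for keyword in OBJECT_KEYWORDS):
            return "object_management_process"
        if any(keyword in text for keyword in SEARCH_KEYWORDS):
            return "search_process"
        return "undetermined"  # fall back to further disambiguation

    print(select_process("Find the photos from last weekend"))        # search_process
    print(select_process("Move the report into the Archive folder"))  # object_management_process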

8. The method of any of items 1-7, wherein determining whether the user intent is to perform the task using the search process or the object management process comprises:

determining whether the task is associated with a search;

in accordance with a determination that the task is associated with a search, determining whether the search process is required to execute the task; and

in accordance with a determination that the task is not associated with a search, determining whether the task is associated with managing at least one object.

9. The method of item 8, wherein the task is associated with a search, the method further comprising:

upon determining that the search process is not required to perform the task,

outputting a spoken request to select the search process or the object management process, and

receiving a second voice input from the user indicating the selection of the search process or the object management process.

10. The method of any of items 8-9, wherein the task is associated with a search, the method further comprising:

in accordance with a determination that the search process is not required to perform the task, determining whether the task is to be performed using the search process or the object management process based on a predetermined configuration.

11. The method of item 8, wherein the task is not associated with a search, the method further comprising:

in accordance with a determination that the task is not associated with managing the at least one object, performing at least one of:

determining whether the task can be performed using a fourth procedure available to the user device; and

initiating a dialog with the user.
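
The routing described in items 8 through 11 can be summarized with the following Python sketch. The function route_task and its return labels are hypothetical; the sketch only mirrors the order of the determinations listed above.

    from typing import Optional

    def route_task(is_search_related: bool,
                   requires_search_process: Optional[bool],
                   manages_object: bool,
                   fourth_process_available: bool) -> str:
        # Mirrors the order of determinations in items 8 through 11.
        if is_search_related:
            if requires_search_process:
                return "search_process"
            # Ambiguous: ask the user (item 9) or fall back to a
            # predetermined configuration (item 10).
            return "ask_user_or_use_default"
        if manages_object:
            return "object_management_process"
        # Neither searching nor managing an object (item 11).
        return "fourth_process" if fourth_process_available else "initiate_dialog"

    print(route_task(True, True, False, False))   # search_process
    print(route_task(False, None, False, False))  # initiate_dialog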

12. The method of any of items 1-10, wherein performing the task using the search process comprises:

searching for at least one object using the search process.

13. The method of item 12, wherein the at least one object comprises at least one of a folder or a file.

14. The method of item 13, wherein the file comprises at least one of a photograph, audio, or video.

15. The method of any of items 13 to 14, wherein the file is stored internally or externally to the user device.

16. The method of any of items 13-15, wherein searching for at least one of the folder or the file is based on metadata associated with the folder or the file.

17. The method of any of items 12 to 16, wherein the at least one object comprises a communication.

18. The method of item 17, wherein the communication comprises at least one of an email, a message, a notification, or a voicemail.

19. The method of any of items 17 to 18, further comprising searching for metadata associated with the communication.

20. The method of any of items 12-19, wherein the at least one object comprises at least one of a contact or a calendar.

21. The method of any of items 12 to 20, wherein the at least one object comprises an application.

22. The method of any of items 12 to 21, wherein the at least one object comprises an online information source.
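
As an informal illustration of the metadata-based searching described in items 12 through 22, the sketch below filters a list of objects (files, folders, communications, and so on) by their metadata. The Obj record and the search_objects function are assumptions introduced for illustration.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Obj:
        kind: str                 # "file", "folder", "email", "contact", ...
        metadata: Dict[str, str]  # e.g. {"name": ..., "date": ..., "sender": ...}

    def search_objects(objects: List[Obj], **criteria: str) -> List[Obj]:
        # Keep only the objects whose metadata matches every criterion.
        return [obj for obj in objects
                if all(obj.metadata.get(key) == value for key, value in criteria.items())]

    inbox = [Obj("email", {"sender": "alice", "subject": "flight"}),
             Obj("file", {"name": "itinerary.pdf", "folder": "Travel"})]
    print(search_objects(inbox, sender="alice"))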

23. The method of any of items 1-22, wherein the task is associated with a search, and wherein performing the task using the object management process comprises:

searching for the at least one object using the object management process.

24. The method of item 23, wherein the at least one object comprises at least one of a folder or a file.

25. The method of item 24, wherein the file comprises at least one of a photograph, audio, or video.

26. The method of any of items 24 to 25, wherein the file is stored internally or externally to the user device.

27. The method of any of items 24-26, wherein searching for at least one of the folder or the file is based on metadata associated with the folder or the file.

28. The method of item 1, wherein performing the task using the object management process comprises instantiating the object management process, wherein instantiating the object management process comprises calling the object management process, generating a new instance of the object management process, or executing an existing instance of the object management process.

29. The method of any of items 1-28, wherein performing the task using the object management process includes creating the at least one object.

30. The method of any of items 1-29, wherein performing the task using the object management process comprises storing the at least one object.

31. The method of any of items 1-30, wherein performing the task using the object management process comprises compressing the at least one object.

32. The method of any of items 1-31, wherein performing the task using the object management process comprises moving the at least one object from a first physical or virtual storage to a second physical or virtual storage.

33. The method of any of items 1-32, wherein performing the task using the object management process comprises copying the at least one object from a first physical or virtual storage to a second physical or virtual storage.

34. The method of any of items 1-33, wherein performing the task using the object management process comprises deleting the at least one object stored in physical storage or virtual storage.

35. The method of any of items 1 to 34, wherein performing the task using the object management process comprises restoring at least one object stored in physical storage or virtual storage.

36. The method of any of items 1-35, wherein performing the task using the object management process comprises marking the at least one object, wherein the marking of the at least one object is at least one of visible or associated with metadata of the at least one object.

37. The method of any of items 1-36, wherein performing the task using the object management process comprises backing up the at least one object according to a predetermined time period for backup.

38. The method of any of items 1-37, wherein performing the task using the object management process includes sharing the at least one object between one or more electronic devices communicatively connected to the user device.
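
A few of the object management operations listed in items 28 through 38 can be pictured with ordinary filesystem calls, as in the Python sketch below; the manage_object dispatcher is a hypothetical stand-in for the object management process, not its actual implementation.

    import shutil
    from pathlib import Path
    from typing import Optional

    def manage_object(operation: str, source: Path,
                      destination: Optional[Path] = None) -> None:
        # Hypothetical dispatcher for a handful of the listed operations,
        # expressed with ordinary filesystem calls.
        if operation == "create":
            source.touch()
        elif operation == "delete":
            source.unlink()
        elif operation == "copy" and destination is not None:
            shutil.copy2(source, destination)
        elif operation == "move" and destination is not None:
            shutil.move(str(source), str(destination))
        else:
            raise ValueError(f"unsupported operation: {operation}")

    # Example (not executed here):
    # manage_object("copy", Path("report.txt"), Path("backup/report.txt"))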

39. The method of any of items 1-38, further comprising: providing a response based on results of performing the task using the search process or the object management process.

40. The method of item 39, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

displaying a first user interface that provides the results of performing the task using the search process or the object management process.

41. The method of items 39-40, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing a link associated with the results of performing the task using the search process.

42. The method of items 39-41, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing spoken output in accordance with the results of performing the task using the search process or the object management process.

43. The method of items 39-42, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing an affordance that enables the user to manipulate the results of performing the task using the search process or the object management process.

44. The method of items 39-43, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

instantiating a third process that operates using the results of executing the task.

45. The method of items 39-44, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

determining a confidence level; and

providing the response in accordance with the determination of the confidence level.

46. The method of item 45, wherein the confidence level represents the accuracy of determining the user intent based on the first speech input and contextual information associated with the user device.

47. The method of any of items 45-46, wherein the confidence level represents the accuracy of determining whether the user intends to perform the task using the search process or the object management process.

48. The method of any of items 45-47, wherein the confidence level represents the accuracy of performing the task using the search process or the object management process.

49. The method of any of items 45-48, wherein providing the response in accordance with the determination of the confidence level comprises:

determining whether the confidence level is greater than or equal to a threshold confidence level;

in accordance with a determination that the confidence level is greater than or equal to the threshold confidence level, providing a first response; and

in accordance with a determination that the confidence level is less than the threshold confidence level, providing a second response.
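
The confidence-level handling of items 45 through 49 amounts to a threshold comparison, as in the sketch below; the 0.7 threshold and the response labels are arbitrary placeholders, not values taken from the description.

    def provide_response(confidence: float, threshold: float = 0.7) -> str:
        # Above the threshold, act on the interpreted intent directly;
        # below it, give a more cautious response (e.g. ask for confirmation).
        if confidence >= threshold:
            return "first_response"
        return "second_response"

    print(provide_response(0.92))  # first_response
    print(provide_response(0.40))  # second_response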

50. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device,

cause the electronic device to:

receiving a first speech input from a user;

identifying context information associated with the user device;

determining a user intent based on the first speech input and the contextual information;

determining whether the user intent is to perform a task using a search process or an object management process, wherein the search process is configured to search data stored inside or outside the user device and the object management process is configured to manage objects associated with the user device;

in accordance with a determination that the user intent is to perform the task using the search process, performing the task using the search process; and

in accordance with the determination that the user intent is to perform the task using the object management process, performing the task using the object management process.

51. An electronic device, the electronic device comprising:

one or more processors;

a memory; and

one or more programs, stored in the memory, the one or more programs including instructions for:

receiving a first speech input from a user;

identifying context information associated with the user device;

determining a user intent based on the first speech input and the context information;

determining whether the user intent is to perform a task using a search process or an object management process, wherein the search process is configured to search data stored inside or outside the user device and the object management process is configured to manage objects associated with the user device;

in accordance with a determination that the user intent is to perform the task using the search process, performing the task using the search process; and

in accordance with the determination that the user intent is to perform the task using the object management process, performing the task using the object management process.

52. An electronic device, the electronic device comprising:

means for receiving a first speech input from a user;

means for identifying context information associated with the user equipment;

means for determining a user intent based on the first speech input and the contextual information;

means for determining whether the user intent is to perform a task using a search process or an object management process, wherein the search process is configured to search data stored inside or outside the user device and the object management process is configured to manage objects associated with the user device;

in accordance with a determination that the user intent is to perform the task using the search process, means for performing the task using the search process; and

means for performing the task using the object management process in accordance with the determination that the user intent is to perform the task using the object management process.

53. An electronic device, the electronic device comprising:

one or more processors;

a memory; and

one or more programs, stored in the memory, the one or more programs including instructions for performing the method of any of items 1-49.

54. An electronic device, the electronic device comprising:

means for performing any one of the methods of items 1-49.

55. A non-transitory computer readable storage medium comprising one or more programs for execution by one or more processors of an electronic device, the one or more programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of items 1-49.

56. A system for operating a digital assistant, the system comprising means for performing any of the methods of items 1-49.

57. An electronic device, the electronic device comprising:

a receiving unit configured to receive a first voice input from a user; and

a processing unit configured to:

identifying context information associated with the user device;

determining a user intent based on the first speech input and the context information;

determining whether the user intent is to perform a task using a search process or an object management process, wherein the search process is configured to search data stored inside or outside the user device and the object management process is configured to manage objects associated with the user device;

in accordance with a determination that the user intent is to perform the task using the search process, performing the task using the search process; and

in accordance with the determination that the user intent is to perform the task using the object management process, performing the task using the object management process.

58. The electronic device of item 57, further comprising, prior to receiving the first speech input:

displaying, on a display associated with the user device, an affordance for invoking the digital assistant service.

59. The electronic device of item 58, further comprising:

instantiating the digital assistant in response to receiving a predetermined phrase.

60. The electronic device of any of items 58-59, further comprising:

instantiating the digital assistant in response to receiving the selection of the affordance.

61. The electronic device of any of items 57-60, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.

62. The electronic device of any of items 57-61, wherein the contextual information comprises at least one of: user-specific data, metadata associated with one or more objects, sensor data, and user device configuration data.

63. The electronic device of any of items 57-62, wherein determining whether the user intent is to perform the task using the search process or the object management process comprises:

determining whether the speech input includes one or more keywords representing the search process or the object management process.

64. The electronic device of any of items 57-63, wherein determining whether the user intent is to perform the task using the search process or the object management process comprises:

determining whether the task is associated with a search;

in accordance with a determination that the task is associated with a search, determining whether the search process is required to execute the task; and

in accordance with a determination that the task is not associated with a search, determining whether the task is associated with managing at least one object.

65. The electronic device of item 64, wherein the task is associated with a search, the electronic device further comprising:

upon determining that the search process is not required to perform the task,

outputting a voice request to select the search process or the object management process, and

receiving a second voice input from the user indicating the selection of the search process or the object management process.

66. The electronic device of any of items 64-65, wherein the task is associated with a search, the electronic device further comprising:

in accordance with a determination that the search process is not required to perform the task, determining whether the task is to be performed using the search process or the object management process based on a predetermined configuration.

67. The electronic device of item 64, wherein the task is not associated with a search, the electronic device further comprising:

in accordance with a determination that the task is not associated with managing the at least one object, performing at least one of:

determining whether the task can be performed using a fourth procedure available to the user device; and

initiating a dialog with the user.

68. The electronic device of any of items 57-66, wherein performing the task using the search process comprises:

searching for at least one object using the search process.

69. The electronic device of item 68, wherein the at least one object comprises at least one of a folder or a file.

70. The electronic device of item 69, wherein the file comprises at least one of a photograph, audio, or video.

71. The electronic device of any of items 69-70, wherein the file is stored internally or externally to the user device.

72. The electronic device of any of items 69-71, wherein searching for at least one of the folder or the file is based on metadata associated with the folder or the file.

73. The electronic device of any of items 68-72, wherein the at least one object includes a communication.

74. The electronic device of item 73, wherein the communication comprises at least one of an email, a message, a notification, or a voicemail.

75. The electronic device of any of items 73-74, further comprising searching for metadata associated with the communication.

76. The electronic device of any of items 68-75, wherein the at least one object comprises at least one of a contact or a calendar.

77. The electronic device of any of items 68-76, wherein the at least one object includes an application.

78. The electronic device of any of items 68-77, wherein the at least one object includes an online information source.

79. The electronic device of any of items 57-78, wherein the task is associated with a search, and wherein performing the task using the object management process comprises:

searching for the at least one object using the object management process.

80. The electronic device of item 79, wherein the at least one object comprises at least one of a folder or a file.

81. The electronic device of item 80, wherein the file comprises at least one of a photograph, audio, or video.

82. The electronic device of any of items 79-81, wherein the file is stored internally or externally to the user device.

83. The electronic device of any of items 79-82, wherein searching for at least one of the folder or the file is based on metadata associated with the folder or the file.

84. The electronic device of item 57, wherein performing the task using the object management process comprises instantiating the object management process, wherein instantiating the object management process comprises invoking the object management process, generating a new instance of the object management process, or executing an existing instance of the object management process.

85. The electronic device of any of items 57-84, wherein performing the task using the object management process includes creating the at least one object.

86. The electronic device of any of items 57-85, wherein performing the task using the object management process includes storing the at least one object.

87. The electronic device of any of items 57-86, wherein performing the task using the object management process includes compressing the at least one object.

88. The electronic device of any of items 57-87, wherein performing the task using the object management process includes moving the at least one object from a first physical or virtual storage to a second physical or virtual storage.

89. The electronic device of any of items 57-88, wherein performing the task using the object management process includes copying the at least one object from a first physical or virtual storage to a second physical or virtual storage.

90. The electronic device of any of items 57-89, wherein performing the task using the object management process includes deleting the at least one object stored in physical storage or virtual storage.

91. The electronic device of any of items 57-90, wherein performing the task using the object management process includes restoring at least one object stored in physical storage or virtual storage.

92. The electronic device of any of items 57-91, wherein performing the task using the object management process includes marking the at least one object, wherein the marking of the at least one object is at least one of visible or associated with metadata of the at least one object.

93. The electronic device of any of items 57-92, wherein performing the task using the object management process includes backing up the at least one object according to a predetermined time period for backup.

94. The electronic device of any of items 57-93, wherein performing the task using the object management process includes sharing the at least one object between one or more electronic devices communicatively connected to the user device.

95. The electronic device of any of items 57-94, further comprising: providing a response based on results of performing the task using the search process or the object management process.

96. The electronic device of item 95, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

displaying a first user interface that provides the results of performing the task using the search process or the object management process.

97. The electronic device of any of items 95-96, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing a link associated with the result of performing the task using the search process.

98. The electronic device of any of items 95-97, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing spoken output in accordance with the results of performing the task using the search process or the object management process.

99. The electronic device of any of items 95-98, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

providing an affordance that enables the user to manipulate the results of performing the task using the search process or the object management process.

100. The electronic device of any of items 95-99, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

instantiating a third process that operates using the results of executing the task.

101. The electronic device of any of items 95-100, wherein providing the response based on the results of performing the task using the search process or the object management process comprises:

determining a confidence level; and

providing the response in accordance with the determination of the confidence level.

102. The electronic device of item 101, wherein the confidence level represents the accuracy of determining the user intent based on the first speech input and contextual information associated with the user device.

103. The electronic device of any of items 101-102, wherein the confidence level represents the accuracy of determining whether the user intends to perform the task using the search process or the object management process.

104. The electronic device of any of items 101-103, wherein the confidence level represents the accuracy of performing the task using the search process or the object management process.

105. The electronic device of any of items 101-104, wherein providing the response in accordance with the determination of the confidence level comprises:

determining whether the confidence level is greater than or equal to a threshold confidence level;

in accordance with a determination that the confidence level is greater than or equal to the threshold confidence level, providing a first response; and

in accordance with a determination that the confidence level is less than the threshold confidence level, providing a second response.

Continuity

1. A method for providing digital assistant services, the method comprising:

at a user device having memory and one or more processors:

receiving a voice input from a user to perform a task;

identifying context information associated with the user device;

determining a user intent based on the speech input and contextual information associated with the user device;

determining, from a user intent, whether the task is to be performed at the user device or a first electronic device communicatively connected to the user device;

in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task; and

in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device.

2. The method of item 1, wherein the user device is configured to provide a plurality of user interfaces.

3. The method of any of items 1-2, wherein the user device comprises a laptop computer, a desktop computer, or a server.

4. The method of any of items 1-3, wherein the first electronic device comprises a laptop computer, a desktop computer, a server, a smartphone, a tablet, a set-top box, or a watch.

5. The method of any of items 1-4, further comprising, prior to receiving the speech input:

displaying an affordance for invoking the digital assistant on a display of the user device.

6. The method of item 5, further comprising:

instantiating the digital assistant in response to receiving a predetermined phrase.

7. The method of any of items 5 to 6, further comprising:

instantiating the digital assistant in response to receiving the selection of the affordance.

8. The method of any of items 1-7, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.

9. The method of any of items 1-8, wherein the context information comprises at least one of: user-specific data, sensor data, and user device configuration data.

10. The method of any of items 1-9, wherein determining whether the task is to be performed at the user device or the first electronic device is based on one or more keywords included in the speech input.

11. The method of any of items 1-10, wherein determining whether the task is to be performed at the user device or the first electronic device comprises:

determining whether performance criteria are met for performing the task at the user device; and

in accordance with a determination that performing the task at the user device satisfies the performance criteria, determining that the task is to be performed at the user device.

12. The method of item 11, further comprising:

in accordance with a determination that performing the task at the user equipment does not satisfy the performance criteria:

determining whether performing the task at the first electronic device satisfies the performance criteria.

13. The method of item 12, further comprising:

in accordance with a determination that performing the task at the first electronic device satisfies the performance criteria, determining that the task is to be performed at the first electronic device; and

in accordance with a determination that performing the task at the first electronic device does not satisfy the performance criteria:

determining whether performing the task at the second electronic device satisfies the performance criteria.

14. The method of any of items 11-13, wherein the performance criteria is determined based on one or more user preferences.

15. The method of any of items 11 to 14, wherein the performance criteria is determined based on the device configuration data.

16. The method of any of items 11 to 15, wherein the performance criteria is dynamically updated.
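
Items 11 through 16 describe selecting a device by walking candidates in order until one satisfies the performance criteria. A minimal Python sketch of that selection follows; the device names and the meets_criteria mapping are illustrative assumptions.

    from typing import Dict, List, Optional

    def choose_device(candidates: List[str],
                      meets_criteria: Dict[str, bool]) -> Optional[str]:
        # Walk the candidates in priority order (user device first, then the
        # first, second, ... electronic device) and return the first one
        # that satisfies the performance criteria.
        for device in candidates:
            if meets_criteria.get(device, False):
                return device
        return None  # no candidate satisfies the criteria

    print(choose_device(
        ["user_device", "first_electronic_device", "second_electronic_device"],
        {"user_device": False, "first_electronic_device": True}))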

17. The method of any of items 1-11 and 14-16, wherein in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task comprises:

receiving at least a portion of the content from the first electronic device, wherein the at least a portion of the content is stored in the first electronic device.

18. The method of any of items 1-11 and 14-17, wherein in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task comprises:

receiving at least a portion of the content from a third electronic device.

19. The method of any of items 1-16, wherein, in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device comprises:

providing at least a portion of the content from the user device to the first electronic device, wherein at least a portion of the content is stored at the user device.

20. The method of any of items 1-16 and 19, wherein, in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device comprises:

causing at least a portion of the content to be provided from a fourth electronic device to the first electronic device, wherein at least a portion of the content is stored at the fourth electronic device.
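
Items 17 through 20 describe moving content depending on where the task runs and where the content is stored. The sketch below summarizes that arrangement with string labels; arrange_content and the device labels are hypothetical and used only to make the branching concrete.

    def arrange_content(task_device: str, content_location: str) -> str:
        # Fetch remote content when the task runs on the user device;
        # push or relay content when the task runs at the first electronic device.
        if task_device == "user_device":
            if content_location != "user_device":
                return f"receive content from {content_location}"
            return "content is already local"
        if content_location == "user_device":
            return "provide content from the user device to the first electronic device"
        return f"cause {content_location} to send content to the first electronic device"

    print(arrange_content("user_device", "first_electronic_device"))
    print(arrange_content("first_electronic_device", "fourth_electronic_device"))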

21. The method of any of items 1-11 and 14-18, wherein the task is to be performed at the user device, the method further comprising providing a first response at the user device using the received content.

22. The method of item 21, wherein providing the first response at the user device comprises:

performing the task at the user device.

23. The method of item 22, wherein performing the task at the user device is a continuation of a task performed remotely, in part, at the user device.

24. The method of any of items 21 to 23, wherein providing the first response at the user device comprises:

displaying a first user interface associated with the task to be performed at the user device.

25. The method of any of items 21 to 24, wherein providing the first response at the user device comprises:

providing a link associated with the task to be performed at the user device.

26. The method of any of items 21 to 25, wherein providing the first response at the user device comprises:

providing spoken output in accordance with the task to be performed at the user device.

27. The method of any of items 1-16 and 19-20, wherein the task is to be performed at the first electronic device, the method further comprising providing a second response at the user device.

28. The method of item 27, wherein providing the second response at the user device comprises:

causing the task to be performed at the first electronic device.

29. The method of item 28, wherein the task to be performed at the first electronic device is a continuation of a task to be performed remotely at the first electronic device.

30. The method of any of items 27 to 29, wherein providing the second response at the user device comprises:

providing spoken output in accordance with the task to be performed at the first electronic device.

31. The method of any of items 27 to 30, wherein providing the second response at the user device comprises:

providing an affordance that enables the user to select another electronic device for performing the task.

32. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device,

cause the electronic device to:

receiving a voice input from a user to perform a task;

identifying context information associated with the user device;

determining a user intent based on the speech input and contextual information associated with the user device;

determining, from a user intent, whether the task is to be performed at the user device or a first electronic device communicatively connected to the user device;

in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task; and

in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device.

33. A user equipment, the user equipment comprising:

one or more processors;

a memory; and

one or more programs, stored in the memory, the one or more programs including instructions for:

receiving a voice input from a user to perform a task;

identifying context information associated with the user device;

determining a user intent based on the speech input and contextual information associated with the user device;

determining, from a user intent, whether the task is to be performed at the user device or a first electronic device communicatively connected to the user device;

in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task; and

in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device.

34. A user equipment, the user equipment comprising:

means for receiving a voice input from a user to perform a task;

means for identifying context information associated with the user equipment;

means for determining a user intent based on a speech input and contextual information associated with the user device;

means for determining, based on a user intent, whether the task is to be performed at the user device or a first electronic device communicatively connected to the user device;

in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, means for receiving the content for performing the task; and

in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located with respect to the first electronic device, means for providing the content for performing the task to the first electronic device.

35. A user equipment, the user equipment comprising:

one or more processors;

a memory; and

one or more programs stored in the memory, the one or more programs including instructions for performing the method of any of items 1-31.

36. A user equipment, the user equipment comprising:

means for performing any one of the methods of items 1-31.

37. A non-transitory computer readable storage medium comprising one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of items 1-31.

38. A system for operating a digital assistant, the system comprising means for performing any of the methods of items 1-31.

39. A user equipment, the user equipment comprising:

a receiving unit configured to receive a voice input from a user to perform a task; and

a processing unit configured to:

identifying context information associated with the user device;

determining a user intent based on the speech input and contextual information associated with the user device;

determining, from a user intent, whether the task is to be performed at the user device or a first electronic device communicatively connected to the user device;

in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task; and

in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located from the first electronic device, providing the content for performing the task to the first electronic device.

40. The user device of item 39, wherein the user device is configured to provide a plurality of user interfaces.

41. The user device of any of items 39-40, wherein the user device comprises a laptop computer, a desktop computer, or a server.

42. The user device of any of items 39-41, wherein the first electronic device comprises a laptop computer, a desktop computer, a server, a smartphone, a tablet, a set-top box, or a watch.

43. The user device of any of items 39-42, further comprising, prior to receiving the speech input:

displaying an affordance for invoking the digital assistant on a display of the user device.

44. The user equipment of item 43, further comprising:

instantiating the digital assistant in response to receiving a predetermined phrase.

45. The user equipment of any of items 43 to 44, further comprising:

instantiating the digital assistant in response to receiving the selection of the affordance.

46. The user device of any of items 39-45, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.

47. The user equipment of any of items 39-46, wherein the context information comprises at least one of: user-specific data, sensor data, and user device configuration data.

48. The user device of any of items 39-47, wherein determining whether the task is performed at the user device or the first electronic device is based on one or more keywords included in the speech input.

49. The user device of any of items 39-48, wherein determining whether the task is to be performed at the user device or at the first electronic device comprises:

determining whether performance criteria are met for performing the task at the user device; and

in accordance with a determination that performing the task at the user device satisfies the performance criteria, determining that the task is to be performed at the user device.

50. The user equipment of item 49, further comprising:

in accordance with a determination that performing the task at the user equipment does not satisfy the performance criteria:

determining whether performing the task at the first electronic device satisfies the performance criteria.

51. The user equipment of item 50, further comprising:

in accordance with a determination that performing the task at the first electronic device satisfies the performance criteria, determining that the task is to be performed at the first electronic device; and

in accordance with a determination that performing the task at the first electronic device does not satisfy the performance criteria:

determining whether performing the task at the second electronic device satisfies the performance criteria.

52. The user device of any of items 49-51, wherein the performance criteria is determined based on one or more user preferences.

53. The user equipment of any of items 49-52, wherein the performance criteria is determined based on the device configuration data.

54. The user equipment of any of items 49-53, wherein the performance criteria is dynamically updated.

55. The user device of any of items 39-49 and 52-54, wherein in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task comprises:

receiving at least a portion of the content from the first electronic device, wherein the at least a portion of the content is stored in the first electronic device.

56. The user device of any of items 39-49 and 52-55, wherein in accordance with a determination that the task is to be performed at the user device and that content for performing the task is remotely located, receiving the content for performing the task comprises:

receiving at least a portion of the content from a third electronic device.

57. The user device of any of items 39-54, wherein, in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located with respect to the first electronic device, providing the content for performing the task to the first electronic device comprises:

providing at least a portion of the content from the user device to the first electronic device, wherein at least a portion of the content is stored at the user device.

58. The user device of any of items 39-54 and 57, wherein, in accordance with a determination that the task is to be performed at the first electronic device and that the content for performing the task is remotely located with respect to the first electronic device, providing the content for performing the task to the first electronic device comprises:

causing at least a portion of the content to be provided from a fourth electronic device to the first electronic device, wherein at least a portion of the content is stored at the fourth electronic device.

59. The user device of any of items 39-49 and 52-56, wherein the task is to be performed at the user device, the user device further comprising providing a first response at the user device using the received content.

60. The user device of item 59, wherein providing the first response at the user device comprises:

performing the task at the user device.

61. The user device of item 60, wherein performing the task at the user device is a continuation of a task performed remotely, in part, at the user device.

62. The user device of any of items 59-61, wherein providing the first response at the user device comprises:

displaying a first user interface associated with the task to be performed at the user device.

63. The user device of any of items 59-62, wherein providing the first response at the user device comprises:

providing a link associated with the task to be performed at the user device.

64. The user device of any of items 59-63, wherein providing the first response at the user device comprises:

providing spoken output in accordance with the task to be performed at the user device.

65. The user device of any of items 39-54 and 57-58, wherein the task is to be performed at the first electronic device, the user device further comprising providing a second response at the user device.

66. The user device of item 65, wherein providing the second response at the user device comprises:

causing the task to be performed at the first electronic device.

67. The user device of item 66, wherein the task to be performed at the first electronic device is a continuation of a task performed remotely at the first electronic device.

68. The user device of any of items 65-67, wherein providing the second response at the user device comprises:

providing spoken output in accordance with the task to be performed at the first electronic device.

69. The user device of any of items 65-68, wherein providing the second response at the user device comprises:

providing an affordance that enables the user to select another electronic device for performing the task.

System configuration management

1. A method for providing digital assistant services, the method comprising:

at a user device having memory and one or more processors:

receiving voice input from a user to manage one or more system configurations of the user device, wherein the user device is configured to provide multiple user interfaces simultaneously;

identifying context information associated with the user device;

determining a user intent based on the speech input and contextual information;

determining whether the user intent indicates an information request or a request to perform a task;

in accordance with a determination that the user intent indicates an information request, providing a spoken response to the information request; and

in accordance with a determination that the user intent indicates a request to perform a task, instantiating a process associated with the user device to perform the task.

2. The method of item 1, further comprising, prior to receiving the speech input:

displaying an affordance for invoking the digital assistant on a display of the user device.

3. The method of item 2, further comprising:

instantiating the digital assistant service in response to receiving a predetermined phrase.

4. The method of item 2, further comprising:

instantiating the digital assistant service in response to receiving the selection of the affordance.

5. The method of any of items 1-4, wherein the one or more system configurations of the user device comprise an audio configuration.

6. The method of any of items 1-5, wherein the one or more system configurations of the user device comprise a date and time configuration.

7. The method of any of items 1-6, wherein the one or more system configurations of the user device comprise a spoken configuration.

8. The method of any of items 1-7, wherein the one or more system configurations of the user device comprise a display configuration.

9. The method of any of items 1-8, wherein the one or more system configurations of the user device comprise an input device configuration.

10. The method of any of items 1-9, wherein the one or more system configurations of the user device comprise a network configuration.

11. The method of any of items 1-10, wherein the one or more system configurations of the user device comprise a notification configuration.

12. The method of any of items 1-11, wherein the one or more system configurations of the user device comprise a printer configuration.

13. The method of any of items 1-12, wherein the one or more system configurations of the user device comprise a security configuration.

14. The method of any of items 1-13, wherein the one or more system configurations of the user device comprise a backup configuration.

15. The method of any of items 1-14, wherein the one or more system configurations of the user device comprise an application configuration.

16. The method of any of items 1-15, wherein the one or more system configurations of the user device comprise a user interface configuration.

17. The method of any of items 1-16, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.
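
Item 17 separates intent determination into an actionable intent and its associated parameters. The Python sketch below illustrates one way such a pair could be represented and filled in; the intent names (set_volume, set_do_not_disturb) and the regex-based extraction are illustrative assumptions only.

import re
from dataclasses import dataclass


@dataclass
class ActionableIntent:
    # One actionable intent plus its associated parameters (item 17).
    name: str
    parameters: dict


def parse_actionable_intent(voice_input: str) -> ActionableIntent:
    # Illustrative extraction only; a real system would use trained NLU models.
    text = voice_input.lower()
    match = re.search(r"volume to (\d+)", text)
    if match:
        return ActionableIntent("set_volume", {"level": int(match.group(1))})
    if "do not disturb" in text:
        return ActionableIntent("set_do_not_disturb", {"enabled": "off" not in text})
    return ActionableIntent("unknown", {})


if __name__ == "__main__":
    print(parse_actionable_intent("Set the volume to 30"))
    print(parse_actionable_intent("Turn off do not disturb"))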

18. The method of any of items 1-17, wherein the context information comprises at least one of: user-specific data, device configuration data, and sensor data.

19. The method of any of items 1-18, wherein determining whether the user intent indicates an information request or a request to perform a task comprises:

determining whether the user intent will change a system configuration.

20. The method of any of items 1-19, wherein, in accordance with a determination that the user intent indicates an information request, providing the spoken response to the information request comprises:

obtaining one or more system configuration states according to the information request; and

providing the spoken response according to the one or more system configuration states.
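
Items 19 and 20 can be read together: classify the utterance by whether it would change a system configuration, and if not, answer it from the current configuration states. The Python sketch below is a minimal, assumed reading; the state dictionary, verb list, and phrasing are not drawn from the items.

CONFIG_STATES = {"wifi": "on", "bluetooth": "off", "volume": "40%"}  # assumed states
CHANGE_VERBS = ("turn", "set", "enable", "disable", "switch", "change")


def intent_changes_configuration(voice_input: str) -> bool:
    # Item 19: treat the input as a task request if it asks for a change.
    return any(verb in voice_input.lower() for verb in CHANGE_VERBS)


def spoken_response_for_information_request(voice_input: str) -> str:
    # Item 20: obtain the relevant configuration states and verbalize them.
    text = voice_input.lower()
    mentioned = [key for key in CONFIG_STATES if key in text] or list(CONFIG_STATES)
    return " ".join(f"{key} is {CONFIG_STATES[key]}." for key in mentioned)


if __name__ == "__main__":
    query = "Is bluetooth on?"
    if not intent_changes_configuration(query):
        print(spoken_response_for_information_request(query))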

21. The method of any of items 1-20, in accordance with a determination that the user intent indicates an information request, further comprising, in addition to providing the spoken response to the information request:

displaying a first user interface providing information according to the status of the one or more system configurations.

22. The method of any of items 1-21, in accordance with a determination that the user intent indicates an information request, further comprising, in addition to providing the spoken response to the information request:

providing a link associated with the information request.

23. The method of any of items 1-22, wherein, in accordance with a determination that the user intent indicates a request to perform a task, instantiating the process associated with the user device to perform the task comprises:

performing the task using the process.

24. The method of item 23, further comprising:

providing a first spoken output based on a result of performing the task.

25. The method of any of items 23-24, further comprising:

providing a second user interface that enables the user to manipulate results of performing the task.

26. The method of item 25, wherein the second user interface includes a link associated with the result of performing the task.
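
Items 23-26 describe the task branch: the instantiated process performs the task, a first spoken output reports the result, and a second user interface (including a link) lets the user manipulate that result. The Python sketch below is a hypothetical rendering of that sequence; the TaskResult type, the dictionary-based UI description, and the link format are assumptions.

from dataclasses import dataclass


@dataclass
class TaskResult:
    config_key: str
    new_value: object


def perform_task(config: dict, key: str, value: object) -> TaskResult:
    # Item 23: the instantiated process performs the requested change.
    config[key] = value
    return TaskResult(config_key=key, new_value=value)


def first_spoken_output(result: TaskResult) -> str:
    # Item 24: a spoken output based on the result of performing the task.
    return f"Done. {result.config_key} is now {result.new_value}."


def second_user_interface(result: TaskResult) -> dict:
    # Items 25-26: a UI description that lets the user manipulate the result,
    # including a (hypothetical) link associated with that result.
    return {
        "control": "slider",
        "value": result.new_value,
        "actions": ["adjust", "undo"],
        "link": f"settings://{result.config_key}",
    }


if __name__ == "__main__":
    config = {"brightness": 70}
    result = perform_task(config, "brightness", 50)
    print(first_spoken_output(result))
    print(second_user_interface(result))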

27. The method of any of items 1-19 and 23-26, wherein, in accordance with a determination that the user intent indicates a request to perform a task, instantiating the process associated with the user device to perform the task comprises:

providing a third user interface that enables the user to perform the task.

28. The method of item 27, wherein the third user interface includes a link that enables the user to perform the task.

29. The method of any of items 27-28, further comprising providing a second spoken output associated with the third user interface.
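
Items 27-29 cover the case where, instead of completing the change itself, the process surfaces a third user interface containing a link that enables the user to perform the task, together with a second spoken output associated with that interface. The short Python sketch below assumes a made-up settings:// link scheme and hypothetical function names.

def third_user_interface(config_key: str) -> dict:
    # Items 27-28: an interface containing a link that enables the user to
    # perform the task themselves (the deep-link scheme is hypothetical).
    link = f"settings://{config_key}"
    return {"title": f"Open {config_key} settings", "link": link}


def second_spoken_output(ui: dict) -> str:
    # Item 29: a spoken output associated with the third user interface.
    return f"You can make this change here: {ui['title']}."


if __name__ == "__main__":
    ui = third_user_interface("network")
    print(ui)
    print(second_spoken_output(ui))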

30. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by one or more processors of an electronic device, cause the electronic device to:

receive voice input from a user to manage one or more system configurations of the user device, wherein the user device is configured to provide multiple user interfaces simultaneously;

identify context information associated with the user device;

determine a user intent based on the voice input and the context information;

determine whether the user intent indicates an information request or a request to perform a task;

in accordance with a determination that the user intent indicates an information request, provide a spoken response to the information request; and

in accordance with a determination that the user intent indicates a request to perform a task, instantiate a process associated with the user device to perform the task.

31. An electronic device, the electronic device comprising:

one or more processors;

a memory; and

one or more programs, stored in the memory, the one or more programs including instructions for:

receiving voice input from a user to manage one or more system configurations of the user device, wherein the user device is configured to provide multiple user interfaces simultaneously;

identifying context information associated with the user device;

determining a user intent based on the voice input and the context information;

determining whether the user intent indicates an information request or a request to perform a task;

in accordance with a determination that the user intent indicates an information request, providing a spoken response to the information request; and

in accordance with a determination that the user intent indicates a request to perform a task, instantiating a process associated with the user device to perform the task.

32. An electronic device, the electronic device comprising:

means for receiving voice input from a user to manage one or more system configurations of the user device, wherein the user device is configured to provide a plurality of user interfaces simultaneously;

means for identifying context information associated with the user device;

means for determining a user intent based on the voice input and the context information;

means for determining whether the user intent indicates an information request or a request to perform a task;

in accordance with a determination that the user intent indicates an information request, means for providing a spoken response to the information request; and

in accordance with a determination that the user intent indicates a request to perform a task, means for instantiating a process associated with the user device to perform the task.

33. An electronic device, the electronic device comprising:

one or more processors;

a memory; and

one or more programs stored in the memory, the one or more programs including instructions for performing the method of any of items 1-29.

34. An electronic device, the electronic device comprising:

means for performing any one of the methods of items 1-29.

35. A non-transitory computer readable storage medium comprising one or more programs for execution by one or more processors of an electronic device, the one or more programs including instructions, which when executed by the one or more processors, cause the electronic device to perform the method of any of items 1-29.

36. A system for operating a digital assistant, the system comprising means for performing any of the methods of items 1-29.

37. An electronic device, the electronic device comprising:

a receiving unit configured to receive voice input from a user to manage one or more system configurations of the user device, wherein the user device is configured to provide a plurality of user interfaces simultaneously; and

a processing unit configured to:

identifying context information associated with the user device;

determining a user intent based on the voice input and the context information;

determining whether the user intent indicates an information request or a request to perform a task;

in accordance with a determination that the user intent indicates an information request, providing a spoken response to the information request; and

in accordance with a determination that the user intent indicates a request to perform a task, instantiating a process associated with the user device to perform the task.

38. The electronic device of item 37, further comprising, prior to receiving the voice input:

displaying an affordance for invoking the digital assistant on a display of the user device.

39. The electronic device of item 38, further comprising:

instantiating the digital assistant service in response to receiving a predetermined phrase.

40. The electronic device of item 38, further comprising:

instantiating the digital assistant service in response to receiving a selection of the affordance.

41. The electronic device of any of items 37-40, wherein the one or more system configurations of the user device comprise an audio configuration.

42. The electronic device of any of items 37-41, wherein the one or more system configurations of the user device comprise a date and time configuration.

43. The electronic device of any of items 37-42, wherein the one or more system configurations of the user device comprise a spoken configuration.

44. The electronic device of any of items 37-43, wherein the one or more system configurations of the user device comprise a display configuration.

45. The electronic device of any of items 37-44, wherein the one or more system configurations of the user device include an input device configuration.

46. The electronic device of any of items 37-45, wherein the one or more system configurations of the user device comprise a network configuration.

47. The electronic device of any of items 37-46, wherein the one or more system configurations of the user device comprise a notification configuration.

48. The electronic device of any of items 37-47, wherein the one or more system configurations of the user device comprise a printer configuration.

49. The electronic device of any of items 37-48, wherein the one or more system configurations of the user device comprise a security configuration.

50. The electronic device of any of items 37-49, wherein the one or more system configurations of the user device comprise a backup configuration.

51. The electronic device of any of items 37-50, wherein the one or more system configurations of the user device comprise an application configuration.

52. The electronic device of any of items 37-51, wherein the one or more system configurations of the user device comprise a user interface configuration.

53. The electronic device of any of items 37-52, wherein determining the user intent comprises:

determining one or more actionable intents; and

determining one or more parameters associated with the actionable intent.

54. The electronic device of any of items 37-53, wherein the context information comprises at least one of: user-specific data, device configuration data, and sensor data.

55. The electronic device of any of items 37-54, wherein determining whether the user intent indicates an information request or a request to perform a task comprises:

determining whether the user intent will change a system configuration.

56. The electronic device of any of items 37-55, wherein, in accordance with a determination that the user intent indicates an information request, providing the spoken response to the information request comprises:

obtaining one or more system configuration states according to the information request; and

providing the spoken response according to the one or more system configuration states.

57. The electronic device of any of items 37-56, in accordance with a determination that the user intent indicates an information request, further comprising, in addition to providing the spoken response to the information request:

displaying a first user interface providing information according to the status of the one or more system configurations.

58. The electronic device of any of items 37-57, in accordance with a determination that the user intent indicates an information request, further comprising, in addition to providing the spoken response to the information request:

providing a link associated with the information request.

59. The electronic device of any of items 37-58, wherein, in accordance with a determination that the user intent indicates a request to perform a task, instantiating the process associated with the user device to perform the task comprises:

performing the task using the process.

60. The electronic device of item 59, further comprising:

providing a first spoken output based on a result of performing the task.

61. The electronic device of any of items 59-60, further comprising:

providing a second user interface that enables the user to manipulate results of performing the task.

62. The electronic device of item 61, wherein the second user interface includes a link associated with the result of performing the task.

63. The electronic device of any of items 37-55 and 59-62, wherein, in accordance with a determination that the user intent indicates a request to perform a task, instantiating the process associated with the user device to perform the task comprises:

providing a third user interface that enables the user to perform the task.

64. The electronic device of item 63, wherein the third user interface includes a link that enables the user to perform the task.

65. The electronic device of any of items 63-64, further comprising providing a second spoken output associated with the third user interface.

The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the technology and their practical applications, to thereby enable others skilled in the art to best utilize the technology and various embodiments with various modifications as are suited to the particular use contemplated.

Although the present disclosure and embodiments have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. It is to be understood that such changes and modifications are to be considered as included within the scope of the present disclosure and embodiments as defined by the appended claims.
