Time-based word segmentation


1. A method, comprising:

determining, by a computing device, one or more time thresholds based on a plurality of prior user inputs using a machine learning model;

receiving, by the computing device, a second input of at least one second text character at a subsequent time after receiving a first input of at least one first text character at an initial time;

determining, by the computing device, a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes a space character between the at least one first text character and the at least one second text character, and the first character sequence does not include the space character between the at least one first text character and the at least one second text character;

determining, by the computing device, a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence;

adjusting, by the computing device, the second score based on an amount of time between the first input and the second input using at least one of the one or more time thresholds to determine a third score associated with the second character sequence;

determining, by the computing device, whether to output an indication of the first character sequence or an indication of the second character sequence based on the first score and the third score; and

in response to determining to output the indication of the second character sequence, outputting, by the computing device and for display, the indication of the second character sequence.

2. The method of claim 1, further comprising:

updating, by the computing device, at least one of the one or more time thresholds based on the first input and the second input using the machine learning model.

3. The method of claim 1, wherein adjusting the second score comprises, in response to determining that the amount of time between the first input and the second input is greater than the at least one of the one or more time thresholds, increasing, by the computing device, the second score to determine the third score.

4. The method of any one of claims 1-3, wherein:

receiving the first input comprises detecting, by the computing device, a first selection of one or more keys of a keyboard; and

receiving the second input comprises detecting, by the computing device, a second selection of the one or more keys of the keyboard.

5. The method of claim 4, wherein the keyboard is a graphical keyboard or a physical keyboard.

6. The method of any one of claims 1-3, wherein:

receiving the first input comprises: detecting, by the computing device, a first handwriting input at a presence-sensitive input device; and

receiving the second input comprises: detecting, by the computing device, a second handwriting input at the presence-sensitive input device.

7. The method of claim 6, further comprising:

determining, by the computing device, a first location of the presence-sensitive input device at which the first input of the at least one first text character was received based on the first handwriting input;

determining, by the computing device, a second location of the presence-sensitive input device at which the second input of the at least one second text character was received based on the second handwriting input; and

adjusting, by the computing device, the second score based on a distance between the first location and the second location to determine the third score.

8. The method of claim 7, wherein adjusting the second score comprises:

increasing, by the computing device, the second score based on the distance in response to determining that the distance satisfies a distance threshold; and

reducing, by the computing device, the second score based on the distance in response to determining that the distance does not satisfy the distance threshold.

9. The method according to any one of claims 1-3, further comprising:

in response to determining to output the indication of the first character sequence:

refraining from outputting, by the computing device, the indication of the second character sequence; and

outputting, by the computing device and for display, the indication of the first character sequence.

10. A computing device, comprising:

a presence-sensitive display;

at least one processor; and

a storage device storing at least one module executable by the at least one processor to:

determine one or more time thresholds based on a plurality of prior user inputs using a machine learning model;

receive, at a subsequent time, an indication of a second input of at least one second text character detected by the presence-sensitive display after receiving, at an initial time, an indication of a first input of at least one first text character detected by the presence-sensitive display;

determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes a space character between the at least one first text character and the at least one second text character, and the first character sequence does not include the space character between the at least one first text character and the at least one second text character;

determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence;

adjust the second score based on an amount of time between the first input and the second input using at least one of the one or more time thresholds to determine a third score associated with the second character sequence;

determine whether to output an indication of the first character sequence or an indication of the second character sequence based on the first score and the third score; and

in response to determining to output the indication of the second character sequence, output, for display by the presence-sensitive display, the indication of the second character sequence.

11. The computing device of claim 10, wherein the at least one module is further executable by the at least one processor to update at least one of the one or more time thresholds based on the first input and the second input using the machine learning model.

12. The computing device of claim 10, wherein the at least one module is further executable by the at least one processor to adjust the second score by being executable by the at least one processor to: in response to determining that the amount of time between the first input and the second input is greater than the at least one of the one or more time thresholds, increase the second score to determine the third score.

13. The computing device of any one of claims 10-12, wherein:

the first input comprises a first handwriting input;

the second input comprises a second handwriting input; and

the at least one module is further executable by the at least one processor to adjust the second score by being at least executable to:

determine a first location of the presence-sensitive display at which the first input of the at least one first text character was received based on the first handwriting input;

determine a second location of the presence-sensitive display at which the second input of the at least one second text character was received based on the second handwriting input; and

adjust the second score based on a distance between the first location and the second location to determine the third score.

14. A system comprising means for performing any of the methods recited in claims 1-9.

15. A computer-readable storage medium encoded with instructions that, when executed, cause one or more processors of a wearable device to perform any of the methods of claims 1-9.

Background

Some computing devices (e.g., mobile phones, tablet computers, etc.) may provide a graphical keyboard or handwriting input feature as part of a graphical user interface for composing text using a presence-sensitive input device, such as a trackpad or touchscreen. Such computing devices may rely on auto-completion and character recognition systems to correct spelling and grammar errors, perform word segmentation (e.g., by inserting space characters to divide text input into multiple words), and perform other character and word recognition techniques for assisting a user in entering typed or handwritten text. However, the capabilities of some auto-completion systems may be limited, and corrections may be made that are inconsistent with the intended text of the user. Thus, the user may need to make additional effort to remove, delete, or otherwise correct the erroneous corrections.

Drawings

Fig. 1 is a conceptual diagram illustrating an example computing device configured to divide a text input into two or more words in accordance with one or more techniques of this disclosure.

Fig. 2 is a block diagram illustrating an example computing device configured to divide a text input into two or more words in accordance with one or more aspects of the present disclosure.

Fig. 3 is a conceptual diagram of an example distribution of total score increases that vary based on duration between text entry portions, according to one or more techniques of this disclosure.

Fig. 4 is a flow diagram illustrating example operations of an example computing device configured to divide a text input into two or more words, according to one or more aspects of the present disclosure.

Detailed Description

In general, this disclosure relates to techniques for dividing a text input into one or more words by applying a language model and/or a spatial model to the text input in conjunction with temporal characteristics of the text input. For example, a computing device may provide a graphical keyboard or handwriting input feature as part of a graphical user interface through which a user may provide text input (e.g., a sequence of text characters) using a presence-sensitive input component of the computing device, such as a trackpad or touchscreen. As feedback that the computing device is accurately interpreting the text input, the computing device may present a graphical output generated based on the text input. Rather than merely rendering the precise sequence of text characters derived from the text input, the computing device analyzes the sequence to identify word boundaries and spelling or grammar errors, and automatically inserts spaces and corrects errors before rendering the graphical output at the screen.

The computing device utilizes the language model and/or the spatial model to determine, with a degree of certainty or "total score" (e.g., a probability derived from the language model score and/or the spatial model score), whether a portion of the text input is intended to represent one or more individual letters, combinations of letters, or words in a lexicon (e.g., a dictionary). If the language model and/or the spatial model indicate that a portion of the text input is likely a misspelling of one or more letters, combinations of letters, or words in the lexicon, the computing device may replace the misspelled portion of the received text input with one or more corrected letters, combinations of letters, or words from the lexicon. The computing device may insert a space into the text input at each word boundary identified by the language model and/or the spatial model to clearly divide the graphical output of the text input into one or more clearly identifiable words.

To improve the accuracy of the language model and/or the spatial model and to better perform word segmentation, the computing device also uses temporal characteristics of the text input to determine whether a particular portion of the text input represents a word break or space, even between words that are not the top-ranked candidates in the dictionary. In other words, even when a word break or space is unlikely in a particular language context, the computing device uses the language model and/or the spatial model in conjunction with the temporal characteristics of the input to determine whether the user intends to enter a word break or "space" in the text input.

For example, the computing device may infer that a short delay or pause in receiving text input of two consecutive characters is an indication that the user does not intend to specify a space or word boundary in the text input, and infer that a long delay in receiving text input of two consecutive characters is an indication that the user intends to enter a space or word boundary in the text input. Thus, if the computing device detects a short delay in receiving consecutive character inputs, the computing device may ignore the short delay and treat the consecutive character inputs as forming part of a single word. However, if the computing device detects a long delay in receiving consecutive character inputs, the computing device may increase the overall score of the word pair that includes a word break or space between the consecutive characters. The computing device may adjust the total score according to the duration of the delay in order to increase the likelihood that the computing device will more accurately identify word breaks or spaces based on intended pauses in the text input.

Fig. 1 is a conceptual diagram illustrating a computing device 100 as an example computing device configured to divide a text input into two or more words, according to one or more techniques of this disclosure. In the example of fig. 1, computing device 100 is a wearable computing device (e.g., a computerized watch or so-called smart watch device). However, in other examples, computing device 100 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a laptop computer, a portable gaming device, a portable media player, an e-book reader, a television platform, an automotive computing platform or system, a fitness tracker, or any other type of mobile or non-mobile computing device that receives typed or handwritten text input from a user.

Computing device 100 may include a presence-sensitive display 112. Presence-sensitive display 112 of computing device 100 may serve as an input component for computing device 100 and as an output component. Presence-sensitive display 112 is implemented using various technologies. For example, presence-sensitive display 112 may function as a presence-sensitive input device using a presence-sensitive screen, such as a resistive touch screen, a surface acoustic wave touch screen, a capacitive touch screen, a projected capacitive touch screen, a pressure-sensitive screen, an acoustic pulse recognition touch screen, a camera and display system, or another presence-sensitive screen technology. Presence-sensitive display 112 may serve as an output component, such as a display device using any one or more of a Liquid Crystal Display (LCD), dot matrix display, Light Emitting Diode (LED) display, Organic Light Emitting Diode (OLED) display, electronic ink, or similar monochrome or color display capable of outputting visual information to a user of computing device 100.

Presence-sensitive display 112 of computing device 100 may include a presence-sensitive screen that receives tactile user input from a user of computing device 100 and presents output. Presence-sensitive display 112 may receive an indication of a tactile user input by detecting one or more tap and/or non-tap gestures from a user of computing device 100 (e.g., the user touching or pointing to one or more locations of presence-sensitive display 112 with a finger or stylus), and in response to the input, computing device 100 may cause presence-sensitive display 112 to present output. Presence-sensitive display 112 may present output as part of a graphical user interface (e.g., screen shots 114A and 114B) that may be related to functions provided by computing device 100, such as receiving text input from a user. For example, presence-sensitive display 112 may present a graphical keyboard that user 118 may provide keyboard-based text input and/or handwriting input features that user 118 may provide handwritten text input.

User 118 may interact with computing device 100 by providing one or more tap or non-tap gestures at or near presence-sensitive display 112 for entering text input. When user 118 enters handwritten text input, as opposed to keyboard-based text input, the handwritten text input may be printed, cursive, or any other form of writing or drawing. In the example of FIG. 1, user 118 writes (e.g., with a finger or stylus) a mix of printed and cursive letters h-i-t-h-e-r-e between times t0 and t13. FIG. 1 shows that user 118 has written the letter h starting at time t0 and ending at time t1, the letter i starting at time t2 and ending at time t3, and the letter t starting at time t4 and ending at time t5. After the pause between times t5 and t6, FIG. 1 shows that user 118 has written the letter h again, beginning at time t6 and ending at time t7, the letter e, beginning at time t8 and ending at time t9, the letter r, beginning at time t10 and ending at time t11, and the letter e again, beginning at time t12 and ending at time t13.

Computing device 100 may include a text entry module 120 and a character recognition module 122. Modules 120 and 122 may perform operations using software, hardware, firmware, or a mixture of hardware, software, and/or firmware that resides in computing device 100 and executes on computing device 100. Computing device 100 may utilize multiple processors to execute modules 120 and 122 and/or may execute modules 120 and 122 as virtual machines executing on underlying hardware. In some examples, presence-sensitive display 112 and modules 120 and 122 may be disposed remotely from computing device 100, and computing device 100 may access presence-sensitive display 112 and modules 120 and 122 remotely, e.g., as one or more network services accessible via a network cloud.

Text entry module 120 may manage a user interface provided by computing device 100 at presence-sensitive display 112 for handling text input from a user. For example, text entry module 120 may cause computing device 100 to present a graphical keyboard or handwriting input feature as part of a graphical user interface (e.g., screenshot 114A) through which a user, such as user 118, may provide text input (e.g., a sequence of text characters) using presence-sensitive display 112. As a form of feedback that computing device 100 is accurately receiving handwritten text input at presence-sensitive display 112, text entry module 120 may cause computing device 100 to display a trace or "ink" (e.g., screenshot 114A) that corresponds to the locations of presence-sensitive display 112 at which the text input was received. In addition to, or as an alternative to, such feedback, text entry module 120 may cause computing device 100 to present, as graphical output (e.g., screenshot 114B), the individual characters that computing device 100 infers from the text input.

When user 118 provides a tap or non-tap gesture input at presence-sensitive display 112, text entry module 120 may receive information from presence-sensitive display 112 regarding an indication of the user input detected at presence-sensitive display 112. Text entry module 120 may determine a sequence of touch events based on the information received from presence-sensitive display 112. Each touch event in the sequence may include data regarding where, when, and in what direction user input was detected by presence-sensitive display 112. Text entry module 120 can invoke character recognition module 122 to process and interpret text characters associated with the text input by outputting the sequence of touch events to character recognition module 122. In response to outputting the sequence of touch events, text entry module 120 may receive from character recognition module 122 an indication of one or more text characters or words separated by spaces that character recognition module 122 derives from the touch events. Text entry module 120 can cause presence-sensitive display 112 to present the text characters received from character recognition module 122 as a graphical output (e.g., screenshot 114B).
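The time-ordered touch-event sequence described above can be illustrated with a short sketch. The following Python code is illustrative only; the names TouchEvent and to_touch_events are hypothetical and do not appear in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """One sample of user input detected at the presence-sensitive display."""
    x: float          # horizontal location at which input was detected
    y: float          # vertical location at which input was detected
    timestamp: float  # time, in seconds, at which input was detected
    action: str       # e.g., "down", "move", or "up"

def to_touch_events(samples):
    """Assemble raw (x, y, time, action) samples into a time-ordered sequence."""
    events = [TouchEvent(x, y, t, a) for (x, y, t, a) in samples]
    return sorted(events, key=lambda e: e.timestamp)

# Usage: two samples arriving out of order are sorted by time.
print(to_touch_events([(10.0, 5.0, 0.2, "up"), (10.0, 5.0, 0.0, "down")]))
```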

Character recognition module 122 may perform character-level and/or word-level recognition operations on a sequence of touch events determined by text entry module 120 from text input provided at presence-sensitive display 112. By determining a sequence of text characters based on touch events received from text entry module 120, character recognition module 122 may perform character-level recognition of the text input. Additionally, character recognition module 122 may perform word-level recognition of the text input to determine a word sequence that includes the individual characters determined from the touch events. For example, using a spatial model, character recognition module 122 may interpret a sequence of touch events as a selection of keys of a graphical keyboard presented at presence-sensitive display 112 and determine an individual character sequence corresponding to the selection of the keys, along with a spatial model score that indicates, with a degree of certainty, a likelihood that the sequence of touch events represents the selection of the keys. Alternatively, using stroke recognition techniques and a spatial model, character recognition module 122 may interpret the sequence of touch events as a sequence of strokes of handwritten text input and determine an individual character sequence corresponding to the sequence of strokes, along with spatial model scores that indicate, with a degree of certainty, a likelihood that the sequence of touch events represents the strokes of individual letters. Character recognition module 122 may determine, based on the spatial model scores, a total score indicating a likelihood that the sequence of touch events represents the text input.

Rather than merely outputting to text entry module 120 the textual sequence of characters derived by character recognition module 122 from a sequence of touch events, character recognition module 122 may perform additional analysis on the sequence of touch events to identify potential word boundaries and spelling or grammar errors associated with the text input. Character recognition module 122 may automatically insert spaces and correct potential errors in the sequence of characters derived from the touch events before outputting the text characters to text entry module 120 for presentation at presence-sensitive display 112.

In addition to aspects of the spatial model and/or the language model, character recognition module 122 may use temporal characteristics of the text input to divide the text input into one or more words. In operation, after receiving a first input of at least one first text character at an initial time that is detected at presence-sensitive display 112, computing device 100 may receive a second input of at least one second text character at a subsequent time that is detected at presence-sensitive display 112. For example, between initial times t0 and t5, presence-sensitive display 112 may detect initial handwritten text input as user 118 gestures the letters h-i-t at or near locations of presence-sensitive display 112. Between subsequent times t6 and t13, presence-sensitive display 112 may detect subsequent handwritten text input as user 118 gestures the letters h-e-r-e at or near locations of presence-sensitive display 112. Presence-sensitive display 112 may output information to text entry module 120 indicating the locations (e.g., x, y coordinate information) and times at which the initial and subsequent handwritten text inputs were detected by presence-sensitive display 112.

Text entry module 120 may assemble the location and time information received from presence-sensitive display 112 into a time-ordered sequence of touch events. Text entry module 120 can pass the sequence of touch events, or a pointer to a location in memory of computing device 100 where the sequence of touch events is stored, to character recognition module 122 for conversion into a sequence of text characters.

Using a spatial model and other stroke recognition techniques, character recognition module 122 may interpret the sequence of touch events received from text entry module 120 as a sequence of text strokes that form an individual sequence of characters. Character recognition module 122 may derive an overall score based at least in part on the spatial model scores assigned to the individual character sequences by the spatial model.

For example, character recognition module 122 may characterize portions of the touch events as defining different vertical strokes, horizontal strokes, curved strokes, diagonal strokes, arced strokes, and so forth. Character recognition module 122 may assign a score or rank to the potential characters that are most similar to the strokes defined by the touch events and combine the individual scores (e.g., as a product, sum, average, etc.) to determine a total score associated with the text input. The total score or rank may indicate a degree of likelihood or confidence that one or more of the touch events correspond to a stroke or stroke combination associated with a particular text character. Character recognition module 122 can generate a sequence of characters based at least in part on the total score or rank and other factors. For example, based on the touch events associated with times t0-t13, character recognition module 122 may define the sequence of characters as h-i-t-h-e-r-e.
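A minimal sketch of combining per-character scores into a total score follows, assuming log-probability summation (one of the combination options mentioned above); the function name and score values are illustrative, not from the disclosure.

```python
import math

def total_spatial_score(char_scores):
    """Combine per-character spatial-model scores into a total score.

    char_scores is a list of (character, probability) pairs, where each
    probability reflects how closely the detected strokes match that
    character. Summing log-probabilities is equivalent to multiplying the
    probabilities but avoids numerical underflow for long inputs.
    """
    return sum(math.log(p) for _, p in char_scores)

# Illustrative per-character scores for the strokes drawn between t0 and t13.
scores = [("h", 0.93), ("i", 0.88), ("t", 0.91), ("h", 0.90),
          ("e", 0.85), ("r", 0.89), ("e", 0.87)]
print(total_spatial_score(scores))
```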

Rather than merely outputting the sequence of characters generated from the touch event, character recognition module 122 may perform additional character and word recognition operations to more accurately determine the text characters that user 118 intends to enter at presence-sensitive display 112. The character recognition module 122 may determine, based at least in part on the at least one first text character and the at least one second text character, a first character sequence that does not include a space character between the at least one first text character and the at least one second text character and a second character sequence that includes a space character between the at least one first text character and the at least one second text character.
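The two candidate character sequences can be sketched as follows. This is a deliberate simplification: in the techniques described here, the language model may still insert spaces elsewhere in the first sequence (e.g., segmenting "hithere" as "hi there"). The function name and strings are illustrative only.

```python
def candidate_sequences(first_chars, second_chars):
    """Build the two candidate segmentations: a first character sequence
    with no space at the boundary between the two inputs, and a second
    character sequence with a space at that boundary."""
    no_space = first_chars + second_chars          # first character sequence
    with_space = first_chars + " " + second_chars  # second character sequence
    return no_space, with_space

print(candidate_sequences("hit", "here"))  # ('hithere', 'hit here')
```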

For example, character recognition module 122 may input the sequence of characters h-i-t-h-e-r-e into a language model that compares the sequence of characters to individual words and phrases in a dictionary (e.g., a lexicon). When the user provides handwritten text input that character recognition module 122 recognizes as h-i-t-h-e-r-e, the language model may assign a respective language model score or rank to each word or phrase in the dictionary that may potentially represent the text input that the user intended to input at presence-sensitive display 112. Using the respective language model score for each potential word or phrase and the total score determined from the touch events, character recognition module 122 may determine a respective "total" score for each potential word or phrase.

For example, the language model may recognize the phrases "hi there" and "hit here" as possible representations of the text character sequence. Because the phrase "hi there" is more common than the phrase "hit here" in English, the language model may assign a language model score to the phrase "hi there" that is higher than the language model score that the model assigns to the phrase "hit here". Character recognition module 122 may then assign a higher total score (i.e., a first score) to the phrase "hi there" than the total score (i.e., a second score) assigned to the phrase "hit here" by character recognition module 122. In other words, based on information stored in the dictionary and the language model, character recognition module 122 may determine that a first character sequence "hi there" (which does not include a space character between the letters h-i-t received between initial times t0 and t5 and the letters h-e-r-e received between subsequent times t6 and t13) is more likely to represent the handwritten text input received between times t0 and t13 than a second character sequence "hit here" (which does include a space character between the letters h-i-t received between initial times t0 and t5 and the letters h-e-r-e received between subsequent times t6 and t13).

To improve the accuracy of the text recognition techniques performed by character recognition module 122 and to better perform word segmentation, character recognition module 122 also uses temporal characteristics of the text input detected at presence-sensitive display 112 to determine the one or more individual words in the dictionary that are more likely to represent the text input. In particular, character recognition module 122 uses temporal characteristics of the text input to determine whether user 118 intends to enter a word break or "space" in the text input by determining whether user 118 pauses between entering successive characters in the sequence. Character recognition module 122 may determine whether a sufficient duration has elapsed between an initial portion of the text input associated with the end of an initial character and a subsequent portion of the text input associated with the beginning of a subsequent character, such that there is a high probability that the user intends to designate the portion of the text input between the initial and subsequent characters as a space or word break. Character recognition module 122 may infer that a shorter delay in receiving the text input associated with two consecutive characters is an indication that the user does not intend to specify a space or word boundary in the text input, and infer that a longer delay in receiving the text input associated with two consecutive characters is an indication that the user intends to enter a space or word boundary in the text input.

Character recognition module 122 may adjust the second score (e.g., the total score associated with "hit here") based on the duration between the initial time and the subsequent time to determine a third score associated with the second character sequence. For example, even though the language model of character recognition module 122 may determine that the phrase "hi there" is more common in English, and thus has a higher language model score, than the phrase "hit here", character recognition module 122 may increase the total score for the character sequence "hit here" due to the recognized pause between times t5 and t6, i.e., after user 118 draws the letter t and before user 118 draws the letter h. By adjusting the total score of the character sequence "hit here" in response to the pause, character recognition module 122 may assign a score to the phrase "hit here" that is higher than the score assigned to the phrase "hi there" by character recognition module 122. In this manner, character recognition module 122 may enable computing device 100 to receive an indication of a space or word break in the text input by recognizing a pause in the text input.
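An end-to-end sketch of this re-ranking follows, assuming illustrative scores, a single time threshold, and a fixed boost, none of which are specified by the disclosure.

```python
def rerank(total_scores, pause, time_threshold=0.4, boost=2.0):
    """Pick the best candidate after adjusting for the pause duration.

    total_scores maps each candidate to its language/spatial "total" score;
    the candidate containing the space at the input boundary is boosted
    when the pause exceeds the time threshold, yielding the "third score".
    """
    adjusted = dict(total_scores)
    if pause > time_threshold:
        adjusted["hit here"] = total_scores["hit here"] + boost
    return max(adjusted, key=adjusted.get)

scores = {"hi there": 5.0, "hit here": 4.0}  # language model favors "hi there"
print(rerank(scores, pause=0.1))  # short pause between t5 and t6 -> 'hi there'
print(rerank(scores, pause=0.9))  # long pause between t5 and t6  -> 'hit here'
```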

In response to determining that the third score (e.g., the adjusted score associated with "hit here") exceeds the first score (e.g., the score associated with "hi there"), computing device 100 may output, for display, an indication of the second character sequence. In other words, after character recognition module 122 adjusts the score for the character sequence "hit here" based on the temporal characteristics of the text input, character recognition module 122 may determine whether the adjusted score for the character sequence "hit here" exceeds the scores of other potential character sequences output from the language model. In the example of fig. 1, character recognition module 122 may determine that the adjusted score for "hit here" exceeds the score for "hi there" and output the sequence of characters "hit here" to text entry module 120 for presentation at presence-sensitive display 112.

Text entry module 120 may receive data indicating the character sequence "hit here" from character recognition module 122. Text entry module 120 may use the data from character recognition module 122 to generate an updated graphical user interface containing the characters "hit here" and send instructions to presence-sensitive display 112 for displaying the updated user interface (e.g., screenshot 114B).

In this manner, a computing device in accordance with the described techniques may predict word breaks or spaces in text input more accurately than other systems. By using temporal characteristics of text input to reinforce the output of the language model and/or spatial model and other components of a text input system, a computing device may improve the intuitiveness of text entry by allowing a user to more easily observe whether the computing device accurately interpreted the input. By predicting word breaks and space entries more accurately, the computing device may receive fewer inputs from the user to correct erroneous word breaks or space predictions. By receiving fewer inputs, the computing device may process fewer instructions and use less power. Thus, the computing device may receive text input more quickly and consume less battery power than other systems.

Fig. 2 is a block diagram illustrating a computing device 200 as an example computing device configured to divide text input into two or more words in accordance with one or more aspects of the present disclosure. Computing device 200 of fig. 2 is described below within the context of computing device 100 of fig. 1. In some examples, computing device 200 of fig. 2 represents an example of computing device 100 of fig. 1. Fig. 2 illustrates only one particular example of computing device 200, and many other examples of computing device 200 may be used in other instances and may include a subset of the components included in example computing device 200 or may include additional components not shown in fig. 2.

As shown in the example of fig. 2, computing device 200 includes a presence-sensitive display 212, one or more processors 240, one or more input components 242, one or more communication units 244, one or more output components 246, and one or more storage components 248. Presence-sensitive display 212 includes display component 202 and presence-sensitive input component 204.

The one or more storage components 248 of the computing device 200 are configured to store the text entry module 220 and the character recognition module 222, the character recognition module 222 further including a Temporal Model (TM) module 226, a Language Model (LM) module 224, and a Spatial Model (SM) module 228. Additionally, storage component 248 is configured to store dictionary data store 234A and threshold data store 234B. Data stores 234A and 234B may be collectively referred to herein as "data store 234".

Communication channel 250 may interconnect each of components 202, 204, 212, 220, 222, 224, 226, 228, 234, 240, 242, 244, 246, and 248 for inter-component communication (physically, communicatively, and/or operatively). In some examples, communication channel 250 may include a system bus, a network connection, an interprocess communication data structure, or any other method for transferring data.

One or more input components 242 of computing device 200 may receive input. Examples of inputs are tactile inputs, audio inputs, image inputs, and video inputs. In one example, input component 242 of computing device 200 includes a presence-sensitive display, a touch-sensitive screen, a mouse, a keyboard, a voice response system, a microphone, or any other type of device for detecting input from a human or machine. In some examples, input component 242 includes one or more sensor components, such as one or more location sensors (GPS component, Wi-Fi component, cellular component), one or more temperature sensors, one or more movement sensors (e.g., accelerometer, gyroscope), one or more pressure sensors (e.g., barometer), one or more ambient light sensors, and one or more other sensors (e.g., microphone, camera, video camera, body camera, eyewear, or other camera devices operatively coupled to computing device 200, infrared proximity sensors, hygrometers, etc.).

One or more output components 246 of the computing device 200 may generate output. Examples of outputs are tactile outputs, audio outputs, still image outputs, and video outputs. Output component 246 of computing device 200, in one example, includes a presence-sensitive display, a sound card, a video graphics adapter card, a speaker, a Cathode Ray Tube (CRT) monitor, a Liquid Crystal Display (LCD), or any other type of device for generating output to a human or machine.

The one or more communication units 244 of the computing device 200 may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals over one or more networks. For example, the communication unit 244 may be configured to communicate over a network with a remote computing system that processes text input and performs word segmentation of the text input using temporal and language model characteristics as described herein. In response to outputting, via communication unit 244, an indication of a sequence of touch events for transmission to a remote computing system, modules 220 and/or 222 may receive, via communication unit 244, an indication of a sequence of characters from the remote computing system. Examples of communication unit 244 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of the communication unit 244 may include short wave radios, cellular data radios, wireless network radios, and Universal Serial Bus (USB) controllers.

Presence-sensitive display 212 of computing device 200 includes display component 202 and presence-sensitive input component 204. Display component 202 may be a screen on which presence-sensitive display 212 displays information, and presence-sensitive input component 204 may detect objects at and/or near display component 202. As one example range, presence-sensitive input component 204 may detect an object, such as a finger or stylus, that is 2 inches or less from display component 202. Presence-sensitive input component 204 may determine the location (e.g., [x, y] coordinates) of display component 202 at which the object was detected. In another example range, presence-sensitive input component 204 may detect objects that are 6 inches or less from display component 202, and other ranges are possible. Presence-sensitive input component 204 may determine the location of display component 202 selected by the user's finger by using capacitive recognition techniques, inductive recognition techniques, and/or optical recognition techniques. In some examples, presence-sensitive input component 204 also provides output to the user through the use of tactile stimuli, audio stimuli, or video stimuli as described with respect to display component 202. In the example of fig. 2, presence-sensitive display 212 may present a user interface (such as a graphical user interface for receiving text input and outputting character sequences inferred from the text input, as shown in screen shots 114A and 114B in fig. 1).

Although presence-sensitive display 212 is illustrated as an internal component of computing device 200, presence-sensitive display 212 may also represent an external component that shares a data path with computing device 200 for transmitting and/or receiving input and output. For example, in one example, presence-sensitive display 212 represents a built-in component of computing device 200 (e.g., a screen on a mobile phone) that is located within and physically connected to an external enclosure of computing device 200. In another example, presence-sensitive display 212 represents an external component of computing device 200 that is located outside of a package or housing of computing device 200 and is physically separate from the package or housing of computing device 200 (e.g., a monitor, projector, etc. that shares a wired and/or wireless data path with computing device 200).

Presence-sensitive display 212 of computing device 200 may receive tactile input from a user of computing device 200. Presence-sensitive display 212 may receive indications of tactile input by detecting one or more tap or non-tap gestures from a user of computing device 200 (e.g., the user touching or pointing to one or more locations of presence-sensitive display 212 with a finger or stylus). Presence-sensitive display 212 may present output to a user. Presence-sensitive display 212 may present the output as a graphical user interface (e.g., as screen shots 114A and 114B of fig. 1) that may be associated with the functionality provided by the various functions of computing device 200. For example, presence-sensitive display 212 may present various user interfaces for components of a computing platform, operating system, application, or service (e.g., an electronic messaging application, a navigation application, an internet browser application, a mobile operating system, etc.) executing at computing device 200 or otherwise accessible to computing device 200. A user may interact with a corresponding user interface to cause computing device 200 to perform operations relating to one or more different functions. For example, text entry module 220 may cause presence-sensitive display 212 to present a graphical user interface associated with text input functionality of computing device 200. A user of computing device 200 may view output presented as feedback associated with text input functions and provide input to presence-sensitive display 212 to compose additional text by using the text input functions.

Presence-sensitive display 212 of computing device 200 may detect two-dimensional and/or three-dimensional gestures as input from a user of computing device 200. For example, a sensor of presence-sensitive display 212 may detect movement of a user (e.g., moving a hand, arm, pen, stylus, etc.) within a threshold distance of the sensor of presence-sensitive display 212. Presence-sensitive display 212 may determine a two-dimensional or three-dimensional vector representation of the movement and associate the vector representation with a gesture input having multiple dimensions (e.g., waving, pinching, clapping, stroking, etc.). In other words, presence-sensitive display 212 may detect multi-dimensional gestures without requiring a user to gesture at or near a screen or surface on which presence-sensitive display 212 outputs information for display. Rather, presence-sensitive display 212 may detect multi-dimensional gestures performed at or near a sensor that may or may not be located near a screen or surface on which presence-sensitive display 212 outputs information for display.

The one or more processors 240 may implement functions and/or execute instructions associated with the computing device 200. Examples of processor 240 include an application processor, a display controller, an auxiliary processor, one or more sensor hubs, and any other hardware configured to act as a processor, processing unit, or processing device. Modules 220, 222, 224, 226, and 228 are operable by processor 240 to perform various actions, operations, or functions of computing device 200. For example, processor 240 of computing device 200 may retrieve and execute instructions stored by storage component 248 that cause processor 240 to execute operational modules 220, 222, 224, 226, and 228. The instructions, when executed by the processor 240, may cause the computing device 200 to store information within the storage component 248.

One or more storage components 248 within computing device 200 may store information for processing during operation of computing device 200 (e.g., computing device 200 may store data accessed by modules 220, 222, 224, 226, and 228 during execution at computing device 200). In some examples, storage component 248 is a temporary memory, meaning that the primary purpose of storage component 248 is not long-term storage. Storage component 248 on computing device 200 may be configured as a volatile memory for short-term storage of information and, therefore, will not retain stored content if the storage component is powered down. Examples of volatile memory include Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), and other forms of volatile memory known in the art.

In some examples, storage component 248 also includes one or more computer-readable storage media. In some examples, storage component 248 includes one or more non-transitory computer-readable storage media. Storage component 248 may be configured to store larger amounts of information than is typically stored by volatile memory. Storage component 248 may be further configured as a non-volatile memory space for long-term storage of information and to retain information after power up/down cycles. Examples of non-volatile memory include magnetic hard disks, optical disks, floppy disks, flash memory, or forms of electrically programmable memory (EPROM) or Electrically Erasable and Programmable (EEPROM) memory. Storage component 248 may store program instructions and/or information (e.g., data) associated with modules 220, 222, 224, 226, and 228 and data store 234.

Text entry module 220 may include all of the functionality of text entry module 120 of computing device 100 of fig. 1 and may perform similar operations as text entry module 120 for managing a user interface provided by computing device 200 at presence-sensitive display 212 for handling text input from a user. Text entry module 220 can send information through communication channel 250 that causes display component 202 of presence-sensitive display 212 to present a graphical keyboard or handwriting input feature as part of a graphical user interface (e.g., screenshot 114A) through which a user, such as user 118, may provide text input (e.g., a sequence of text characters) by providing tap and non-tap gestures at presence-sensitive input component 204. Text entry module 220 may cause display component 202 to present a trace or "ink" that corresponds to the locations of presence-sensitive input component 204 at which the text input was received (e.g., screenshot 114A), and may also cause display component 202 to display the individual characters inferred from the text input by character recognition module 222 as a graphical output (e.g., screenshot 114B).

Character recognition module 222 may include all of the functionality of character recognition module 122 of computing device 100 of fig. 1 and may perform operations similar to character recognition module 122 to perform character-level and/or word-level recognition operations on a sequence of touch events determined by text entry module 220 from text input provided at presence-sensitive display 212. Character recognition module 222 performs the character-level and/or word-level recognition operations on the touch events by using SM module 228, LM module 224, and TM module 226.

Threshold data store 234B may include one or more temporal, distance, or space-based thresholds, probability thresholds, or other comparison values that character recognition module 222 uses to infer characters from text input. The thresholds stored at threshold data store 234B can be variable thresholds (e.g., based on a function or a lookup table) or fixed values. For example, threshold data store 234B may include a first time threshold (e.g., 400 milliseconds) and a second time threshold (e.g., 1 second). Character recognition module 222 may compare the duration of a pause between successive character inputs to each of the first and second thresholds. If the duration of the pause satisfies the first threshold (e.g., greater than 400 milliseconds), character recognition module 222 may increase the probability or score of a character sequence including a word break or space corresponding to the pause by a first amount. If the duration of the pause satisfies the second threshold (e.g., greater than 1 second), character recognition module 222 may increase the probability or score of the character sequence including the word break or space corresponding to the pause by a second amount that exceeds the first amount. If the duration of the pause satisfies neither the first threshold nor the second threshold (e.g., less than 400 milliseconds), character recognition module 222 may reduce the probability or score of the character sequence including the word break or space corresponding to the pause.
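A minimal sketch of this two-threshold adjustment follows; the threshold values track the examples in the paragraph above, while the boost and penalty amounts are assumptions not fixed by the disclosure.

```python
FIRST_TIME_THRESHOLD = 0.4   # seconds (400 milliseconds), per the example above
SECOND_TIME_THRESHOLD = 1.0  # seconds
FIRST_BOOST = 1.0            # illustrative amounts; not fixed by the disclosure
SECOND_BOOST = 3.0           # exceeds FIRST_BOOST, as described above
SHORT_PAUSE_PENALTY = 1.0

def adjust_for_pause(score, pause):
    """Adjust the score of a candidate that places a word break at the pause."""
    if pause > SECOND_TIME_THRESHOLD:
        return score + SECOND_BOOST       # strong evidence of an intended space
    if pause > FIRST_TIME_THRESHOLD:
        return score + FIRST_BOOST        # moderate evidence of an intended space
    return score - SHORT_PAUSE_PENALTY    # short pause: likely no space intended

print(adjust_for_pause(4.0, 1.2))   # 7.0
print(adjust_for_pause(4.0, 0.5))   # 5.0
print(adjust_for_pause(4.0, 0.1))   # 3.0
```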

In some examples, the thresholds stored at threshold data store 234B may be variable thresholds and may change dynamically over time. For example, based on previous inputs, character recognition module 222 may intelligently learn (e.g., by using a machine learning system) characteristics of typical inputs from user 118 and modify the thresholds stored at threshold data store 234B according to the learned characteristics of user 118. For example, character recognition module 222 may determine one threshold value stored at the data store based on the amount of time user 118 typically spends between entering different letters, words, and phrases, and may determine another threshold value stored at the data store based on the amount of time user 118 typically spends between entering different letters of the same word.

In some examples, the amount by which character recognition module 222 increases or decreases the probability or score of a character sequence may be determined as one or more functions of the pause duration. For example, character recognition module 222 may determine a first amount by which to increase the score of a character sequence from a first data set (e.g., based on a first function of the duration or from a first lookup table of values). Character recognition module 222 may determine a second amount by which to increase the score of the character sequence from a second data set (e.g., based on a second function of the duration or from a second lookup table of values). As explained in more detail with respect to fig. 3, the first data set and the second data set may represent two disjoint data sets that are separated by at least an order of magnitude. In some examples, the order of magnitude may be a factor (e.g., 10) or an offset (e.g., a fixed amount). For example, if the pause duration is greater than or equal to 1 second, character recognition module 222 may increase the score by a greater amount than the increase character recognition module 222 applies for pauses lasting less than 1 second.
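One way to sketch the two disjoint data sets is as two functions of pause duration separated by a factor of 10; the specific functions and threshold below are illustrative assumptions.

```python
def small_increase(pause):
    """First data set: modest increases for pauses below the threshold."""
    return 0.5 * pause          # illustrative function of duration

def large_increase(pause):
    """Second data set: increases separated from the first by a factor of 10."""
    return 10.0 * 0.5 * pause

def score_increase(pause, threshold=1.0):
    """Select an increase from one of the two disjoint data sets."""
    return large_increase(pause) if pause >= threshold else small_increase(pause)

print(score_increase(0.8))  # 0.4 -- drawn from the first data set
print(score_increase(1.2))  # 6.0 -- drawn from the second data set
```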

SM module 228 can receive the sequence of touch events as input and output the character or sequence of characters most likely to represent the sequence of touch events, along with a degree of certainty or spatial model score that indicates a likelihood that the sequence of touch events defines the sequence of characters. In other words, SM module 228 may perform handwriting recognition techniques to infer touch events as strokes and strokes as characters, and/or to infer touch events as selections of or gestures at keys of a keyboard and those selections or gestures as characters of a word. Character recognition module 222 can use the spatial model scores output from SM module 228 in determining the total scores for the one or more potential words output by character recognition module 222 in response to the text input.

LM module 224 may receive a sequence of characters as input and output one or more candidate words or word pairs that LM module 224 identifies from dictionary data store 234A as potential replacements for the sequence of characters in a language context (e.g., a sentence in a written language). For example, LM module 224 may assign language model probabilities to one or more candidate words or word pairs located at dictionary data store 234A that include at least some of the same characters as the input sequence of characters. The language model probability assigned to each of the one or more candidate words or word pairs indicates a degree of certainty or likelihood that the candidate word or word pair is typically found after, before, and/or within a sequence of words (e.g., a sentence) generated from the text input detected by presence-sensitive input component 204 before and/or after receipt of the current sequence of characters analyzed by LM module 224.

Dictionary data store 234A may include one or more ordered databases (e.g., hash tables, linked lists, ordered arrays, graphs, etc.) that represent dictionaries of one or more written languages. Each dictionary may include a list of words and phrases within a written language vocabulary (e.g., including grammar, slang, and colloquial word usage). LM module 224 of character recognition module 222 may perform a lookup of a sequence of characters in dictionary data store 234A by comparing portions of the sequence to each word in dictionary data store 234A. LM module 224 may assign a similarity coefficient (e.g., a Jaccard similarity coefficient) to each word in dictionary data store 234A based on the comparison and determine the candidate word or words having the greatest similarity coefficients from dictionary data store 234A. In other words, the candidate word or words having the greatest similarity coefficients may represent the potential words in dictionary data store 234A having spellings most closely associated with the spelling of the sequence of characters. LM module 224 may determine one or more candidate words that include some or all of the characters in the character sequence and determine that the one or more candidate words having the highest similarity coefficients represent potentially corrected spellings of the character sequence. In some examples, the candidate word with the highest similarity coefficient matches the sequence of characters generated from the sequence of touch events. For example, candidate words for the character sequence h-i-t-h-e-r-e may include "hi", "hit", "here", "hi there", and "hit here".
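A sketch of the similarity-based lookup follows, assuming the Jaccard coefficient is computed over character sets (the disclosure does not specify the feature set over which the coefficient is computed); the toy lexicon and function names are illustrative.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity coefficient over the sets of characters in a and b."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def best_candidates(sequence, lexicon, k=3):
    """Return the k lexicon entries most similar to the character sequence."""
    return sorted(lexicon,
                  key=lambda w: jaccard(sequence, w.replace(" ", "")),
                  reverse=True)[:k]

lexicon = ["hi", "hit", "here", "her", "hi there", "hit here"]
print(best_candidates("hithere", lexicon))  # 'hi there' and 'hit here' rank highest
```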

LM module 224 may be an n-gram language model. The n-gram language model may provide a probability distribution for a term xi (a letter or word) in a sequence of consecutive terms based on the previous terms in the sequence (i.e., P(xi | xi-(n-1), ..., xi-1)) or based on the subsequent terms in the sequence (i.e., P(xi | xi+1, ..., xi+(n-1))). Similarly, an n-gram language model may provide a probability distribution for a term xi based on both the previous terms and the subsequent terms in the sequence (i.e., P(xi | xi-(n-1), ..., xi-1, xi+1, ..., xi+(n-1))). For example, a bigram language model (an n-gram model where n = 2) may provide a first probability that the word "here" follows the word "hi" in a sequence (i.e., a sentence) and a different probability that the word "here" follows the word "hit" in a different sentence. A trigram language model (an n-gram model where n = 3) may provide a probability that a particular word follows the two preceding words in the sequence.
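A toy sketch of estimating a bigram probability from counts follows; the corpus and function names are illustrative, and a real system would use smoothing and a far larger corpus.

```python
from collections import Counter

# Toy corpus; a real deployment would use a much larger text collection.
corpus = "hi there my friend hi there again hit here please".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev_word, word):
    """Estimate P(word | prev_word) from counts; assumes prev_word occurs."""
    return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

print(bigram_prob("hi", "there"))   # 1.0 in this toy corpus
print(bigram_prob("hit", "here"))   # 1.0 in this toy corpus
```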

In response to receiving the sequence of characters, LM module 224 may output one or more words and word pairs from dictionary data store 234A having the highest similarity coefficients and the highest language model scores for the sequence. Character recognition module 222 may perform further operations to determine which highest-ranked word or word pair to output to text entry module 220 as the sequence of characters that best represents the sequence of touch events received from text entry module 220. Character recognition module 222 may combine the language model scores output from LM module 224 with the spatial model scores output from SM module 228 to derive a total score indicating how well each highest-ranked word or word pair in dictionary data store 234A represents the sequence of touch events defined by the text input.

To improve the word segmentation capabilities of character recognition module 222 and to detect word breaks or spaces in the text input, TM module 226 may further analyze, on behalf of character recognition module 222, the touch events received from text entry module 220 and adjust, if necessary, the corresponding total scores associated with the one or more candidate words output from LM module 224. TM module 226 may determine the start and end time components associated with each character in the sequence of characters that character recognition module 222 infers from the sequence of touch events received from text entry module 220. Based on the start and end time components associated with each character in the sequence of characters, TM module 226 may determine the duration that elapses after user 118 completes one character and before user 118 begins a subsequent character. TM module 226 may determine that a longer duration between consecutive characters indicates an intended word break or space in the text input and that a shorter duration indicates no intended word break or space in the text input.
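
As an illustration of the duration computation attributed to TM module 226, the following sketch assumes per-character start and end timestamps in milliseconds; the event format and function name are assumptions for illustration only.

```python
# A minimal sketch of computing pauses between consecutive characters.
def pause_durations(char_times: list) -> list:
    """Given (char, start_ms, end_ms) tuples, return the pause before each
    character after the first: start of next minus end of previous."""
    return [
        (prev_char, next_char, next_start - prev_end)
        for (prev_char, _, prev_end), (next_char, next_start, _) in zip(
            char_times, char_times[1:]
        )
    ]

events = [("h", 0, 90), ("i", 130, 200), ("t", 240, 320), ("h", 1400, 1480)]
for prev_c, next_c, gap in pause_durations(events):
    print(f"{prev_c} -> {next_c}: {gap} ms")
# The 1080 ms gap between "t" and the second "h" suggests an intended word
# break; the 40 ms gaps do not.
```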

TM module 226 may promote (boost) the total score of a phrase having a space or word break at a location corresponding to a longer pause in the text input. In some examples, TM module 226 may likewise promote the total score of a phrase having no space or word break at a location corresponding to a shorter pause in the text input.

In addition to spatial, temporal, and language model features, character recognition module 222 may also rely on other characteristics of the text input to infer the intended characters of the text input. For example, character recognition module 222 may rely on other spatial or distance characteristics of the text input to determine the character sequences that are more likely to represent the text input. Character recognition module 222 may infer that user 118 may wish to insert a word break or space between two consecutive characters of the text input when presence-sensitive input component 204 detects that the locations of presence-sensitive input component 204 at which the two consecutive characters were entered are separated by a greater distance.

For example, in response to determining that the distance between the characters bounding a potential space or word break satisfies a distance threshold, character recognition module 222 may increase the score of the sequence of characters that includes the space or word break based on the distance between the two consecutive portions of the text input. Conversely, in response to determining that the distance between the characters bounding the potential space or word break does not satisfy the distance threshold, character recognition module 222 may decrease the score of the sequence of characters that includes the space or word break based on the distance between the two consecutive portions of the text input.
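
A sketch of the distance-based adjustment follows; the threshold value and the additive boost and penalty are illustrative assumptions, not values taken from this disclosure.

```python
# A hedged sketch of adjusting a spaced candidate's score by stroke distance.
import math

DISTANCE_THRESHOLD_PX = 60.0  # assumed threshold
DISTANCE_BOOST = 2.0          # assumed additive adjustment

def adjust_for_distance(score: float, loc_a: tuple, loc_b: tuple) -> float:
    """Raise the score of a spaced candidate if consecutive input portions
    are far apart on the input surface; lower it otherwise."""
    distance = math.dist(loc_a, loc_b)
    if distance >= DISTANCE_THRESHOLD_PX:
        return score + DISTANCE_BOOST
    return score - DISTANCE_BOOST

# Example: the last stroke of "hit" ends at (110, 40); "here" begins at (300, 42).
print(adjust_for_distance(5.0, (110, 40), (300, 42)))  # boosted to 7.0
```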

In this manner, a computing device operating in accordance with the described techniques may predict where to insert a space into text input depending on temporal information, language models, and spatial information associated with the text input. Any other combination of temporal, linguistic, and spatial information may also be used, including a machine-learned function of measurements taken from the two portions of the input before and after a potential space.

In some examples, the computing device may use a weighted combination of the temporal and language models and the spatial distances, and in some examples, the computing device may use a time-based boost. In other words, if the computing device determines that the user has waited more than a certain amount of time between writing two consecutive characters or groups of characters, the computing device may determine that the word likely ended before the pause. The computing device may compare the duration of the pause to a fixed threshold and add a relatively large amount to the language model score of the character sequence that includes a space at that point.
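
A minimal sketch of this time-based boost follows; the threshold and boost values are assumptions for illustration, not values stated in this disclosure.

```python
# Compare the pause against a fixed threshold and, if exceeded, add a
# large constant to the language model score of the spaced candidate.
PAUSE_THRESHOLD_MS = 400  # assumed fixed threshold
TIME_BOOST = 10.0         # assumed "relatively large addition"

def boost_spaced_score(lm_score: float, pause_ms: int) -> float:
    """Return the adjusted score for the candidate containing a space."""
    if pause_ms > PAUSE_THRESHOLD_MS:
        return lm_score + TIME_BOOST
    return lm_score

print(boost_spaced_score(3.2, pause_ms=1080))  # 13.2: the space likely wins
print(boost_spaced_score(3.2, pause_ms=40))    # 3.2: no adjustment
```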

In some examples, the computing device may automatically tune the weights of the temporal model, the language model, and the spatial signals by using minimum error rate training (MERT). Using MERT, the computing device may automatically adjust the parameters to minimize the error rate on a set of tuning samples. For example, the computing device may collect training samples of users writing multiple words on the particular type of device to be tuned (e.g., a phone, watch, tablet computer, etc.). In other examples, the computing device may collect training samples from an external data set (e.g., during training of an entire system or service on which the computing device depends but that is separate from the computing device).
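
In the spirit of MERT, a simple parameter search over tuning samples can be sketched as follows. Real MERT performs line searches in weight space, so this grid search is only a stand-in, and the sample format is invented for illustration.

```python
# A hedged sketch of error-rate-minimizing tuning over a grid of settings.
import itertools

samples = [
    # (pause_ms, lm_gap, truth), where lm_gap = spaced_lm - unspaced_lm and
    # truth is whether a space was actually intended.
    (1080, -1.0, True), (40, -1.0, False), (600, 0.5, True), (120, 2.0, True),
]

def errors(threshold: float, boost: float) -> int:
    """Count segmentation errors on the tuning samples for one setting."""
    wrong = 0
    for pause_ms, lm_gap, truth in samples:
        score_gap = lm_gap + (boost if pause_ms > threshold else 0.0)
        predicted_space = score_gap > 0
        wrong += predicted_space != truth
    return wrong

best = min(
    itertools.product([200, 400, 800], [0.5, 2.0, 10.0]),
    key=lambda tb: errors(*tb),
)
print("threshold_ms, boost =", best, "errors =", errors(*best))
```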

In some examples, the computing device may remove previously written strokes, or previously output ink, from display when a time threshold associated with the boost has elapsed. Likewise, the computing device may provide a further indication that character recognition of the text input has been finalized (e.g., in the context of a scrolling handwriting pane, previously written strokes may move out of view, so the user immediately becomes aware that writing new content will start a new word).
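
The stroke-finalization behavior can be sketched as a timer check; the class, the commit threshold, and the placeholder rendering action are assumptions for illustration.

```python
# A hedged sketch of clearing committed ink after a long-enough pause.
import time

COMMIT_THRESHOLD_S = 1.0  # assumed: on the order of the boost threshold

class HandwritingPane:
    def __init__(self):
        self.last_stroke_end = time.monotonic()
        self.committed = False

    def on_stroke_end(self):
        """Record when the most recent stroke finished."""
        self.last_stroke_end = time.monotonic()
        self.committed = False

    def on_tick(self):
        """Called periodically; finalizes recognition after a long pause."""
        pause = time.monotonic() - self.last_stroke_end
        if not self.committed and pause >= COMMIT_THRESHOLD_S:
            self.committed = True
            # Placeholder for scrolling the finalized strokes out of view.
            print("finalize recognition; scroll strokes out of view")
```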

In some examples, computing devices may utilize a sliding or continuous gesture keyboard to perform similar character recognition techniques. That is, while gestures are entered, the computing device may infer the end of a word when the user stops a gesture. However, in some examples, the computing device may decline to treat the stop as the end of a word, for example, if the break between two gestures is very short. A particular advantage of this technique may be in providing continuous gestures, via a gesture keyboard, for certain languages that allow long compound words (e.g., German).

In other words, while gestures are entered, some computing systems may insert a space after each gesture is completed. For languages like German that build long compound words, inserting a space after the end of each gesture can result in too many spaces in the text input. A computing device according to the described techniques may avoid inserting a space between two consecutive gesture inputs if the time between the two gestures is very short. In some examples, for languages with few spaces, the computing device may default to not inserting a space after a gesture and only insert a space if a sufficiently long pause occurs between consecutive gestures.
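
The gesture-pause policy can be sketched as follows; the threshold value and the language flag are illustrative assumptions, not details from this disclosure.

```python
# A hedged sketch: insert spaces between gestured words only after long pauses.
GESTURE_PAUSE_THRESHOLD_MS = 700  # assumed

def join_gesture_words(words, pauses_ms, compounding_language=True):
    """Join per-gesture words, inserting spaces only after long pauses
    when the language favors compounds (e.g., German)."""
    text = words[0]
    for word, pause in zip(words[1:], pauses_ms):
        if not compounding_language or pause > GESTURE_PAUSE_THRESHOLD_MS:
            text += " " + word
        else:
            text += word  # short pause: treat as one compound word
    return text

# "haus" + "tür" gestured with a 150 ms break stays one compound word:
print(join_gesture_words(["haus", "tür"], [150]))   # "haustür"
print(join_gesture_words(["haus", "tür"], [1200]))  # "haus tür"
```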

FIG. 3 is a conceptual diagram of graph 300, an example distribution of total score increases that vary based on the duration between portions of text input, in accordance with one or more techniques of this disclosure. For purposes of illustration, FIG. 3 is described below within the context of computing device 100 of FIG. 1.

Graph 300 is composed of data set 310A and data set 310B. Both data sets 310A and 310B represent an increase in total score as a function of time, where time corresponds to the duration of a pause between successive characters of the text input. Data sets 310A and 310B are two disjoint data sets that are at least one order of magnitude apart (the difference is referred to as a "boost"). In some examples, the order of magnitude may be a factor (e.g., 10) or an offset (e.g., a fixed amount). In some examples, the order of magnitude may be such that the increase defined by data set 310B is sufficiently high that the resulting total score of the candidate string is at least approximately equal to 100%.

Character recognition module 122 may rely on functions representing data sets 310A and 310B to calculate the increase in the total score of a sequence of characters having a space or word break corresponding to a pause in the text input. For example, character recognition module 122 may identify a pause between times t5 and t6 in the sequence of touch events associated with the text input received by computing device 100.

In response to determining that the duration between times t5 and t6 satisfies the first level threshold, character recognition module 122 may increase the total score of the character sequence "hit here" by a first amount corresponding to the amount at point 312A in graph 300, based on the duration between times t5 and t6. In response to determining that the duration between times t5 and t6 satisfies the second level threshold, character recognition module 122 may increase the total score of the character sequence "hit here" by a second amount corresponding to the amount at point 312B in graph 300, based on the duration between times t5 and t6.

As shown in FIG. 3, the first amount at point 312A comes from data set 310A, and the second amount at point 312B comes from data set 310B. In this manner, if the duration associated with the pause satisfies a first time threshold (e.g., 400 milliseconds), character recognition module 122 may make it more likely that the sequence of characters derived from the text input will include a space between the consecutive characters surrounding the pause. Additionally, if the duration associated with the pause satisfies a second time threshold (e.g., 1 second), character recognition module 122 may make it almost certain that the sequence of characters derived from the text input will include a space between the consecutive characters surrounding the pause.
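
The two-level boost depicted by graph 300 can be sketched as a step function; the 400 millisecond and 1 second thresholds come from the text above, while the boost magnitudes are assumptions for illustration.

```python
# A minimal sketch of a two-level boost with amounts from two disjoint
# ranges roughly an order of magnitude apart.
FIRST_THRESHOLD_MS = 400
SECOND_THRESHOLD_MS = 1000

def score_boost(pause_ms: int) -> float:
    """Return the total-score increase for a spaced candidate."""
    if pause_ms >= SECOND_THRESHOLD_MS:
        return 100.0  # data set 310B: a space is all but certain
    if pause_ms >= FIRST_THRESHOLD_MS:
        return 10.0   # data set 310A: a space becomes more likely
    return 0.0

for pause in (200, 600, 1500):
    print(pause, "ms ->", score_boost(pause))
```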

FIG. 4 is a flow diagram illustrating example operations performed by an example computing device configured to divide text input into two or more words, in accordance with one or more aspects of the present disclosure. The process of FIG. 4 may be performed by one or more processors of a computing device, such as computing device 100 of FIG. 1 and/or computing device 200 of FIG. 2. In some examples, the steps of the process of FIG. 4 may be repeated, omitted, and/or performed in any order. For purposes of illustration, FIG. 4 is described below within the context of computing device 100 of FIG. 1.

In the example of FIG. 4, computing device 100 may receive (400) a first input of at least one first text character at an initial time, and computing device 100 may receive (410) a second input of at least one second text character at a subsequent time. For example, presence-sensitive display 112 may detect that user 118 provided the initial text input when user 118 gestured to draw or write the letters h-i-t at or near presence-sensitive display 112 between times t0 and t5. Presence-sensitive display 112 may detect that user 118 provided the subsequent text input when user 118 gestured to draw or write the letters h-e-r-e at or near presence-sensitive display 112 between times t6 and t13.

Computing device 100 may determine (420) a first score for a first character sequence that does not include a space character between the at least one first character and the at least one second character. Computing device 100 may determine (430) a second score for a second character sequence that includes a space character between the at least one first character and the at least one second character. For example, text entry module 120 may process the initial text input and the subsequent text input into a sequence of touch events that defines the times and locations at which presence-sensitive display 112 detected user 118 drawing the letters h-i-t-h-e-r-e. The spatial model of character recognition module 122 may generate a sequence of characters, along with scores associated with the touch events, based on the sequence of touch events, and input the sequence of characters into a language model. The language model of character recognition module 122 may output the two candidate strings "hi there" and "hit here" as potential character strings that user 118 intended to enter. Character recognition module 122 may assign a first score to the candidate string "hi there" and may assign a second score to the candidate string "hit here". The first score may be based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score may be based on at least one of a second language model score or a second spatial model score associated with the second character sequence.

Computing device 100 may adjust (440) the second score based on the duration between the initial time and the subsequent time to determine a third score for the second character sequence. For example, character recognition module 122 may compare the amount of time between time t5 (when user 118 completed entering the initial text input associated with the letters h-i-t) and time t6 (when user 118 began entering the subsequent text input associated with the letters h-e-r-e) to one or more time thresholds indicating an intended word break or space in the text input. If the pause between times t5 and t6 satisfies the one or more time thresholds, character recognition module 122 may increase the second score to determine the third score for the candidate character string "hit here".
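
Steps (420) through (450) can be summarized in a short sketch; all numeric scores, the threshold, and the boost value are invented for illustration.

```python
# A hedged end-to-end sketch: score both candidates, boost the spaced one
# for the pause, and pick the winner.
def choose_candidate(first_score: float, second_score: float,
                     pause_ms: int, threshold_ms: int = 400,
                     boost: float = 10.0) -> str:
    third_score = second_score + (boost if pause_ms > threshold_ms else 0.0)
    return "hit here" if third_score > first_score else "hi there"

# "hi there" starts ahead on language model score alone, but the long
# pause between t5 and t6 promotes "hit here":
print(choose_candidate(first_score=8.0, second_score=3.0, pause_ms=1080))
# -> "hit here"
```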

Computing device 100 may determine (450) whether the third score exceeds the first score. For example, after adjusting the scores based on the temporal characteristics of the text input, character recognition module 122 may output the candidate character string with the highest score to text entry module 120.

If, after adjusting for the pause, the third score exceeds the first score, computing device 100 may output (460) an indication of the second sequence of characters. For example, character recognition module 122 may output the string "hit here" to text entry module 120, such that text entry module 120 may cause presence-sensitive display 112 to display the phrase "hit here" (e.g., as screenshot 114B).

However, if the third score does not exceed the first score after adjusting for the pause, computing device 100 may refrain (470) from outputting an indication of the second character sequence and may instead output an indication of the first character sequence. For example, character recognition module 122 may output the character string "hi there" to text entry module 120 such that text entry module 120 may cause presence-sensitive display 112 to display the more common phrase "hi there" regardless of the pause between times t5 and t6.

Clause 1. A method, comprising: receiving, by a computing device, a second input of at least one second text character at a subsequent time after receiving a first input of at least one first text character at an initial time; determining, by the computing device, a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes a space character between the at least one first text character and the at least one second text character, and the first character sequence does not include the space character between the at least one first text character and the at least one second text character; determining, by the computing device, a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjusting, by the computing device, the second score based on a duration of time between the initial time and the subsequent time to determine a third score associated with the second character sequence; and outputting, by the computing device and for display, an indication of the second character sequence in response to determining that the third score exceeds the first score.

Clause 2. The method of clause 1, wherein adjusting the second score comprises: increasing, by the computing device, the second score based on the duration to determine the third score.

Clause 3. The method of clause 2, wherein increasing the second score comprises: in response to determining that the duration satisfies a first level threshold, increasing, by the computing device, the second score by a first amount based on the duration; and in response to determining that the duration satisfies a second level threshold, increasing, by the computing device, the second score by a second amount based on the duration.

Clause 4. The method of clause 3, wherein increasing the second score further comprises: determining, by the computing device, the first amount from a first data set; and determining, by the computing device, the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets that are separated by at least an order of magnitude.

Clause 5. The method of any one of clauses 1-4, wherein: receiving the first input comprises: detecting, by the computing device, a first selection of one or more keys of a keyboard; and receiving the second input comprises: detecting, by the computing device, a second selection of the one or more keys of the keyboard.

Clause 6. The method of clause 5, wherein the keyboard is a graphical keyboard or a physical keyboard.

Clause 7. The method of any one of clauses 1-4, wherein: receiving the first input comprises: detecting, by the computing device, a first handwriting input at a presence-sensitive input device; and receiving the second input comprises: detecting, by the computing device, a second handwriting input at the presence-sensitive input device.

Clause 8. The method of clause 7, further comprising: determining, by the computing device and based on the first handwriting input, a first location of the presence-sensitive input device at which the first input of the at least one first text character was received; determining, by the computing device and based on the second handwriting input, a second location of the presence-sensitive input device at which the second input of the at least one second text character was received; and adjusting, by the computing device, the second score based on a distance between the first location and the second location to determine the third score.

Clause 9. The method of clause 8, wherein adjusting the second score comprises: in response to determining that the distance satisfies a distance threshold, increasing, by the computing device, the second score based on the distance; and in response to determining that the distance does not satisfy the distance threshold, decreasing, by the computing device, the second score based on the distance.

Clause 10. The method of any of clauses 1-9, further comprising: in response to determining that the first score exceeds the third score: refraining from outputting an indication of the second character sequence; and outputting, by the computing device and for display, an indication of the first character sequence.

Clause 11. A computing device, comprising: a presence-sensitive display; at least one processor; and at least one module operable by the at least one processor to: receive a first input of at least one first text character detected by the presence-sensitive display at an initial time, and receive a second input of at least one second text character detected by the presence-sensitive display at a subsequent time; determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes a space character between the at least one first text character and the at least one second text character, and the first character sequence does not include the space character between the at least one first text character and the at least one second text character; determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjust the second score based on a duration of time between the initial time and the subsequent time to determine a third score associated with the second character sequence; and in response to determining that the third score exceeds the first score, output, for display at the presence-sensitive display, an indication of the second character sequence.

Clause 12. The computing device of clause 11, wherein the at least one module is further operable by the at least one processor to adjust the second score by at least: increasing the second score based on the duration to determine the third score.

Clause 13. The computing device of clause 12, wherein the at least one module is further operable by the at least one processor to increase the second score by at least: in response to determining that the duration satisfies a first level threshold, increasing the second score by a first amount based on the duration; and in response to determining that the duration satisfies a second level threshold, increasing the second score by a second amount based on the duration.

Clause 14. The computing device of clause 13, wherein the at least one module is further operable by the at least one processor to increase the second score by at least: determining the first amount from a first data set; and determining the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets separated by at least an order of magnitude.

Clause 15. The computing device of any of clauses 11-14, wherein the at least one module is further operable by the at least one processor to: receive the first input at least by detecting a first handwriting input at the presence-sensitive display; and receive the second input at least by detecting a second handwriting input at the presence-sensitive display.

Clause 16. A computer-readable storage medium comprising instructions that, when executed by at least one processor of a computing device, cause the at least one processor to: receive a first input of at least one first text character at an initial time, and receive a second input of at least one second text character at a subsequent time; determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes a space character between the at least one first text character and the at least one second text character, and the first character sequence does not include the space character between the at least one first text character and the at least one second text character; determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjust the second score based on a duration of time between the initial time and the subsequent time to determine a third score associated with the second character sequence; and in response to determining that the third score exceeds the first score, output for display an indication of the second character sequence.

Clause 17. The computer-readable storage medium of clause 16, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to adjust the second score by at least: increasing the second score based on the duration to determine the third score.

Clause 18. The computer-readable storage medium of clause 17, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to increase the second score by at least: in response to determining that the duration satisfies a first level threshold, increasing the second score by a first amount based on the duration; and in response to determining that the duration satisfies a second level threshold, increasing the second score by a second amount based on the duration.

Clause 19. The computer-readable storage medium of clause 18, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to increase the second score by at least: determining the first amount from a first data set; and determining the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets separated by at least an order of magnitude.

Clause 20. The computer-readable storage medium of any of clauses 16-19, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to: receive the first input at least by detecting a first handwriting input at a presence-sensitive input device; and receive the second input at least by detecting a second handwriting input at the presence-sensitive input device.

Clause 21. A system comprising means for performing any of the methods of clauses 1-10.

Clause 22. A computing device comprising means for performing any of the methods of clauses 1-10.

Clause 23. The computing device of clause 11, further comprising means for performing any of the methods of clauses 1-10.

In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, which includes any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media may generally correspond to (1) a non-transitory, tangible computer-readable storage medium or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead refer to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor", as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware modules and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handheld device, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperating hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the following claims.
