UI data processing system

Document No.: 7294    Publication date: 2021-09-17

1. A UI data processing system, characterized in that,

the system comprises a native real-time interactive processing program process, a processor, and a memory storing a computer program, wherein the process comprises a plurality of user information data structures, each user information data structure being stored independently in the process, the user information data structures comprising a user id data segment, a current user input data segment, a UI container id data segment, a rendering camera id data segment, and a current input device state data segment, and when the processor executes the computer program, the following steps are implemented:

step S1, receiving input data sent by a client, determining a current input event based on the input data, and updating the current user input data segment, wherein the input data comprises a target user id, input device state information, operation execution information, and operation position information;

step S2, retrieving the process based on the target user id to determine a target UI container and a target rendering camera;

step S3, traversing the target UI container according to the operation position information to determine a target UI;

step S4, executing a corresponding UI event based on the current input event, the target UI, and the current input device state data, generating target UI display data, and updating the current input device state data segment and the current UI event data segment;

and step S5, generating target rendering data on the target rendering camera based on the target UI display data, and sending the target rendering data to a corresponding client.

2. The system of claim 1,

the input device state information comprises a default state, a pressed state, and a lifted state; the operation execution information comprises operation instruction information and operation value information, the operation instruction information comprises movement, wheel scrolling, and designated keys, and the operation value information comprises a wheel scroll value; the current user input data segment comprises a current input device wheel value, current input device operation position information, and a linked list of keys currently pressed on the input device and not yet lifted.

3. The system of claim 2,

in step S1, determining a current input event based on the input data and updating the current user input data segment comprises:

step S11, determining a current input event based on the device state information and the operation instruction information currently sent by the client, wherein the input event comprises an input device wheel scrolling event, an input device moving event, an input device key pressing event, and an input device key lifting event;

step S12, updating the linked list of keys currently pressed and not yet lifted based on the pressed state and the lifted state of the input device currently sent by the client, updating the current input device wheel value based on the operation value information currently sent by the client, and updating the current input device operation position information based on the operation position information currently sent by the client.

4. The system of claim 1,

the step S3 includes:

step S31, obtaining a UI list {UI_1, UI_2, …, UI_N} in the target UI container, wherein UI_n = (A_n, B_n), UI_n is the n-th UI in the target UI container, n ranges from 1 to N, A_n is the position information of UI_n, and B_n is the activation state of UI_n; initializing n = 1;

step S32, judging whether B_n is the activated state; if not, executing step S33; if so, determining whether the operation position information falls within the range of A_n, and if yes, storing UI_n into a preset detection result array and then executing step S33;

step S33, judging whether n is less than N; if so, setting n = n + 1 and returning to step S32; otherwise, executing step S34;

step S34, judging whether the length of the current detection result array is 0; if so, returning null, where null indicates that the target UI is empty; otherwise, sorting the UIs in the current detection result array by depth value and determining the top-layer UI_n as the target UI.

5. The system of claim 1,

the input device comprises at least one key and a wheel; the input device, each key, and the wheel each correspond to current input device state data, and the current input device state data comprises the input device key of the current activation event, whether a click event can be started, whether it is in a dragging state, selected and dragged UI information, selected and pressed UI information, entered UI information, pressed position information, input device position information, per-frame position offset information of the input device, and per-frame wheel difference information of the input device;

the UI events include an enter UI event, an exit UI event, a press UI event, a lift UI event, a start dragging UI event, a drag UI event, an end dragging UI event, a drop UI event, and a UI scrolling event.

6. The system of claim 5,

if the current input event is an input device wheel scroll event, the step S4 includes:

step S41, determining the current input device state data corresponding to the input device wheel as target input device state data, determining a wheel difference value based on the input device wheel scroll value of the target input device state data and the input device wheel scroll value sent by the client, and updating the wheel difference values of all current input device state data;

and step S42, if the wheel difference value is not 0 and the target UI is not empty, executing the UI scrolling event on the target UI based on the wheel difference value.

7. The system of claim 5,

if the current input event is an input device movement event, the step S4 includes:

step S401, determining the current input device state data corresponding to the input device as target input device state data;

step S402, traversing the target input device state data, and if the currently entered UI is different from the target UI, executing: if the currently entered UI is not empty, executing an exit UI event on the currently entered UI; if the target UI is not empty, executing an enter UI event on the target UI;

step S403, updating the entered UI information in the target input device state data to the target UI, determining and updating the per-frame position offset information of the input device according to the input device positions in the current adjacent frames, and updating the input device position information to the operation position information;

step S404, traversing the target input device state data: if the selected and pressed UI information is not empty, it is not in a dragging state, and the per-frame position offset of the input device is not 0, executing a start-dragging UI event on the selected and pressed UI, updating the selected and dragged UI information in the target input device state data to equal the selected and pressed UI information, and setting it to the dragging state; otherwise, if the current input device state data is in the dragging state, executing a dragging UI event on the currently selected and pressed UI.

8. The system of claim 5,

if the current input event is an input device key pressing event, the step S4 includes:

step S411, determining a current input device key based on the input data sent by the client, and determining current input device state data corresponding to the current input device key as target input device state data;

step S412, if the target UI is not empty, executing a UI pressing event on the target UI;

step S413, updating the flag of whether a click event can be started in the target input device state data to indicate that a click event can be started, updating the selected and pressed UI information to the target UI, and updating the input device position information to the operation position information.

9. The system of claim 5,

if the current input event is an input device lift key event, the step S4 includes:

step S421, determining a currently lifted input device key based on the input data sent by the client, and determining current input device state data corresponding to the currently lifted input device key as target input device state data;

step S422, traversing the target input device state data: if it is in the dragging state and the target UI is not empty, executing a UI lifting event on the target UI;

step S423, traversing the target input device state data: if a click event can be started, and the selected and dragged UI information is the same as the target UI and is not empty, executing a click UI event on the target UI;

step S424, traversing the target input device state data: if the target UI is not empty, executing a UI lifting event on the target UI;

step S425, traversing the target input device state data: if it is in the dragging state, executing an end-dragging UI event on the target UI;

step S426, updating the flag of whether a click event can be started in the target input device state data to indicate that a click event cannot be started, updating the selected and pressed UI information to null, and updating the input device position information to the initial position information;

step S427, traversing the target input device state data: if it is in the dragging state, updating it to not be in the dragging state, and updating the selected and dragged UI information to null.

Background

The key difference between a native real-time interactive processing program and a traditional one lies in process structure. A traditional real-time interactive processing program establishes a separate process in the cloud for each user; each process receives only that user's input signal and outputs its rendered picture to the user's local client. A native real-time interactive processing program establishes only one process in the cloud; each user only needs corresponding user information to be established in that process, and the process can receive and process the input signals of multiple users, which saves overhead to a great extent.

In a traditional real-time interactive processing program, the picture in a process is formed by mixing a camera picture that renders the User Interface (UI) and a camera picture that renders the user view (including post-processing and the like). Only one UI camera and one user camera exist in a process, so UI interaction only needs to read local input directly. A traditional real-time interactive processing program is in fact composed of multiple processes, one per user, i.e., (one process + one set of UI per user) × n, where n is the number of users, and the UI in each process reads only the input in that process. A native real-time interactive processing program consists of one process, multiple users, and multiple sets of UIs, so the input signals of multiple users and multiple sets of UIs coexist in one process, and a traditional UI framework cannot be used. Currently, there is no UI (User Interface) interaction solution applicable to native real-time interactive processing programs.

Disclosure of Invention

The invention aims to provide a UI data processing system, so that a native real-time interaction processing program can quickly and accurately realize UI interaction.

According to an aspect of the present invention, there is provided a UI data processing system comprising a native real-time interactive processing program process, a processor, and a memory storing a computer program, the process comprising a plurality of user information data structures, each user information data structure being stored independently in the process, the user information data structures comprising a user id data segment, a current user input data segment, a UI container id data segment, a rendering camera id data segment, and a current input device state data segment, and when the processor executes the computer program, the following steps being implemented:

step S1, receiving input data sent by a client, determining a current input event based on the input data, and updating the current user input data segment, wherein the input data comprises a target user id, input device state information, operation execution information, and operation position information;

step S2, retrieving the process based on the target user id to determine a target UI container and a target rendering camera;

step S3, traversing the target UI container according to the operation position information to determine a target UI;

step S4, executing a corresponding UI event based on the current input event, the target UI, and the current input device state data, generating target UI display data, and updating the current input device state data segment and the current UI event data segment;

and step S5, generating target rendering data on the target rendering camera based on the target UI display data, and sending the target rendering data to a corresponding client.

Compared with the prior art, the present invention has obvious advantages and beneficial effects. Through the above technical solution, the UI data processing system provided by the invention achieves considerable technical progress and practicability, has wide industrial utilization value, and offers at least the following advantages:

according to the invention, based on each user information data structure set in a process of the primary real-time interaction processing program, a UI framework suitable for the primary real-time interaction processing program is formed, so that the primary real-time interaction processing program can quickly and accurately realize UI interaction of each user, and the user experience is improved.

The foregoing is only an overview of the technical solutions of the present invention. In order to make the technical means of the present invention more clearly understood and implementable in accordance with the description, and to make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.

Drawings

Fig. 1 is a schematic diagram of a UI data processing system according to an embodiment of the present invention.

Detailed Description

To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to a specific embodiment of a UI data processing system and its effects according to the present invention with reference to the accompanying drawings and preferred embodiments.

An embodiment of the present invention provides a UI data processing system, as shown in fig. 1, including a native real-time interactive processing program process, a processor, and a memory storing a computer program, where the process includes a plurality of user information data structures, each of the user information data structures is independently stored in the process, and the user information data structures include a user id data segment, a current user input data segment, a UI container id data segment, a rendering camera id data segment, and a current input device state data segment, and when the processor executes the computer program, the following steps are implemented:

step S1, receiving input data sent by a client, determining a current input event based on the input data, and updating the current user input data segment, wherein the input data comprises a target user id, input device state information, operation execution information, and operation position information;

the input device may be a mouse, a keyboard, an operation handle, or the like. As an embodiment, the input device state information includes a default state, a pressed state, a lifted state; the operation execution information comprises operation instruction information and operation value information, the operation instruction information comprises movement, roller rolling and designated keys, and the operation value information comprises a roller rolling value; the current user input data segment comprises a current input equipment roller value, current input equipment operation position information and a key chain table which is pressed by the input equipment currently and is not lifted.

Step S2, retrieving the process based on the target user id to determine a target UI container and a target rendering camera;

step S3, traversing the target UI container according to the operation position information to determine a target UI;

step S4, executing a corresponding UI event based on the current input event, the target UI, and the current input device state data, generating target UI display data, and updating the current input device state data segment and the current UI event data segment;

and step S5, generating target rendering data on the target rendering camera based on the target UI display data, and sending the target rendering data to a corresponding client.

The local client may display changes in the UI data based on the target rendering data, where a process of generating the target rendering data by the rendering camera is implemented based on an existing rendering technology, and is not described herein again.

According to the embodiment of the invention, the UI framework suitable for the native real-time interaction processing program is formed based on the information data structure of each user set in the process of the native real-time interaction processing program, so that the native real-time interaction processing program can quickly and accurately realize the UI interaction of each user, and the user experience is improved.

As an embodiment, in step S1, determining a current input event based on the input data and updating the current user input data segment comprises:

step S11, determining a current input event based on the device state information and the operation instruction information currently sent by the client, wherein the input event comprises an input device wheel scrolling event, an input device moving event, an input device key pressing event, and an input device key lifting event;

step S12, updating the linked list of keys currently pressed and not yet lifted based on the pressed state and the lifted state of the input device currently sent by the client, updating the current input device wheel value based on the operation value information currently sent by the client, and updating the current input device operation position information based on the operation position information currently sent by the client.

Updating the linked list of pressed keys includes adding keys, deleting keys, and the like; the specific updating process can be implemented by existing means and is not described herein again.
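A minimal sketch of steps S11-S12 under these assumptions (dict-based state, hypothetical field names; a plain Python list stands in for the linked list of pressed keys):

```python
def update_user_input(user, msg):
    """Sketch of S11/S12: update the per-user input data segment and derive the event."""
    state, instruction = msg["device_state"], msg["instruction"]
    user["op_position"] = msg["position"]       # S12: current operation position
    if instruction == "wheel":                  # wheel scrolling
        user["wheel_value"] = msg["value"]      # S12: current input device wheel value
        return "wheel_scroll"                   # S11: input device wheel scrolling event
    if instruction == "move":                   # device movement
        return "move"                           # S11: input device moving event
    # Designated key: maintain the list of keys pressed and not yet lifted.
    if state == "pressed" and instruction not in user["pressed_keys"]:
        user["pressed_keys"].append(instruction)
    elif state == "lifted" and instruction in user["pressed_keys"]:
        user["pressed_keys"].remove(instruction)
    return "key_down" if state == "pressed" else "key_up"
```

The returned event name then selects which branch of step S4 runs.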

The embodiment of the invention maintains the information of different users in the same process, can quickly and accurately determine the information corresponding to a user based on that user's input, updates the user's input data segment, and determines the input event, improving the efficiency of subsequently executing the corresponding UI event.

As an example, step S3 includes:

step S31, obtaining a UI list {UI_1, UI_2, …, UI_N} in the target UI container, wherein UI_n = (A_n, B_n), UI_n is the n-th UI in the target UI container, n ranges from 1 to N, A_n is the position information of UI_n, and B_n is the activation state of UI_n; initializing n = 1;

step S32, judging whether B_n is the activated state; if not, executing step S33; if so, determining whether the operation position information falls within the range of A_n, and if yes, storing UI_n into a preset detection result array and then executing step S33;

step S33, judging whether n is less than N; if so, setting n = n + 1 and returning to step S32; otherwise, executing step S34;

step S34, judging whether the length of the current detection result array is 0; if so, returning null, where null indicates that the target UI is empty; otherwise, sorting the UIs in the current detection result array by depth value and determining the top-layer UI_n as the target UI.

The target UI can be determined quickly and accurately by step S3.
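Steps S31-S34 amount to a point-in-rectangle hit test followed by a depth sort. A sketch under assumed conventions (axis-aligned rectangles for A_n, and a larger depth value meaning closer to the top layer):

```python
def hit_test(ui_list, pos):
    """Sketch of S31-S34: return the top-layer active UI containing pos, or None."""
    hits = []                                     # the preset detection result array
    for ui in ui_list:                            # S31/S33: visit UI_1 .. UI_N once
        if not ui["active"]:                      # S32: B_n must be the activated state
            continue
        x, y, w, h = ui["rect"]                   # A_n: position info as (x, y, w, h)
        px, py = pos
        if x <= px <= x + w and y <= py <= y + h:
            hits.append(ui)                       # operation position falls within A_n
    if not hits:                                  # S34: empty array -> target UI is null
        return None
    return max(hits, key=lambda ui: ui["depth"])  # top-layer UI by depth value
```

Whether the largest or smallest depth value is "on top" depends on the rendering convention; the comparison would simply be flipped for the opposite convention.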

After the input event is determined, it is particularly important to determine which UI event should be executed on which UI, which requires a corresponding input device state data structure and specific UI interaction logic. As an embodiment, the input device includes at least one key and a wheel; the input device, each key, and the wheel each correspond to current input device state data, and the current input device state data includes the input device key of the current activation event, whether a click event can be started, whether it is in a dragging state, selected and dragged UI information, selected and pressed UI information, entered UI information, pressed position information, input device position information, per-frame position offset information of the input device, per-frame wheel difference information of the input device, and the like. The UI events include an enter UI event, an exit UI event, a press UI event, a lift UI event, a start-dragging UI event, a dragging UI event, an end-dragging UI event, a drop UI event, a UI scrolling event, and the like. As an example, each UI in the UI container defines an interface for each UI event and declares the corresponding UI event; when a UI event needs to be executed on a UI, the UI event corresponding to that UI's interface is called directly.
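One way to organize this state and the per-UI event interfaces might look like the following sketch (all names are assumptions; the no-op methods stand in for the declared UI event interfaces):

```python
class DeviceButtonState:
    """Current input device state data, kept per device, per key, and per wheel."""
    def __init__(self):
        self.can_click = False      # whether a click event can be started
        self.dragging = False       # whether in a dragging state
        self.dragged_ui = None      # selected and dragged UI information
        self.pressed_ui = None      # selected and pressed UI information
        self.entered_ui = None      # entered UI information
        self.press_pos = (0, 0)     # pressed position information
        self.position = (0, 0)      # input device position information
        self.frame_offset = (0, 0)  # per-frame position offset information
        self.wheel_delta = 0.0      # per-frame wheel difference information

class UIElement:
    """Each UI defines an interface per UI event; the system calls it directly."""
    def on_enter(self): pass        # enter UI event
    def on_exit(self): pass         # exit UI event
    def on_press(self): pass        # press UI event
    def on_lift(self): pass         # lift UI event
    def on_click(self): pass        # click UI event
    def on_begin_drag(self): pass   # start-dragging UI event
    def on_drag(self): pass         # dragging UI event
    def on_end_drag(self): pass     # end-dragging UI event
    def on_drop(self): pass         # drop UI event
    def on_scroll(self, delta): pass  # UI scrolling event
```

Concrete UIs would subclass `UIElement` and override only the handlers they care about.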

Based on different input events, different processing logic needs to be executed:

as an example, if the current input event is an input device scroll event, the step S4 includes:

step S41, determining the current input device state data corresponding to the input device wheel as target input device state data, determining a wheel difference value based on the input device wheel scroll value of the target input device state data and the input device wheel scroll value sent by the client, and updating the wheel difference values of all current input device state data;

and step S42, if the wheel difference value is not 0 and the target UI is not empty, executing the UI scrolling event on the target UI based on the wheel difference value.
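Steps S41-S42 reduce to computing a wheel delta and forwarding it. A dict-based sketch (field names assumed; `on_scroll` stands in for the UI scrolling event interface):

```python
def handle_wheel(state, target_ui, client_wheel_value):
    """Sketch of S41/S42 for an input device wheel scroll event."""
    delta = client_wheel_value - state["wheel_value"]  # S41: wheel difference value
    state["wheel_value"] = client_wheel_value          # remember the client's value
    state["wheel_delta"] = delta                       # update the stored difference
    if delta != 0 and target_ui is not None:           # S42: scroll the target UI
        target_ui.on_scroll(delta)
```

If the target UI is null (nothing under the cursor) or the wheel did not actually move, no UI event fires.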

As an embodiment, if the current input event is an input device movement event, the step S4 includes:

step S401, determining the current input device state data corresponding to the input device as target input device state data;

step S402, traversing the target input device state data, and if the currently entered UI is different from the target UI, executing: if the currently entered UI is not empty, executing an exit UI event on the currently entered UI; if the target UI is not empty, executing an enter UI event on the target UI;

step S403, updating the entered UI information in the target input device state data to the target UI, determining and updating the per-frame position offset information of the input device according to the input device positions in the current adjacent frames, and updating the input device position information to the operation position information;

step S404, traversing the target input device state data: if the selected and pressed UI information is not empty, it is not in a dragging state, and the per-frame position offset of the input device is not 0, executing a start-dragging UI event on the selected and pressed UI, updating the selected and dragged UI information in the target input device state data to equal the selected and pressed UI information, and setting it to the dragging state; otherwise, if the current input device state data is in the dragging state, executing a dragging UI event on the currently selected and pressed UI.
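A sketch of the movement logic in steps S401-S404 (dict-based state, assumed handler names; reading the first drag of a pressed UI as the start-dragging UI event is one plausible interpretation of S404):

```python
def handle_move(states, target_ui, new_pos):
    """Sketch of S401-S404 for an input device movement event."""
    for st in states:
        # S402: enter/exit transitions when the hovered UI changes.
        if st["entered_ui"] is not target_ui:
            if st["entered_ui"] is not None:
                st["entered_ui"].on_exit()
            if target_ui is not None:
                target_ui.on_enter()
        # S403: record the per-frame offset, the entered UI, and the new position.
        st["frame_offset"] = (new_pos[0] - st["position"][0],
                              new_pos[1] - st["position"][1])
        st["entered_ui"] = target_ui
        st["position"] = new_pos
        # S404: begin a drag on the pressed UI, or continue an ongoing drag.
        moved = st["frame_offset"] != (0, 0)
        if st["pressed_ui"] is not None and not st["dragging"] and moved:
            st["pressed_ui"].on_begin_drag()
            st["dragged_ui"] = st["pressed_ui"]
            st["dragging"] = True
        elif st["dragging"]:
            st["pressed_ui"].on_drag()
```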

As an embodiment, if the current input event is an input device key-down event, the step S4 includes:

step S411, determining a current input device key based on the input data sent by the client, and determining current input device state data corresponding to the current input device key as target input device state data;

step S412, if the target UI is not empty, executing a UI pressing event on the target UI;

step S413, updating the flag of whether a click event can be started in the target input device state data to indicate that a click event can be started, updating the selected and pressed UI information to the target UI, and updating the input device position information to the operation position information.
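Steps S411-S413 as a sketch (assumed names; `on_press` stands in for the press UI event interface):

```python
def handle_key_down(state, target_ui, op_position):
    """Sketch of S411-S413 for an input device key pressing event."""
    if target_ui is not None:        # S412: press the UI under the cursor
        target_ui.on_press()
    state["can_click"] = True        # S413: a click may now be completed on lift
    state["pressed_ui"] = target_ui  # selected and pressed UI information
    state["position"] = op_position  # input device position information
```

The `can_click` flag is what the later key-lift handling checks before firing a click UI event.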

As an embodiment, if the current input event is an input device lift key event, the step S4 includes:

step S421, determining a currently lifted input device key based on the input data sent by the client, and determining current input device state data corresponding to the currently lifted input device key as target input device state data;

step S422, traversing the target input device state data: if it is in the dragging state and the target UI is not empty, executing a UI lifting event on the target UI;

step S423, traversing the target input device state data: if a click event can be started, and the selected and dragged UI information is the same as the target UI and is not empty, executing a click UI event on the target UI;

step S424, traversing the target input device state data: if the target UI is not empty, executing a UI lifting event on the target UI;

step S425, traversing the target input device state data: if it is in the dragging state, executing an end-dragging UI event on the target UI;

step S426, updating the flag of whether a click event can be started in the target input device state data to indicate that a click event cannot be started, updating the selected and pressed UI information to null, and updating the input device position information to the initial position information;

step S427, traversing the target input device state data: if it is in the dragging state, updating it to not be in the dragging state, and updating the selected and dragged UI information to null.
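The lift-key flow above has ambiguous conditions in the source (duplicated step numbers and unclear dragging-state checks), so the sketch below is one plausible reading: lift fires on the target UI, a click requires press and lift on the same non-empty UI, and any drag in progress is ended (all names hypothetical):

```python
def handle_key_up(state, target_ui):
    """Sketch of S421-S427 for an input device key lifting event."""
    # Lift event on the target UI (here assumed to fire when not mid-drag).
    if not state["dragging"] and target_ui is not None:
        target_ui.on_lift()
    # Click fires only if press and lift landed on the same, non-empty UI.
    if state["can_click"] and state["pressed_ui"] is target_ui and target_ui is not None:
        target_ui.on_click()
    # Finish any drag in progress.
    if state["dragging"]:
        if target_ui is not None:
            target_ui.on_end_drag()
        state["dragging"] = False
        state["dragged_ui"] = None
    # Reset press-related state back to the defaults.
    state["can_click"] = False
    state["pressed_ui"] = None
    state["position"] = (0, 0)
```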

Different UI interaction logics are set based on different events, so that the native real-time interaction processing program can quickly and accurately realize the UI interaction of each user, and the user experience is improved.

It should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.

Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
