Application program picture processing method and device, electronic equipment and storage medium
1. A method for processing a screen of an application, the method comprising:
acquiring the picture processing duration corresponding to each stage of picture processing unit of an application program and the picture refreshing time interval of the application program;
the picture processing duration is the maximum processing duration required by the corresponding picture processing unit for outputting the picture frame corresponding to the application program;
determining the queue length of a buffer frame queue required by the application program to output a target picture frame and the target picture processing duration corresponding to each level of picture processing units based on the picture processing duration corresponding to each level of picture processing units and the picture refreshing time interval;
and outputting a target picture frame through each stage of picture processing units based on the queue length and the target picture processing duration corresponding to each stage of picture processing units.
2. The method of claim 1, wherein obtaining the screen processing duration corresponding to each stage of screen processing unit of the application program comprises:
respectively executing the following processing aiming at each level of picture processing units of the application program:
acquiring, for a target number of times, the processing duration required by the picture processing unit to output the picture frame corresponding to the application program;
and taking the maximum processing duration among the processing durations acquired the target number of times as the picture processing duration corresponding to the picture processing unit.
3. The method of claim 2, wherein the acquiring, for the target number of times, the processing duration required by the picture processing unit to output the picture frame corresponding to the application program comprises:
executing the following processing for each of the target number of acquisitions:
monitoring, for each non-last-stage picture processing unit, through a target function, a first time point at which a sub-application program corresponding to the picture processing unit starts running and a second time point at which the sub-application program corresponding to a next-stage picture processing unit of the picture processing unit starts running;
determining and collecting the processing duration corresponding to the non-last-stage picture processing unit based on the first time point and the second time point;
monitoring, for the last-stage picture processing unit, through the target function, a third time point at which the sub-application program corresponding to the picture processing unit starts running and a fourth time point at which the sub-application program corresponding to the picture processing unit finishes running;
and determining and collecting the processing duration corresponding to the last-stage picture processing unit based on the third time point and the fourth time point.
4. The method of claim 3, wherein prior to said performing a target number of acquisitions, the method further comprises:
mounting the target function, by means of program instrumentation, at a target position in the application program corresponding to each level of picture processing unit, so that the time point at which the sub-application program corresponding to each level of picture processing unit starts running and the time point at which it finishes running are monitored through the target function based on the target position;
the target position is used for indicating the starting position of the sub-application program corresponding to each level of picture processing unit in the application program and indicating the ending position of the sub-application program corresponding to the last level of picture processing unit.
5. The method according to claim 1, wherein the determining the queue length of the buffer frame queue required by the application program to output the target picture frame and the target picture processing duration corresponding to each stage of the picture processing unit based on the picture processing duration corresponding to each stage of the picture processing unit and the picture refresh time interval comprises:
determining a target picture processing duration corresponding to the corresponding picture processing unit based on the picture processing durations corresponding to the picture processing units at all levels and the picture refreshing time interval;
summing the target picture processing durations corresponding to the picture processing units at each level to obtain a summation result; and
determining the queue length of the buffer frame queue required by the application program to output the target picture frame based on the summation result and the picture refreshing time interval.
6. The method of claim 1, wherein the application has an initial queue length corresponding to a buffer frame queue, and each level of the picture processing units has a corresponding initial picture processing duration;
outputting a target picture frame through each stage of the picture processing unit based on the queue length and the target picture processing duration corresponding to each stage of the picture processing unit, wherein the target picture frame comprises:
updating the initial queue length to the queue length, and updating the initial picture processing duration corresponding to each level of the picture processing unit to the corresponding target picture processing duration;
and outputting a target picture frame through each stage of picture processing units based on the updated initial queue length and the initial picture processing duration corresponding to each stage of picture processing units.
7. The method of claim 1, wherein the respective stages of picture processing units comprise: an input acquisition unit, a calculation and rendering unit, a picture synthesis unit, and a picture display unit;
outputting a target picture frame through each stage of the picture processing unit based on the queue length and the target picture processing duration corresponding to each stage of the picture processing unit, wherein the target picture frame comprises:
when the input acquisition unit detects an input event, sending the input event to the calculation and rendering unit within the corresponding target picture processing duration;
when the calculation and rendering unit receives the input event and detects a first trigger signal, generating a picture frame layer corresponding to the input event within the corresponding target picture processing duration, and adding the picture frame layer to the buffer frame queue having the queue length;
when the picture synthesis unit detects a second trigger signal, obtaining a target picture frame layer from the buffer frame queue within the corresponding target picture processing duration, and performing layer synthesis processing on the target picture frame layer to obtain the target picture frame;
when the picture synthesis unit detects a third trigger signal, sending the target picture frame to the picture display unit;
and outputting the target picture frame through the picture display unit within the target picture processing duration corresponding to the picture display unit.
8. The method of claim 7, wherein the calculation and rendering unit comprises an interface thread and a rendering thread, and wherein generating the picture frame layer corresponding to the input event within the corresponding target picture processing duration when the calculation and rendering unit detects the first trigger signal comprises:
when the calculation and rendering unit detects the first trigger signal, generating, through the interface thread, a picture drawing instruction based on the acquired picture data within the corresponding target picture processing duration;
and responding to the picture drawing instruction, calling a graphic processing unit to render the picture data through the rendering thread, and generating a picture frame layer corresponding to the input event.
9. The method of claim 7, wherein said retrieving, by the picture composition unit, the target picture frame layer from the buffered frame queue comprises:
acquiring a first-in first-out sequence corresponding to each picture frame layer in the buffer frame queue;
and acquiring, through the picture synthesis unit, a target picture frame layer from the buffer frame queue according to the first-in first-out sequence.
10. The method of claim 1, wherein the method further comprises:
storing the picture processing duration corresponding to each stage of picture processing unit of the application program to a blockchain network;
the acquiring of the picture processing duration corresponding to each stage of picture processing unit of the application program includes:
generating and sending a transaction for acquiring the picture processing duration corresponding to each level of picture processing unit in the blockchain network;
and receiving the picture processing duration corresponding to each level of picture processing units returned by the blockchain network based on the transaction.
11. The method of claim 1, wherein each stage of the picture processing unit of the application has a corresponding initial picture processing duration;
the acquiring of the picture processing duration corresponding to each stage of picture processing unit of the application program includes:
acquiring the frequency that the historical picture processing duration of each level of picture processing unit exceeds the initial picture processing duration;
determining, among the picture processing units at each level, a picture processing unit whose frequency reaches a frequency threshold as a target picture processing unit;
and acquiring the picture processing duration corresponding to the target picture processing unit.
12. The method of claim 1, wherein obtaining the screen processing duration corresponding to each stage of screen processing unit of the application program comprises:
performing stall duration detection on a plurality of picture frames output by the application program within a target duration, and determining the total stall duration corresponding to the plurality of picture frames;
determining a stall score corresponding to the application program based on the total stall duration and the target duration;
and when the stall score reaches a stall score threshold, acquiring the picture processing duration corresponding to each level of picture processing unit of the application program.
13. An apparatus for processing a screen of an application, the apparatus comprising:
an acquisition module, configured to acquire the picture processing duration corresponding to each stage of picture processing unit of an application program and the picture refreshing time interval of the application program;
the picture processing duration is the maximum processing duration required by the corresponding picture processing unit for outputting the picture frame corresponding to the application program;
a determining module, configured to determine, based on the picture processing durations corresponding to the picture processing units at each level and the picture refreshing time interval, a queue length of a buffer frame queue required by the application to output a target picture frame, and a target picture processing duration corresponding to the picture processing units at each level;
and the output module is used for outputting a target picture frame through each stage of picture processing units based on the queue length and the target picture processing duration corresponding to each stage of picture processing units.
14. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing a picture processing method of an application program according to any one of claims 1 to 12 when executing executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing a picture processing method of an application program according to any one of claims 1 to 12 when executed.
Background
In the related art, the buffer queue length required for an application program to output pictures and the processing durations of the picture processing units at each level are preset and fixed. However, different output pictures require different processing durations and buffer queue lengths, so the preset values cannot guarantee that every application program picture is output within the set refresh time interval. For example, when the calculation and rendering of some pictures is time-consuming, frame stutter occurs.
Disclosure of Invention
The embodiments of the present application provide a picture processing method and apparatus for an application program, an electronic device, and a storage medium, which can reduce the possibility of frame stutter and improve the fluency of application program pictures.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a picture processing method of an application program, which comprises the following steps:
acquiring the picture processing duration corresponding to each stage of picture processing unit of an application program and the picture refreshing time interval of the application program;
the picture processing duration is the maximum processing duration required by the corresponding picture processing unit for outputting the picture frame corresponding to the application program;
determining the queue length of a buffer frame queue required by the application program to output a target picture frame and the target picture processing duration corresponding to each level of picture processing units based on the picture processing duration corresponding to each level of picture processing units and the picture refreshing time interval;
and outputting a target picture frame through each stage of picture processing units based on the queue length and the target picture processing duration corresponding to each stage of picture processing units.
In the foregoing solution, the obtaining of the picture processing duration corresponding to each stage of the picture processing unit of the application program includes:
performing stall duration detection on a plurality of picture frames output by the application program within a target duration, and determining the total stall duration corresponding to the plurality of picture frames;
determining a stall score corresponding to the application program based on the total stall duration and the target duration;
and when the stall score reaches a stall score threshold, acquiring the picture processing duration corresponding to each level of picture processing unit of the application program.
In the foregoing solution, the performing stall duration detection on a plurality of picture frames output by the application program within a target duration to determine a total stall duration corresponding to the plurality of picture frames includes:
acquiring the actual picture processing duration required by the application program to output each picture frame and the standard picture processing duration corresponding to the application program outputting the picture frames;
determining the stall duration corresponding to each picture frame based on the actual picture processing duration required by each picture frame and the standard picture processing duration;
and determining the total stall duration corresponding to the plurality of picture frames based on the stall duration corresponding to each picture frame.
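The stall-detection steps above can be sketched in code. This is a minimal illustration, not the claimed implementation: a frame is treated as stalled by the amount its actual processing duration exceeds the standard duration, and the stall score is the ratio of total stall time to the observation window. All function names and the sample values are assumptions.

```python
def total_stall_duration(actual_durations, standard_duration):
    """Sum, over all frames, the time by which each frame overran
    the standard (expected) picture processing duration."""
    return sum(max(0.0, d - standard_duration) for d in actual_durations)

def stall_score(actual_durations, standard_duration, target_duration):
    """Ratio of total stall time to the target observation duration."""
    return total_stall_duration(actual_durations, standard_duration) / target_duration

# Hypothetical samples: 60 Hz display, so the standard duration is ~16.7 ms.
durations_ms = [15.0, 16.0, 25.0, 40.0, 16.5]
score = stall_score(durations_ms, 16.7, 1000.0)  # 1-second window
```

Here only the third and fourth frames overrun (by 8.3 ms and 23.3 ms), so the total stall duration is 31.6 ms and the score is 0.0316; comparing such a score against a threshold decides whether to re-measure the per-unit durations.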
An embodiment of the present application further provides an image processing apparatus for an application, including:
an acquisition module, configured to acquire the picture processing duration corresponding to each stage of picture processing unit of an application program and the picture refreshing time interval of the application program;
the picture processing duration is the maximum processing duration required by the corresponding picture processing unit for outputting the picture frame corresponding to the application program;
a determining module, configured to determine, based on the picture processing durations corresponding to the picture processing units at each level and the picture refreshing time interval, a queue length of a buffer frame queue required by the application to output a target picture frame, and a target picture processing duration corresponding to the picture processing units at each level;
and the output module is used for outputting a target picture frame through each stage of picture processing units based on the queue length and the target picture processing duration corresponding to each stage of picture processing units.
In the foregoing solution, the obtaining module is further configured to execute the following processing for each level of the picture processing unit of the application program respectively:
acquiring, for a target number of times, the processing duration required by the picture processing unit to output the picture frame corresponding to the application program;
and taking the maximum processing duration among the processing durations acquired the target number of times as the picture processing duration corresponding to the picture processing unit.
In the foregoing scheme, the obtaining module is further configured to execute the following processing for each of the target number of acquisitions:
monitoring, for each non-last-stage picture processing unit, through a target function, a first time point at which a sub-application program corresponding to the picture processing unit starts running and a second time point at which the sub-application program corresponding to a next-stage picture processing unit of the picture processing unit starts running;
determining and collecting the processing duration corresponding to the non-last-stage picture processing unit based on the first time point and the second time point;
monitoring, for the last-stage picture processing unit, through the target function, a third time point at which the sub-application program corresponding to the picture processing unit starts running and a fourth time point at which the sub-application program corresponding to the picture processing unit finishes running;
and determining and collecting the processing duration corresponding to the last-stage picture processing unit based on the third time point and the fourth time point.
In the above scheme, the obtaining module is further configured to mount the target function to a target position in the application program corresponding to each level of the picture processing unit in a program instrumentation manner, so as to monitor, through the target function, a time point when the sub-application program corresponding to each level of the picture processing unit runs and a time point when the running is finished based on the target position;
the target position is used for indicating the starting position of the sub-application program corresponding to each level of picture processing unit in the application program and indicating the ending position of the sub-application program corresponding to the last level of picture processing unit.
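The instrumentation scheme described above — mounting a target function at the start and end positions of each unit's sub-program so that run and finish time points can be monitored — can be sketched as a Python decorator. This is purely illustrative; the names `instrument`, `timestamps`, and `render_frame` are assumptions, and the patent's actual instrumentation operates at a different level than a decorator.

```python
import time

timestamps = {}  # run/finish time points collected by the mounted hook

def instrument(unit_name):
    """Mount a hook at the start and end of a processing unit's
    sub-program, recording when it starts running and when it finishes."""
    def wrapper(func):
        def hooked(*args, **kwargs):
            timestamps[unit_name] = {"start": time.monotonic()}
            result = func(*args, **kwargs)
            timestamps[unit_name]["end"] = time.monotonic()
            return result
        return hooked
    return wrapper

@instrument("render")
def render_frame():
    pass  # placeholder for the real rendering sub-program

render_frame()
elapsed = timestamps["render"]["end"] - timestamps["render"]["start"]
```

The difference between the recorded end and start points corresponds to the "processing duration" that each acquisition collects.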
In the above scheme, the determining module is further configured to determine a target picture processing duration corresponding to the corresponding picture processing unit based on the picture processing durations corresponding to the picture processing units at each level and the picture refreshing time interval;
summing the target picture processing durations corresponding to the picture processing units at each level to obtain a summation result; and
determining the queue length of the buffer frame queue required by the application program to output the target picture frame based on the summation result and the picture refreshing time interval.
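The summation step can be illustrated with a short sketch. One plausible reading — an assumption, since the claims do not fix the exact formula — is that the queue length is the number of refresh intervals needed to cover the pipeline's summed latency:

```python
import math

def buffer_queue_length(target_durations_ms, refresh_interval_ms):
    """Illustrative: number of buffered frames needed so the pipeline's
    total latency fits within whole refresh intervals."""
    total = sum(target_durations_ms)        # the "summation result"
    return math.ceil(total / refresh_interval_ms)

# Hypothetical four-stage pipeline on a 60 Hz display (~16.7 ms interval):
length = buffer_queue_length([2.0, 12.0, 8.0, 16.7], 16.7)  # -> 3
```

With a summed latency of 38.7 ms against a 16.7 ms interval, three buffered frames are needed, matching the three-buffer configurations shown in the drawings.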
In the above scheme, the application program has an initial queue length corresponding to the buffer frame queue, and each stage of the picture processing unit has a corresponding initial picture processing duration; the output module updates the initial queue length to the queue length and updates the initial picture processing duration corresponding to each stage of the picture processing unit to the corresponding target picture processing duration;
and outputting a target picture frame through each stage of picture processing units based on the updated initial queue length and the initial picture processing duration corresponding to each stage of picture processing units.
In the above solution, the picture processing units at each level include: an input acquisition unit, a calculation and rendering unit, a picture synthesis unit, and a picture display unit; the output module is further configured to send the input event to the calculation and rendering unit within the corresponding target picture processing duration when the input acquisition unit detects the input event;
when the calculation and rendering unit receives the input event and detects a first trigger signal, generating a picture frame layer corresponding to the input event within the corresponding target picture processing duration, and adding the picture frame layer to the buffer frame queue having the queue length;
when the picture synthesis unit detects a second trigger signal, obtaining a target picture frame layer from the buffer frame queue within the corresponding target picture processing duration, and performing layer synthesis processing on the target picture frame layer to obtain the target picture frame;
when the picture synthesis unit detects a third trigger signal, sending the target picture frame to the picture display unit;
and outputting the target picture frame through the picture display unit within the target picture processing duration corresponding to the picture display unit.
In the above scheme, the calculation and rendering unit includes an interface thread and a rendering thread, and the output module is further configured to generate, within a corresponding target picture processing duration, a picture drawing instruction based on the acquired picture data through the interface thread when the calculation and rendering unit detects the first trigger signal;
and responding to the picture drawing instruction, calling a graphic processing unit to render the picture data through the rendering thread, and generating a picture frame layer corresponding to the input event.
In the above scheme, the output module is further configured to obtain a first-in first-out sequence corresponding to each picture frame layer in the buffer frame queue;
and acquiring a target picture frame layer from the buffer frame queue through the picture synthesis unit according to the first-in first-out sequence.
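The first-in first-out buffer frame queue described above can be sketched with a standard double-ended queue. This is an illustration only; the layer names and the queue length of 3 are assumptions.

```python
from collections import deque

# Buffer frame queue with a (hypothetical) queue length of 3: layers are
# enqueued by the calculation and rendering stage and dequeued by the
# picture synthesis stage in first-in first-out order.
buffer_frame_queue = deque(maxlen=3)

# Rendering stage appends picture frame layers as they are produced.
buffer_frame_queue.append("layer_0")
buffer_frame_queue.append("layer_1")

# Synthesis stage takes the oldest layer first (FIFO).
target_layer = buffer_frame_queue.popleft()  # -> "layer_0"
```

FIFO ordering guarantees that frames are composed in the order they were rendered, so motion stays temporally consistent even when the queue buffers several frames.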
In the above scheme, the apparatus further comprises:
the storage module is used for storing the picture processing duration corresponding to each stage of picture processing unit of the application program to a blockchain network;
the acquiring of the picture processing duration corresponding to each stage of picture processing unit of the application program includes:
generating and sending a transaction for acquiring the picture processing duration corresponding to each level of picture processing unit in the blockchain network;
and receiving the picture processing duration corresponding to each level of picture processing units returned by the blockchain network based on the transaction.
In the above scheme, each level of the picture processing unit of the application program has a corresponding initial picture processing duration; the acquisition module is also used for acquiring the frequency that the historical picture processing duration of each level of picture processing unit exceeds the initial picture processing duration;
determining, among the picture processing units at each level, a picture processing unit whose frequency reaches a frequency threshold as a target picture processing unit;
and acquiring the picture processing duration corresponding to the target picture processing unit.
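The frequency-based selection described above can be sketched as follows. Names, sample values, and the 0.5 threshold are assumptions; only the rule itself — re-measure a unit when its historical durations exceed the initial duration often enough — comes from the text.

```python
def select_target_units(history, initial_durations, freq_threshold):
    """Return units whose historical processing durations exceed their
    initial duration at a frequency at or above freq_threshold."""
    targets = []
    for unit, samples in history.items():
        overruns = sum(1 for d in samples if d > initial_durations[unit])
        if overruns / len(samples) >= freq_threshold:
            targets.append(unit)
    return targets

# Hypothetical history (ms) for two of the four pipeline units:
history = {
    "input":  [1.0, 1.1, 0.9, 1.0],
    "render": [18.0, 20.0, 15.0, 22.0],
}
initial = {"input": 2.0, "render": 16.7}
targets = select_target_units(history, initial, freq_threshold=0.5)  # -> ["render"]
```

Restricting re-measurement to frequently overrunning units avoids the cost of re-profiling stages that already meet their budget.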
In the above scheme, the obtaining module is further configured to perform stall duration detection on a plurality of picture frames output by the application program within a target duration, and determine a total stall duration corresponding to the plurality of picture frames;
determining a stall score corresponding to the application program based on the total stall duration and the target duration;
and when the stall score reaches a stall score threshold, acquiring the picture processing duration corresponding to each level of picture processing unit of the application program.
In the above scheme, the obtaining module is further configured to obtain the actual picture processing duration required by the application program to output each picture frame and the standard picture processing duration corresponding to the application program outputting the picture frames;
determining the stall duration corresponding to each picture frame based on the actual picture processing duration required by each picture frame and the standard picture processing duration;
and determining the total stall duration corresponding to the plurality of picture frames based on the stall duration corresponding to each picture frame.
An embodiment of the present application further provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the picture processing method of the application program provided by the embodiment of the application program when the executable instruction stored in the memory is executed.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for processing the screen of the application program provided by the embodiment of the present application is implemented.
The embodiment of the application has the following beneficial effects:
by acquiring the maximum processing duration required by each level of picture processing unit to output the picture frames of the application program, together with the picture refreshing time interval of the application program, the queue length of the buffer frame queue required by the application program to output a target picture frame and the target picture processing duration corresponding to each level of picture processing unit are determined, and the target picture frame is output based on the determined queue length and target picture processing durations;
because the required queue length of the buffer frame queue and the target picture processing duration of each stage of picture processing unit are determined from the actual maximum processing durations and the picture refreshing time interval, each picture frame can be output within the set refresh time interval, which reduces the possibility of frame stutter and improves the fluency of application program pictures.
Drawings
FIG. 1 is a schematic diagram of an architecture of a screen processing system 100 of an application according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 implementing a screen processing method of an application according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a screen processing method of an application according to an embodiment of the present application;
FIG. 4 is a schematic processing flow diagram of a three-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for processing a screen of an application according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a process of a timer of a picture processing unit according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a process of a four-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a process of a three-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating a process of a three-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating a process of a four-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application;
fig. 11 is a schematic diagram of image fluency comparison data provided by an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first\second\third" are only used to distinguish similar objects and do not denote a particular order; it is to be understood that "first\second\third" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in the terminal for providing various services, such as an instant messaging client or a video playing client.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more performed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Vertical synchronization: in order to make the frequency at which the system draws the UI consistent with the refresh rate of the screen hardware, the Android drawing system introduces the concept of VSYNC (vertical synchronization). For example, on a mobile phone with a screen refresh rate of 60 Hz, the Android system sends a VSYNC signal every 1/60 second, and when the drawing module receives the signal, the drawn picture is sent to the screen. If the drawing period of each frame (the CPU time and GPU time required to draw one frame) is less than 1/60 second, the latest content can be presented on the screen in time at every refresh; otherwise, stuttering (jank) may result.
Based on the above explanations of terms and terms involved in the embodiments of the present application, the following describes a screen processing system of an application program provided in the embodiments of the present application. Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a screen processing system 100 for an application provided in an embodiment of the present application, in order to support an exemplary application, a terminal (an exemplary terminal 400-1 is shown) is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two, and data transmission is implemented using a wireless or wired link.
A terminal (e.g., terminal 400-1) for sending an acquisition request of a picture processing duration corresponding to each level of picture processing unit of an application to the server 200;
the server 200 is used for receiving and responding to the acquisition request, and returning the picture processing duration corresponding to each level of picture processing unit of the application program to the terminal;
the terminal (such as the terminal 400-1) is used for receiving the picture processing duration corresponding to each stage of picture processing unit of the application program and acquiring the picture refreshing time interval of the application program; determining the queue length of a buffer frame queue required by an application program to output a target picture frame and the target picture processing duration corresponding to each level of picture processing units based on the picture processing duration corresponding to each level of picture processing units and the picture refreshing time interval; and outputting the target picture frame through each stage of picture processing unit based on the queue length and the target picture processing duration corresponding to each stage of picture processing unit.
Here, the picture processing time period is a maximum processing time period required for the corresponding picture processing unit to output the picture frame corresponding to the application.
In practical application, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal (e.g., terminal 400-1) may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like. The terminals (e.g., terminal 400-1 and terminal 400-2) and the server 200 may be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 implementing a screen processing method of an application according to an embodiment of the present application. In practical applications, the electronic device 500 may be a server or a terminal shown in fig. 1, and an electronic device implementing the screen processing method of the application program according to an embodiment of the present application is described by taking the electronic device 500 as the terminal shown in fig. 1 as an example, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the image processing apparatus of the application provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates an image processing apparatus 555 of the application stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: an obtaining module 5551, a determining module 5552 and an output module 5553, which are logical and thus can be arbitrarily combined or further split according to the implemented functions, which will be described below.
In other embodiments, the picture processing device of the application program provided in this embodiment may be implemented by a combination of hardware and software. By way of example, the picture processing device of the application program provided in this embodiment may be a processor in the form of a hardware decoding processor that is programmed to execute the picture processing method of the application program provided in this embodiment; for example, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the screen processing system and the electronic device of the application program provided in the embodiments of the present application, the screen processing method of the application program provided in the embodiments of the present application is described below. In some embodiments, the screen processing method of the application provided in the embodiments of the present application may be implemented by a server or a terminal alone, or implemented by a server and a terminal in a cooperation manner, and the screen processing method of the application provided in the embodiments of the present application is described below with an embodiment of a terminal as an example. Referring to fig. 3, fig. 3 is a schematic flowchart of a screen processing method of an application program according to an embodiment of the present application, where the screen processing method of the application program according to the embodiment of the present application includes:
step 101: the terminal acquires the picture processing duration corresponding to each stage of picture processing unit of the application program and the picture refreshing time interval of the application program.
The picture processing duration is the maximum processing duration required by the corresponding picture processing unit to output a picture frame of the application program.
Here, the terminal is installed with an application program, and when the application program is running, a screen frame output by the application program is displayed through a display screen of the terminal. In practical application, the frame of the terminal screen refresh display is related to the screen refresh time interval, for example, in a mobile phone terminal with a screen refresh rate of 60Hz, the corresponding screen refresh time interval is 1/60 seconds, and the terminal sends the rendered frame of the terminal screen to the screen for display every 1/60 seconds. In actual implementation, the terminal performs processing such as calculation, rendering, and output of a picture frame through each level of picture processing units corresponding to the application program, where processing operations corresponding to different picture processing units are different, and corresponding picture processing durations are also different.
In the embodiment of the present application, the picture processing duration required by each picture processing unit of the application program to output a picture frame is obtained, so that, based on this picture processing duration, the target picture processing duration required by that picture processing unit can be calculated; this ensures that, when picture frames are output, each picture frame can be output within a set duration (for example, within a screen refresh time interval). Accordingly, the picture processing duration is the maximum processing duration required by the corresponding picture processing unit to output a picture frame of the application program. Because the target picture processing duration is calculated from this maximum processing duration, each picture frame can be output within the set duration when picture frames are output, and stuttering does not occur.
After the terminal acquires the picture processing duration corresponding to each stage of picture processing unit of the application program, it is further required to continuously acquire the picture refresh time interval of the application program, where the picture refresh time interval of the application program is required to be consistent with the screen refresh time interval, that is, the screen refresh time interval may be used as the picture refresh time interval of the application program.
In some embodiments, the terminal may obtain the picture processing duration corresponding to each stage of the picture processing unit of the application program by: the following processing is executed respectively for each stage of picture processing unit of the application program: acquiring the target times of processing time required by outputting the picture frame by the application program corresponding to the picture processing unit; and taking the maximum processing time length in the acquired processing time lengths of the target times as the picture processing time length corresponding to the picture processing unit.
Here, the terminal may perform the following processing for each stage of picture processing unit of the application program: preset a target number of acquisitions, and then collect, that target number of times, the processing duration required by the picture processing unit to output a picture frame of the application program; and take the maximum of the collected processing durations as the picture processing duration corresponding to the picture processing unit.
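For illustration, the collect-and-take-the-maximum processing above can be sketched as follows (a minimal Python sketch; the function name and sample values are hypothetical and chosen for illustration only):

```python
# Hypothetical sketch: the picture processing duration of one stage of
# picture processing unit is the maximum of a preset target number of
# collected per-frame processing durations.
def picture_processing_duration(collected_durations_ms):
    """Return the maximum of the collected processing durations (in ms)."""
    if not collected_durations_ms:
        raise ValueError("at least one collected duration is required")
    return max(collected_durations_ms)

# e.g. five acquisitions (target number = 5) for one rendering stage
samples_ms = [12.1, 15.8, 13.0, 16.6, 14.2]
duration_ms = picture_processing_duration(samples_ms)  # 16.6
```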
In some embodiments, the terminal may collect, the target number of times, the processing duration required by the picture processing unit to output a picture frame of the application program in the following manner:
the following processing is performed for each acquisition in the target number of times: monitoring a first time point when a sub application program corresponding to the picture processing unit runs and a second time point when the sub application program corresponding to the next-stage picture processing unit of the picture processing unit runs by aiming at the picture processing unit which is not the last stage through an objective function; determining and collecting processing time corresponding to the non-last-stage picture processing unit based on the first time point and the second time point; monitoring a third time point when the sub application program corresponding to the picture processing unit runs and a fourth time point when the sub application program corresponding to the picture processing unit runs and finishes through an objective function aiming at the picture processing unit at the last stage; and determining and collecting the processing time length corresponding to the picture processing unit of the last stage based on the third time point and the fourth time point.
In some embodiments, the terminal may implement the monitoring of time points as follows: mount the target function, by means of program instrumentation, at target positions in the application program corresponding to each stage of picture processing unit, and monitor, through the target function and based on the target positions, the time point at which the sub-application program corresponding to each stage of picture processing unit starts running and the time point at which it finishes running; the target positions indicate the start position, in the application program, of the sub-application program corresponding to each stage of picture processing unit, and the end position of the sub-application program corresponding to the last stage of picture processing unit.
Here, the target function (e.g., a hook function) may be mounted in the application program by program instrumentation at the start position of the sub-application program corresponding to each stage of picture processing unit and at the end position of the sub-application program corresponding to the last stage of picture processing unit. In this way, the time point at which the sub-application program corresponding to each stage of picture processing unit starts running and the time point at which it finishes running can be monitored through the target function based on the target positions.
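As an analogy only, the effect of mounting a hook at the start and end positions of a sub-application program can be sketched in Python with a timing decorator (all names here are hypothetical; an actual embodiment would instrument the Android pipeline natively):

```python
import time

def timestamp_hook(record, stage_name):
    """Decorator analogous to the mounted target (hook) function: it records
    the time point when the stage's sub-program starts running and the time
    point when it finishes running."""
    def wrap(fn):
        def inner(*args, **kwargs):
            record[stage_name] = {"start": time.monotonic()}
            try:
                return fn(*args, **kwargs)
            finally:
                record[stage_name]["end"] = time.monotonic()
        return inner
    return wrap

timestamps = {}

@timestamp_hook(timestamps, "render")
def render_frame():
    pass  # stand-in for one stage's sub-application program

render_frame()
elapsed = timestamps["render"]["end"] - timestamps["render"]["start"]
```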
Specifically, referring to fig. 6, fig. 6 is a schematic diagram of the processing flow of a pipeline timer according to an embodiment of the present application. Here, taking the Android operating system as an example, the Android system includes four stages of picture processing units, and the monitoring of the processing duration corresponding to each stage of picture processing unit is realized by inserting a Hook function at the start position of the sub-application program of each stage of picture processing unit and at the end position of the sub-application program of the last stage of picture processing unit. The processing durations corresponding to the respective stages of processing units are: the processing duration corresponding to the input acquisition unit is T_Input; the processing duration corresponding to the calculation and rendering unit is T_APP (including UI thread -> Render thread -> GPU render); the processing duration corresponding to the picture synthesis unit is T_SF (SurfaceFlinger); and the processing duration corresponding to the screen display unit is T_DISP (Display).
In practical application, in each acquisition of the processing duration of the picture processing unit, aiming at a non-final-stage picture processing unit, monitoring a first time point when a sub-application program corresponding to the picture processing unit runs and a second time point when the sub-application program corresponding to a next-stage picture processing unit of the picture processing unit runs through an objective function; determining and collecting processing time length corresponding to the non-last-stage picture processing unit based on the difference value of the first time point and the second time point; monitoring a third time point when the sub application program corresponding to the picture processing unit runs and a fourth time point when the sub application program corresponding to the picture processing unit runs and finishes through an objective function aiming at the picture processing unit at the last stage; and determining and collecting the processing time length corresponding to the picture processing unit of the last stage based on the difference value of the third time point and the fourth time point.
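The per-stage duration computation described above (start-time difference for non-final stages, start-to-end difference for the final stage) can be sketched as follows (hypothetical Python; timestamps are in milliseconds and the values are illustrative):

```python
def stage_durations(stage_start_times, last_stage_end_time):
    """stage_start_times[i] is the time point when the sub-application
    program of stage i starts running; last_stage_end_time is the time
    point when the final stage finishes running.
    Non-final stage i: duration = start of stage i+1 - start of stage i.
    Final stage: duration = its end time - its own start time."""
    durations = [stage_start_times[i + 1] - stage_start_times[i]
                 for i in range(len(stage_start_times) - 1)]
    durations.append(last_stage_end_time - stage_start_times[-1])
    return durations

# e.g. four pipeline stages, start times and final end time in ms
starts_ms = [0.0, 4.0, 10.0, 13.0]
durations_ms = stage_durations(starts_ms, 16.5)  # [4.0, 6.0, 3.0, 3.5]
```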
In some embodiments, each level of the picture processing unit of the application program has a corresponding initial picture processing duration; the terminal can acquire the picture processing duration corresponding to each stage of picture processing unit of the application program in the following way: acquiring the frequency that the historical picture processing duration of each level of picture processing unit exceeds the initial picture processing duration; determining a picture processing unit with the frequency reaching a frequency threshold value in each level of picture processing units as a target picture processing unit; and acquiring the picture processing duration corresponding to the target picture processing unit.
In practical application, the terminal also sets a corresponding initial picture processing duration for each stage of picture processing unit; the initial picture processing duration may also be a default of the operating system (such as the Android system) configured on the terminal, for example, taking the screen refresh time interval (i.e., HW_VSync_Time) as the initial picture processing duration of each stage of picture processing unit.
In practical application, the actual processing duration of some picture processing units exceeds the initial picture processing duration while that of others does not; for the picture processing units whose actual processing duration exceeds the initial picture processing duration, the corresponding picture processing duration needs to be readjusted.
Specifically, the frequency that the historical image processing duration of each level of image processing unit exceeds the initial image processing duration may be obtained, for example, the processing duration of the image frame output by each level of image processing unit may be collected, the historical image processing duration of each level of image processing unit may be obtained multiple times, and then the number of times that the actual processing duration of each level of image processing unit exceeds the initial image processing duration may be determined, so that the frequency that the historical image processing duration of each level of image processing unit exceeds the initial image processing duration may be determined based on the determined number of times and the collected number of times. And then determining the picture processing unit with the frequency reaching the frequency threshold value from all levels of picture processing units as a target picture processing unit, thereby acquiring the picture processing time length corresponding to the target picture processing unit. Thus, the waste of hardware processing resources can be reduced.
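The frequency-based selection of target picture processing units can be sketched as follows (hypothetical Python; the 16.6 ms initial duration assumes a 60 Hz refresh rate, and all names and values are illustrative):

```python
def overrun_frequency(history_ms, initial_duration_ms):
    """Fraction of the collected historical picture processing durations
    that exceed the stage's initial picture processing duration."""
    overruns = sum(1 for d in history_ms if d > initial_duration_ms)
    return overruns / len(history_ms)

def target_units(histories_ms, initial_durations_ms, frequency_threshold):
    """Indices of the stages whose overrun frequency reaches the threshold;
    only these target units get their picture processing duration re-acquired."""
    return [i for i, (hist, init) in
            enumerate(zip(histories_ms, initial_durations_ms))
            if overrun_frequency(hist, init) >= frequency_threshold]

# e.g. two stages, initial duration 16.6 ms each, frequency threshold 0.5
histories = [[18.0, 17.2, 12.0, 19.5], [10.0, 11.0, 12.0, 18.0]]
targets = target_units(histories, [16.6, 16.6], 0.5)  # [0]
```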
In some embodiments, the terminal may obtain the picture processing duration corresponding to each stage of the picture processing unit of the application program by: performing pause time length detection on a plurality of image frames output by an application program and within a target time length, and determining the pause total time length corresponding to the plurality of image frames; determining a pause score corresponding to the application program based on the total pause time and the target time; and when the pause score reaches a pause score threshold value, acquiring the picture processing duration corresponding to each level of picture processing unit of the application program.
In practical applications, the picture frames output by the application program running on the terminal may be detected in real time, at preset time intervals (for example, every 2 hours), or upon a change of the application program currently running on the terminal. When it is detected that the picture frames displayed by the terminal pause (stutter), the queue length of the buffer frame queue and the processing duration corresponding to each stage of picture processing unit need to be adjusted. At this time, the picture processing duration corresponding to each stage of picture processing unit of the application program, that is, the maximum processing duration, may be obtained.
In actual implementation, the detection duration, i.e., the target duration, may be set in advance. Pause duration detection is performed on the plurality of picture frames output by the application program within the target duration, so as to determine the total pause duration corresponding to the plurality of picture frames. Then, a pause score corresponding to the application program is determined based on the total pause duration and the target duration; specifically, the ratio of the total pause duration to the target duration is calculated and taken as the pause score corresponding to the application program, where the pause score may be used to describe the proportion of paused pictures within the target duration, the probability of pausing, and the like. Finally, when the pause score reaches the pause score threshold, the picture processing duration corresponding to each stage of picture processing unit of the application program is acquired, and the subsequent steps 102-103 are executed based on the acquired picture processing duration corresponding to each stage of picture processing unit.
In some embodiments, the terminal may determine the total pause duration corresponding to the plurality of picture frames by: acquiring actual picture processing duration required by the application program to output each picture frame and standard picture processing duration corresponding to the application program to output the picture frames; determining the pause time length corresponding to each picture frame based on the actual picture processing time length required by each picture frame and the standard picture processing time length; and determining the total pause time length corresponding to the plurality of picture frames based on the pause time length corresponding to each picture frame.
Here, when calculating the total pause duration corresponding to the plurality of picture frames within the target duration, the actual picture processing duration required by the application program to output each picture frame and the standard picture processing duration corresponding to the application program outputting a picture frame may be obtained first. Then, the pause duration corresponding to each picture frame is determined based on the actual picture processing duration required by each picture frame and the standard picture processing duration; specifically, the difference between the actual picture processing duration required by each picture frame and the standard picture processing duration is calculated and taken as the pause duration corresponding to that picture frame. Finally, the total pause duration corresponding to the plurality of picture frames is determined based on the pause duration corresponding to each picture frame; specifically, the pause durations corresponding to the respective picture frames are summed, and the summation result is taken as the total pause duration corresponding to the plurality of picture frames.
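The pause-score computation described above can be sketched as follows (hypothetical Python; clamping negative per-frame differences to zero is an assumption, since a frame faster than the standard duration contributes no pause):

```python
def pause_score(actual_durations_ms, standard_duration_ms, target_duration_ms):
    """Per-frame pause duration = actual picture processing duration minus
    the standard picture processing duration (clamped at zero: an assumption
    for frames faster than standard); the score is the total pause duration
    divided by the detection window (target duration)."""
    total_pause = sum(max(a - standard_duration_ms, 0.0)
                      for a in actual_durations_ms)
    return total_pause / target_duration_ms

# e.g. three frames within a 1000 ms detection window, standard = 16.6 ms
score = pause_score([20.0, 16.0, 25.0], 16.6, 1000.0)  # ~0.0118
```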
In some embodiments, the terminal may store the image processing duration corresponding to each stage of image processing unit of the application program to the blockchain network; correspondingly, the terminal can obtain the picture processing duration corresponding to each stage of picture processing unit of the application program in the following way: generating and sending a transaction for acquiring the picture processing duration corresponding to each level of picture processing unit in the block chain network; and receiving the picture processing duration corresponding to each level of picture processing units returned by the block chain network based on the transaction.
Here, the terminal may store the picture processing duration corresponding to each level of the picture processing unit of the application program in the blockchain network, and may directly obtain the picture processing duration corresponding to each level of the picture processing unit of the application program from the blockchain network when the picture processing duration corresponding to each level of the picture processing unit of the application program needs to be obtained. Specifically, a transaction for acquiring the image processing duration corresponding to each level of image processing unit in the blockchain network is generated and sent to the blockchain network; after receiving the transaction, the blockchain network returns the picture processing duration corresponding to each level of picture processing unit of the application program to the terminal; and the terminal receives the picture processing duration corresponding to each level of picture processing unit returned by the block chain network based on the transaction.
Step 102: and determining the queue length of a buffer frame queue required by the application program to output the target picture frame and the target picture processing time length corresponding to each level of picture processing units based on the picture processing time length corresponding to each level of picture processing units and the picture refreshing time interval.
After acquiring the picture processing duration corresponding to each stage of picture processing unit and the picture refresh time interval, the terminal calculates, based on these, the queue length of the buffer frame queue required by the running application program to output the target picture frame and the target picture processing duration corresponding to each stage of picture processing unit.
In some embodiments, based on the picture processing duration and the picture refresh time interval corresponding to each level of the picture processing unit, the terminal may determine the queue length of the buffer frame queue required by the application to output the target picture frame and the target picture processing duration corresponding to each level of the picture processing unit by: determining a target picture processing duration corresponding to a corresponding picture processing unit based on picture processing durations corresponding to all levels of picture processing units and picture refreshing time intervals; and summing the target picture processing durations corresponding to the picture processing units at all levels to obtain a summation result, and determining the queue length of a buffer frame queue required by the application program to output the target picture frame based on the summation result and the picture refreshing time interval.
In practical application, the terminal may determine the target picture processing duration corresponding to the corresponding picture processing unit according to the following formula based on the picture processing durations corresponding to the picture processing units at all levels and the picture refreshing time interval:
PLUD_i = (Floor(PLUT_i-Max / HW_VSync_Time) + 1) * HW_VSync_Time;
the terminal can determine the queue length of a buffer frame queue required by the application program to output the target picture frame based on the target picture processing duration and the picture refreshing time interval corresponding to each stage of picture processing unit through the following formula:
BufferQueueLen = Floor((PLUD_1 + PLUD_2 + PLUD_3 + ... + PLUD_i) / HW_VSync_Time);
wherein PLUT_i-Max is the maximum processing duration measured over the preset number of runs of the i-th stage picture processing unit, and Max() is a function for finding the maximum value; PLUD_i is the target picture processing duration corresponding to the i-th stage picture processing unit; Floor() is a rounding-down function; HW_VSync_Time is the picture refresh time interval; and BufferQueueLen is the queue length of the buffer frame queue.
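The two formulas above can be sketched and checked numerically as follows (hypothetical Python; durations are in integer microseconds to keep the arithmetic exact, and HW_VSync_Time = 16600 µs assumes a 60 Hz screen):

```python
import math

def target_picture_duration(plut_i_max, hw_vsync_time):
    """PLUD_i = (Floor(PLUT_i-Max / HW_VSync_Time) + 1) * HW_VSync_Time:
    the measured maximum processing duration rounded up to the next whole
    VSync interval."""
    return (math.floor(plut_i_max / hw_vsync_time) + 1) * hw_vsync_time

def buffer_queue_len(target_durations, hw_vsync_time):
    """BufferQueueLen = Floor(sum of all PLUD_i / HW_VSync_Time)."""
    return math.floor(sum(target_durations) / hw_vsync_time)

# e.g. measured per-stage maxima (PLUT_i-Max) in microseconds, 60 Hz screen
hw_vsync_us = 16600
maxima_us = [3000, 25000, 8000, 5000]
pluds = [target_picture_duration(m, hw_vsync_us) for m in maxima_us]
# pluds == [16600, 33200, 16600, 16600]; the required queue length is 5
queue_len = buffer_queue_len(pluds, hw_vsync_us)
```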
Step 103: and outputting the target picture frame through each stage of picture processing unit based on the queue length and the target picture processing duration corresponding to each stage of picture processing unit.
After acquiring the queue length of the buffer frame queue required by the application program to output the target picture frame and the target picture processing duration corresponding to each level of picture processing unit, the terminal outputs the target picture frame of the application program through each level of picture processing unit based on the queue length and the target picture processing duration corresponding to each level of picture processing unit.
In some embodiments, the application has an initial queue length corresponding to the buffer frame queue, and each level of picture processing unit has a corresponding initial picture processing duration; the terminal can output the target picture frame through each level of picture processing unit, based on the queue length and the target picture processing duration corresponding to each level of picture processing unit, in the following way: updating the initial queue length to the determined queue length, and updating the initial picture processing duration corresponding to each level of picture processing unit to the corresponding target picture processing duration; and outputting the target picture frame through each level of picture processing unit based on the updated initial queue length and the updated initial picture processing duration corresponding to each level of picture processing unit.
In practical application, the terminal sets an initial queue length of the corresponding buffer frame queue for a running application program, where the initial queue length may be the default of the operating system (such as the Android system) configured on the terminal, for example a triple-buffer queue length; meanwhile, a corresponding initial picture processing duration is also set for each level of picture processing unit, which may likewise be the default of the operating system configured on the terminal, for example taking the picture refreshing time interval (i.e., HWVSyncTime) as the initial picture processing duration of each level of picture processing unit.
Based on the above, after acquiring the queue length of the buffer frame queue required by the application program to output the target picture frame and the target picture processing duration corresponding to each stage of picture processing unit, the terminal updates the initial queue length and the initial picture processing duration corresponding to each stage of picture processing unit. Specifically, the initial queue length is updated to the determined queue length, and the initial picture processing duration corresponding to each stage of picture processing unit is updated to the corresponding target picture processing duration. The terminal can then output the target picture frame through each level of picture processing unit based on the updated initial queue length and the updated initial picture processing duration corresponding to each level of picture processing unit.
In some embodiments, each stage of the picture processing unit includes: an input acquisition unit, a calculation and rendering unit, a picture synthesis unit, and a picture display unit;
the terminal can output the target picture frame through each level of picture processing units based on the queue length and the target picture processing duration corresponding to each level of picture processing units in the following way: when the input acquisition unit detects an input event, the input event is sent to the calculation and rendering unit within the corresponding target picture processing duration; when the calculation and rendering unit receives an input event and detects a first trigger signal, generating a picture frame layer corresponding to the input event within the corresponding target picture processing duration, and adding the picture frame layer into a buffer frame queue with a queue length; when the picture synthesis unit detects a second trigger signal, a target picture frame layer is obtained from the buffer frame queue within the corresponding target picture processing duration, the picture synthesis processing is carried out on the target picture frame layer to obtain a target picture frame, and when the picture synthesis unit detects a third trigger signal, the target picture frame is sent to the picture display unit; and outputting the target picture frame through the picture display unit within the target picture processing duration corresponding to the picture display unit.
Here, the input events include, but are not limited to, input events of the screen, input events corresponding to rotation of the terminal device, and input events entered through an input device such as a mouse, a keyboard, or a game handle. The input events of the screen include, but are not limited to, input events corresponding to touch operations such as a screen click or screen sliding detected by a screen sensor, and the input event corresponding to rotation of the terminal device may be an input event corresponding to rotation of the terminal device detected by an angle sensor.
It should be noted that the first trigger signal may be an application vertical synchronization signal (i.e., APPVSync) used to trigger the calculation and rendering unit to perform its processing operation; the second trigger signal may be a composition vertical synchronization signal (i.e., SFVSync) used to trigger the picture synthesis unit to perform its processing operation; and the third trigger signal may be a hardware vertical synchronization signal (i.e., HWVSync) used to trigger the picture synthesis unit to send the synthesized target picture frame to the picture display unit. In practical applications, the picture frame layer may include the text, graphics, images, tables, etc. corresponding to the generated picture frame. In practical implementation, when multiple applications run simultaneously and the terminal screen needs to present the pictures of the multiple applications at the same time, the terminal may obtain the target picture frame layer of each running application, so as to synthesize the target picture frame based on the obtained target picture frame layers.
In some embodiments, the calculation and rendering unit includes an interface thread and a rendering thread, and the terminal may generate a picture frame layer corresponding to the input event by: when the calculating and rendering unit detects the first trigger signal, generating a picture drawing instruction based on the acquired picture data through the interface thread within the corresponding target picture processing duration; and responding to the picture drawing instruction, calling a graphic processing unit to render the picture data through the rendering thread, and generating a picture frame layer corresponding to the input event.
Here, the calculation and rendering unit includes an interface thread (i.e., a UI thread) and a rendering thread, and the generation of the picture frame layer corresponding to the input event by the calculation and rendering unit is implemented through the UI thread and the rendering thread. Specifically, when the calculation and rendering unit detects the first trigger signal, such as the application vertical synchronization signal (i.e., APPVSync), the interface thread acquires, within the target picture processing duration corresponding to the calculation and rendering unit, the picture data required for generating the picture frame layer, and then generates a picture drawing instruction based on the acquired picture data; the picture drawing instruction is transmitted to the rendering thread, and the rendering thread renders the picture data in response to the picture drawing instruction. Specifically, in order to accelerate the rendering speed and reduce the resource occupation of the Central Processing Unit (CPU), the rendering thread calls a Graphics Processing Unit (GPU) to render the picture data, thereby generating the picture frame layer corresponding to the input event.
In some embodiments, the terminal may obtain the target picture frame layer from the buffer frame queue through the picture synthesis unit in the following way: acquiring the first-in first-out order corresponding to each picture frame layer in the buffer frame queue; and acquiring the target picture frame layer from the buffer frame queue through the picture synthesis unit according to the first-in first-out order.
Here, the buffer frame queue follows a first-in first-out processing order. When the target picture frame layer is obtained from the buffer frame queue through the picture synthesis unit, the picture synthesis unit obtains the first-in first-out order corresponding to each picture frame layer in the buffer frame queue, and obtains the target picture frame layer from the buffer frame queue according to that order. As an example, if the frame layers Frame #1, Frame #2, and Frame #3 are stored in the buffer frame queue in writing order, the picture synthesis unit obtains the frame layers from the buffer frame queue in the order Frame #1, Frame #2, Frame #3.
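The Frame #1 through Frame #3 example above is exactly the behavior of a plain FIFO queue, as this small sketch shows:

```python
from collections import deque

queue = deque()                       # buffer frame queue, first-in first-out
for layer in ("Frame#1", "Frame#2", "Frame#3"):
    queue.append(layer)               # written in this order by the renderer

taken = [queue.popleft() for _ in range(3)]   # synthesis unit takes them out
# taken == ["Frame#1", "Frame#2", "Frame#3"]  -- same order as written
```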
By applying the embodiment of the application, the maximum processing time length and the picture refreshing time interval required by the application program corresponding to each level of picture processing unit are obtained, and further, the queue length of the buffer frame queue required by the application program to output the target picture frame and the target picture processing time length corresponding to each level of picture processing unit are determined based on the maximum processing time length and the picture refreshing time interval corresponding to each level of picture processing unit, so that the target picture frame is output based on the determined queue length and the target picture processing time length corresponding to each level of picture processing unit;
The required queue length of the buffer frame queue and the target picture processing duration corresponding to each stage of picture processing unit are determined based on the maximum processing duration required by each stage of picture processing unit for the application program to output picture frames and on the picture refreshing time interval, so that each picture frame can be guaranteed to be output within the set refreshing time interval, the possibility of picture frame stutter is reduced, and the smoothness of the application program pictures is improved.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
In practical applications, the operating system (such as the Android system) generally includes a triple-buffer frame queue (i.e., the queue length of the buffer frame queue is three buffer frames) and four levels of picture processing units (i.e., the above-mentioned each-level picture processing units). Vertical synchronization is used as a reference, each level of picture processing unit is triggered at fixed time interval points, and a buffer frame is locked by the picture processing units during the process from picture rendering to screen display, so that the concurrent execution of the picture processing units at all levels needs to be supported by the buffer frame queue. Here, the basic time unit of the picture processing units is HWVSyncTime, i.e., the refresh time interval of the display screen of the terminal, and one picture processing unit time (i.e., the picture processing duration corresponding to the above picture processing units at each level) is allocated to each level of picture processing unit by default; for example, one HWVSyncTime may be used as one picture processing unit time.
Referring to fig. 4, fig. 4 is a schematic processing flow diagram of a three-buffer frame queue and a four-level picture processing unit according to an embodiment of the present application. Here, the process of inputting an input event (e.g., a click event) to the display screen is specifically as follows: input capture (Input), application computing and graphics rendering (APP), image Composition (Composition), and Display (Display).
The picture processing duration of each picture processing unit is related to the specific application scene and the system operation. For example, under a fixed triple-buffer frame queue and four levels of picture processing units, when the operation of one level of picture processing unit cannot be completed in time, that level of picture processing unit passively occupies J (J = 1, 2, 3, ...) additional picture processing unit times until the corresponding operation is completed. However, the concurrent execution of the picture processing units is also limited by the queue length of the buffer frame queue, and the triple-buffer frame queue can only allow three four-level system pipelines to run concurrently. Therefore, when the picture processing unit time corresponding to a certain picture processing unit is excessively prolonged, continuous frame stutter occurs because no buffer frame is available, so that the next picture processing unit cannot be started and the picture frame rate is reduced, thereby seriously affecting user experience.
Based on this, embodiments of the present application provide a method for processing a screen of an application program to solve at least the above problems. The picture processing method measures the processing durations of the related calculation, graphics rendering, and system operations that the application program must execute to output a picture frame, and matches the processing duration required by each level of picture processing unit when the application program outputs picture frames accordingly, so as to adaptively adjust the length of the buffer frame queue and the pipeline processing time corresponding to the picture processing units. This ensures that the application program can finish outputting one frame of picture within the set time of the system pipeline, reduces the picture stutter probability, improves the picture frame rate, and realizes system-level optimization of the picture smoothness of the application program.
The method for processing the application program picture can be applied to various intelligent devices, especially heavy load application or high refresh rate screen devices, and can obviously improve the picture fluency of the application program.
The method for processing the screen of the application program provided by the embodiment of the application program can be realized by a screen processing unit timer, a buffer frame queue and a screen processing unit controller, wherein the screen processing unit timer, the buffer frame queue and the screen processing unit controller are integrated in an operating system.
Referring to fig. 5, fig. 5 is a schematic flowchart of a screen processing method of an application according to an embodiment of the present application. Here, the process of inputting an input event (e.g., a click event) to the display screen is specifically as follows: an Input acquisition unit (Input) acquires an Input event and sends the Input event to an application program computing and graphic rendering unit (APP); the application program calculation and graphic rendering unit adds the generated buffer frame into a buffer frame queue; an image synthesis unit SurfaceFlinger (composition) acquires a buffer frame from the buffer frame queue and carries out picture synthesis; the synthesized picture frame is sent to a picture Display unit (Display) to be displayed. The processing time of each stage of picture processing unit is measured through a picture processing unit Timer (PT), the processing time is output, the picture processing unit time of each stage of picture processing unit and the queue length of a buffer frame queue are adjusted through a buffer frame queue and picture processing unit Controller (BPC) based on the measured processing time of each stage of picture processing unit, and therefore picture output and display are conducted based on the Pipeline processing time of each stage of picture processing unit and the queue length of the buffer frame queue after adjustment.
First, a picture processing unit Timer (PT: Pipeline Timer) is used to perform program instrumentation on key function operations in the correlation calculation, graphics rendering, and system operations that need to be performed to output picture frames by an application program, so as to measure the processing time of each stage of picture processing unit, thereby outputting the measured processing time of each stage of picture processing unit. The processing duration of each stage of the image processing unit obtained by the measurement is used for guiding the adjustment strategies of the buffer frame queue and the image processing unit controller.
Taking the operating system as the Android system as an example, referring to fig. 6, fig. 6 is a schematic processing flow diagram of the picture processing unit timer provided in an embodiment of the present application. Here, the Android system includes four levels of picture processing units, and the measurement of the processing duration corresponding to each level of picture processing unit is implemented by running an instrumented Hook function (specifically, a Hook Timestamp) in a key function of each level of picture processing unit. The processing durations corresponding to the picture processing units at all levels are, respectively, the input acquisition time consumption T_Input, the application calculation and rendering time consumption T_APP (UI thread -> Render thread -> GPU render), the picture composition time consumption T_SF (SurfaceFlinger), and the screen display time consumption T_DISP (Display).
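The timestamp-hook idea can be sketched in Python with a decorator standing in for the instrumented Hook function; the stage label and the instrumented function are hypothetical, and only the sampling-and-maximum logic mirrors the text:

```python
import time
from collections import defaultdict

samples = defaultdict(list)   # stage label -> measured durations in ms

def hook_timestamp(stage):
    """Decorator standing in for the Hook Timestamp instrumentation:
    records how long a stage's key function takes on each run."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            samples[stage].append((time.perf_counter() - start) * 1000.0)
            return result
        return inner
    return wrap

@hook_timestamp("T_APP")
def render_frame():
    pass  # stand-in for the UI thread -> Render thread -> GPU render work

for _ in range(60):           # the 60 runs used by the PLUT_i-Max formula below
    render_frame()

plut_app_max = max(samples["T_APP"])   # worst case over the sampling window
```

The maximum over the sampled window is what feeds the adjustment formulas as PLUT_i-Max.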
Secondly, from the picture processing unit timer, the processing duration of each picture processing unit involved in the related calculation, graphics rendering, and system operations required by the application program to output a picture can be known. For example, when the operation time consumption of a certain level of picture processing unit frequently exceeds that level's picture processing unit time, the buffer frame queue and picture processing unit Controller (BPC: BufferQueue & Pipeline Controller) can be used to prolong the pipeline processing time of that level of picture processing unit and to increase the queue length of the buffer frame queue so as to cover the calculation time consumption, thereby ensuring that the application program can complete the output of one frame of image in a pipeline process, reducing the picture stutter probability, and improving the picture frame rate.
In the embodiment of the present application, the picture processing unit time of each level of picture processing unit and the queue length of the buffer frame queue may be adjusted based on the measured processing time of each level of picture processing unit by the following formula:
PLUT_i-Max = Max(t_i-0, t_i-1, t_i-2, ..., t_i-k), k = 1, 2, 3, 4, ..., 60;
PLUD_i = (Floor(PLUT_i-Max / HWVSyncTime) + 1) * HWVSyncTime;
BufferQueueLen = Floor((PLUD_1 + PLUD_2 + PLUD_3 + ... + PLUD_i) / HWVSyncTime);
wherein PLUT_i-Max is the maximum processing duration measured over 60 runs of the i-th level picture processing unit; Max() is a function that returns the maximum value; t_i-k is the processing duration of the k-th measurement of the i-th level picture processing unit; PLUD_i is the adjusted duration corresponding to the i-th level picture processing unit (i.e., the calculated target picture processing duration); Floor() is a rounding-down function; HWVSyncTime is the refresh time interval of the display screen; and BufferQueueLen is the queue length of the buffer frame queue.
For example, taking the triple-buffer frame queue and four levels of picture processing units as an example, for the case where the processing time consumption of the application calculation and graphics rendering unit often slightly exceeds one picture processing unit time, based on the adjustment policy formulas provided in the above embodiments, the controller increases the processing time of the application calculation and graphics rendering unit from one picture processing unit time to two picture processing unit times, and extends the queue length of the buffer frame queue, i.e., adds one buffer frame, to cover the processing time consumption of the application calculation and rendering unit, thereby ensuring that the application program can complete the output of one frame of image in a pipeline process, reducing the frame stutter probability, and improving the game frame rate. Referring to fig. 7, fig. 7 is a schematic processing flow diagram of a four-buffer frame queue and four levels of picture processing units according to an embodiment of the present disclosure, where, compared with fig. 4, the length of the buffer frame queue is increased from three buffer frames to four buffer frames, and the processing duration of the application calculation and graphics rendering unit is increased from one pipeline time unit to two pipeline time units.
Taking an operating system as an Android system as an example, firstly, analyzing related calculation, graphics rendering and system operation steps required by an application program to output a picture frame, and further explaining the action and adjustment process of a buffer frame queue and a picture processing unit controller by combining an actual example on the basis.
The Android system uses a triple-buffer frame queue and four levels of picture processing units, and the process from the input of an input event to the display of a picture on the output screen is specifically as follows: the input acquisition unit captures an input event and sends the input event to the application program; when the application vertical synchronization signal (APPVSync) arrives, the application program starts logic and rendering related work (including UI thread -> Render thread -> GPU render) and submits the buffer frame (such as Frame #1) to the buffer frame queue (i.e., BufferQueue) for management; when the composition vertical synchronization signal (SFVSync) arrives, SurfaceFlinger (corresponding to the picture synthesis unit) takes buffer frames out of the buffer frame queue in first-in first-out order and synthesizes the currently acquired buffer frames of all the application programs; and when the next hardware vertical synchronization signal (HWVSync) arrives, the synthesized picture frame is sent to the screen (i.e., the picture display unit Display) for display.
Taking a light operation load application as an example, the behavior of the buffer frame queue and the picture processing units of the Android system is as shown in fig. 8, where fig. 8 is a schematic processing flow diagram of a three-buffer frame queue and four levels of picture processing units provided in an embodiment of the present application. Here, the input acquisition unit captures an input event and sends it to the application program; when the application vertical synchronization signal (APPVSync) arrives, the application program starts logic and rendering related work (including UI thread -> Render thread -> GPU render) and submits the buffer frame (such as Frame #1) to the buffer frame queue (i.e., BufferQueue) for management, the queue length of the buffer frame queue being three buffer frames; when the composition vertical synchronization signal (SFVSync) arrives, SurfaceFlinger (corresponding to the picture synthesis unit) takes buffer frames out of the buffer frame queue in first-in first-out order and synthesizes the currently acquired buffer frames of all the application programs; and when the next hardware vertical synchronization signal (HWVSync) arrives, the synthesized picture frame is sent to the screen (i.e., the picture display unit Display) for display. For a light operation load application, although the processing time consumption of the application calculation and rendering operation unit fluctuates, the overload condition (i.e., the condition of exceeding one picture processing unit time) does not occur, and the picture output can still be completed within the pipeline flow.
Taking a heavy operation load application as an example, the behavior of the buffer frame queue and the picture processing units of the Android system is as shown in fig. 9, where fig. 9 is a schematic processing flow diagram of a three-buffer frame queue and four levels of picture processing units provided in an embodiment of the present application. Here, the input acquisition unit captures an input event and sends it to the application program; when the application vertical synchronization signal (APPVSync) arrives, the application program starts logic and rendering related work (including UI thread -> Render thread -> GPU render) and submits the buffer frame (such as Frame #1) to the buffer frame queue (i.e., BufferQueue) for management, the queue length of the buffer frame queue being three buffer frames; when the composition vertical synchronization signal (SFVSync) arrives, SurfaceFlinger (corresponding to the picture synthesis unit) takes buffer frames out of the buffer frame queue in first-in first-out order and synthesizes the currently acquired buffer frames of all the application programs; and when the next hardware vertical synchronization signal (HWVSync) arrives, the synthesized picture frame is sent to the screen (i.e., the picture display unit Display) for display. For a heavy operation load application, the processing time of the application calculation and rendering operation unit is long and cannot be completed within one picture processing unit time, so the application program cannot keep up with image composition and a stuttered frame (Jank) appears; that is, when Frame #2 should be output and displayed, Jank occurs, because Frame #2 has not been completed within one picture processing unit time and the picture therefore stays stuck on Frame #1.
Based on the method provided in the foregoing embodiments, the queue length of the buffer frame queue and the processing time of the picture processing units under the heavy operation load application are adjusted, as shown in fig. 10, where fig. 10 is a schematic processing flow diagram of a four-buffer frame queue and four levels of picture processing units provided in an embodiment of the present application. Here, for the heavy operation load application program, since the time consumed by the application calculation and graphics rendering unit frequently exceeds that level's picture processing unit time, according to the adjustment formulas, one picture processing unit time is added for the application calculation and rendering operation unit, and one buffer frame is added to the buffer frame queue, i.e., the triple-buffer frame queue becomes a four-buffer frame queue. Specifically, by modifying the presentation timestamp of the buffer to a point after the current SFVSync, the current SurfaceFlinger composition action is skipped: when the current composition vertical synchronization signal SFVSync arrives, the picture synthesis unit SurfaceFlinger cannot acquire the buffer frame (its presentation timestamp has been modified to a point after this SFVSync); when the next composition vertical synchronization signal SFVSync arrives, SurfaceFlinger can successfully acquire it and execute the picture synthesis operation. In this way, one picture processing unit time is added for the application calculation and rendering operation unit, and one buffer frame is added to the buffer frame queue, so as to cover the calculation and rendering time consumption of the application program, improve the concurrency of the system pipeline, reduce the probability of picture stutter, and improve the frame rate.
Specifically, the input acquisition unit captures an input event and sends the input event to the application program; when the application vertical synchronization signal (APPVSync) arrives, the application program starts logic and rendering related work (including UI thread -> Render thread -> GPU render) and submits the buffer frame (such as Frame #1) to the buffer frame queue (i.e., BufferQueue) for management, the queue length of the buffer frame queue being four buffer frames; when the composition vertical synchronization signal (SFVSync) arrives, SurfaceFlinger (corresponding to the picture synthesis unit) takes buffer frames out of the buffer frame queue in first-in first-out order and synthesizes the currently acquired buffer frames of all the application programs; and when the next hardware vertical synchronization signal (HWVSync) arrives, the synthesized picture frame is sent to the screen (i.e., the picture display unit Display) for display.
Based on the above embodiments, indexes such as the average frame rate AvgFPS, the frame rate stability, the frame rate variance VarFPS, and the stutter counts (including Jank and BigJank) of the application program all show significant improvement; the comparison data of the fluency of the mobile phone terminal (before and after optimization) is shown in fig. 11, where fig. 11 is a schematic diagram of the fluency comparison data of the mobile phone terminal provided in an embodiment of the present application.
By applying the embodiment of the application, the processing durations of the related calculation, graphics rendering, and system operations required by the application program to output a picture frame are measured, and the processing duration required by each level of picture processing unit is matched accordingly, so that the length of the buffer frame queue and the pipeline processing time corresponding to the picture processing units are adaptively adjusted. This ensures that the application program can finish outputting one frame of picture within the set time of the system pipeline, reduces the picture stutter probability, improves the picture frame rate, and realizes system-level optimization of the picture fluency of the application program.
Continuing with the exemplary structure of the application program screen processing device 555 implemented as a software module provided in the embodiments of the present application, in some embodiments, as shown in fig. 2, the software module stored in the application program screen processing device 555 in the memory 550 may include:
an obtaining module 5551, configured to obtain a picture processing duration corresponding to each stage of picture processing unit of an application program and a picture refreshing time interval of the application program;
the picture processing duration is the maximum processing duration required by the corresponding picture processing unit for outputting the picture frame corresponding to the application program;
a determining module 5552, configured to determine, based on the picture processing duration corresponding to each level of the picture processing unit and the picture refreshing time interval, a queue length of a buffer frame queue required by the application to output a target picture frame, and a target picture processing duration corresponding to each level of the picture processing unit;
an output module 5553, configured to output a target picture frame through each stage of the picture processing unit based on the queue length and the target picture processing duration corresponding to each stage of the picture processing unit.
In some embodiments, the obtaining module 5551 is further configured to perform the following processing for each level of the screen processing unit of the application program respectively:
acquiring the target times of processing time required by the picture processing unit to output the picture frame corresponding to the application program;
and taking the maximum processing time length in the acquired processing time lengths of the target times as the picture processing time length corresponding to the picture processing unit.
In some embodiments, the obtaining module 5551 is further configured to perform the following processing for each of the target number of collections:
monitoring, through a target function and for each non-last-stage picture processing unit, a first time point at which the sub application program corresponding to the picture processing unit starts running and a second time point at which the sub application program corresponding to the next-stage picture processing unit starts running;
determining and collecting, based on the first time point and the second time point, the processing duration corresponding to the non-last-stage picture processing unit;
monitoring, through the target function and for the last-stage picture processing unit, a third time point at which the corresponding sub application program starts running and a fourth time point at which it finishes running;
and determining and collecting, based on the third time point and the fourth time point, the processing duration corresponding to the last-stage picture processing unit.
In some embodiments, the obtaining module 5551 is further configured to mount the target function, by program instrumentation, at a target position in the application program corresponding to each level of picture processing unit, so as to monitor, through the target function and based on the target position, the time point at which the sub application program corresponding to each level of picture processing unit starts running and the time point at which it finishes running;
the target position indicates the start position, in the application program, of the sub application program corresponding to each level of picture processing unit, and the end position of the sub application program corresponding to the last-stage picture processing unit.
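One way to picture the instrumentation step: a target function is mounted at the start (and, for the last stage, the end) of each stage's sub-program and records time points. The decorator-based sketch below is an assumed analogue, not the patent's mechanism; `hook`, `timeline`, and the stage functions are illustrative names.

```python
import functools
import time

timeline = {}  # stage name -> list of (label, time point) records

def hook(stage, record_end=False):
    """Mount a recording wrapper at the target position of a stage's sub-program."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # First/third time point: the sub-program starts running.
            timeline.setdefault(stage, []).append(("start", time.perf_counter()))
            result = fn(*args, **kwargs)
            if record_end:
                # Fourth time point: only the last stage records its end.
                timeline[stage].append(("end", time.perf_counter()))
            return result
        return wrapper
    return decorator

@hook("render")
def render_frame():
    pass  # stand-in for a non-last-stage sub-program

@hook("display", record_end=True)
def display_frame():
    pass  # stand-in for the last-stage sub-program
```

For a non-last stage, the processing duration is the gap between its start record and the next stage's start record; for the last stage, it is the gap between its own start and end records.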
In some embodiments, the determining module 5552 is further configured to determine a target picture processing duration corresponding to the corresponding picture processing unit based on the picture processing durations corresponding to the picture processing units at the respective levels and the picture refreshing time interval;
summing the target picture processing durations corresponding to the picture processing units at all levels to obtain a summation result;
and determining, based on the summation result and the picture refreshing time interval, the queue length of the buffer frame queue required by the application program to output the target picture frame.
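The determination step can be sketched under one plausible reading: each stage's target duration is its measured maximum rounded up to a whole number of refresh intervals, and the queue length is the total pipeline depth expressed in refresh intervals. The rounding formula and the name `plan_pipeline` are assumptions; the patent does not fix a formula here.

```python
import math

def plan_pipeline(stage_max_ms, refresh_interval_ms):
    """Derive per-stage target durations and the buffer frame queue length."""
    # Round each stage's maximum up to a whole number of refresh intervals.
    targets = [math.ceil(d / refresh_interval_ms) * refresh_interval_ms
               for d in stage_max_ms]
    # Queue length = summation result divided by the refresh interval.
    queue_length = round(sum(targets) / refresh_interval_ms)
    return targets, queue_length
```

For example, with stage maxima of 5 ms, 20 ms, and 3 ms at a 16 ms refresh interval, the stage budgets become 16 ms, 32 ms, and 16 ms, and four buffer slots are needed.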
In some embodiments, the application program has an initial queue length corresponding to the buffer frame queue, and each level of picture processing unit has a corresponding initial picture processing duration; the output module 5553 is further configured to update the initial queue length to the queue length, and to update the initial picture processing duration corresponding to each stage of picture processing unit to the corresponding target picture processing duration;
and outputting a target picture frame through each stage of picture processing units based on the updated initial queue length and the initial picture processing duration corresponding to each stage of picture processing units.
In some embodiments, the respective stages of picture processing units include: the device comprises an input acquisition unit, a calculation and rendering unit, a picture synthesis unit and a picture display unit; the output module 5553 is further configured to send, when the input acquisition unit detects an input event, the input event to the calculation and rendering unit within a corresponding target screen processing duration;
when the calculating and rendering unit receives the input event and detects a first trigger signal, generating a picture frame layer corresponding to the input event within a corresponding target picture processing duration, and adding the picture frame layer into a buffer frame queue with the queue length;
when the picture synthesis unit detects a second trigger signal, acquiring a target picture frame layer from the buffer frame queue within the corresponding target picture processing duration, and performing layer synthesis processing on the target picture frame layer to obtain the target picture frame;
when the picture synthesis unit detects a third trigger signal, sending the target picture frame to the picture display unit;
and outputting the target picture frame through the picture display unit within the target picture processing duration corresponding to the picture display unit.
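The four-stage flow above can be sketched as a minimal event-driven pipeline. The trigger signals (for example vsync pulses) are simplified to direct method calls, and all names (`FramePipeline`, `on_compose_trigger`, etc.) are illustrative, not the patent's.

```python
from collections import deque

class FramePipeline:
    def __init__(self, queue_length):
        # Buffer frame queue with the determined queue length.
        self.buffer = deque(maxlen=queue_length)
        self.displayed = []

    def on_input(self, event):
        # Input acquisition unit: forward the input event to compute & render.
        self.on_render_trigger(event)

    def on_render_trigger(self, event):
        # Compute & render unit: produce a frame layer and enqueue it.
        self.buffer.append(f"layer({event})")

    def on_compose_trigger(self):
        # Composition unit: take the oldest layer (FIFO) and compose a frame.
        if self.buffer:
            layer = self.buffer.popleft()
            self.on_display_trigger(f"frame<{layer}>")

    def on_display_trigger(self, frame):
        # Display unit: output the composed target frame.
        self.displayed.append(frame)
```

The `deque` with `popleft` also reflects the first-in-first-out acquisition order described for the buffer frame queue.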
In some embodiments, the calculation and rendering unit includes an interface thread and a rendering thread, and the output module 5553 is further configured to generate, by the interface thread, a picture drawing instruction based on the acquired picture data within the corresponding target picture processing duration when the calculation and rendering unit detects the first trigger signal;
and responding to the picture drawing instruction, calling a graphic processing unit to render the picture data through the rendering thread, and generating a picture frame layer corresponding to the input event.
In some embodiments, the output module 5553 is further configured to obtain a first-in first-out sequence corresponding to each picture frame layer in the buffer frame queue;
and acquiring a target picture frame layer from the buffer frame queue through the picture synthesis unit according to the first-in first-out sequence.
In some embodiments, the apparatus further comprises:
the storage module is used for storing the picture processing duration corresponding to each stage of picture processing unit of the application program to a blockchain network;
the acquiring of the picture processing duration corresponding to each stage of picture processing unit of the application program includes:
generating and sending, in the blockchain network, a transaction for acquiring the picture processing duration corresponding to each level of picture processing unit;
and receiving the picture processing duration corresponding to each level of picture processing unit returned by the blockchain network based on the transaction.
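A highly simplified stand-in for the blockchain interaction: durations are stored on a ledger and later retrieved by submitting a read transaction. A real deployment would use a blockchain SDK; this dict-backed `Ledger` class exists only to show the data flow and is entirely hypothetical.

```python
class Ledger:
    """Toy key-value ledger standing in for a blockchain network."""
    def __init__(self):
        self._state = {}

    def submit(self, tx):
        # A "put" transaction stores data; a "get" transaction retrieves it.
        if tx["op"] == "put":
            self._state[tx["key"]] = tx["value"]
            return None
        if tx["op"] == "get":
            return self._state.get(tx["key"])

ledger = Ledger()
# Storage module: persist the per-stage picture processing durations.
ledger.submit({"op": "put", "key": "stage_durations_ms",
               "value": [16.0, 32.0, 16.0]})
# Obtaining module: send a transaction and receive the durations back.
durations = ledger.submit({"op": "get", "key": "stage_durations_ms"})
```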
In some embodiments, each level of picture processing unit of the application program has a corresponding initial picture processing duration; the obtaining module 5551 is further configured to obtain the frequency at which the historical picture processing duration of each level of picture processing unit exceeds the corresponding initial picture processing duration;
determining, among the picture processing units at all levels, a picture processing unit whose frequency reaches a frequency threshold as a target picture processing unit;
and acquiring the picture processing duration corresponding to the target picture processing unit.
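The selection step above can be sketched as follows; `select_target_stages` and its parameters are assumed names for illustration.

```python
def select_target_stages(history, initial_ms, freq_threshold):
    """Pick stages whose historical durations exceed their initial budget too often.

    history:    stage name -> list of past processing durations (ms)
    initial_ms: stage name -> initial picture processing duration (ms)
    """
    targets = []
    for stage, durations in history.items():
        # Fraction of historical samples that exceeded the initial budget.
        freq = sum(d > initial_ms[stage] for d in durations) / len(durations)
        if freq >= freq_threshold:
            targets.append(stage)  # this stage is re-measured
    return targets
```

Only the stages returned here need their picture processing duration re-acquired, which avoids re-measuring stages that are comfortably within budget.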
In some embodiments, the obtaining module 5551 is further configured to perform stutter duration detection on a plurality of picture frames output by the application program within a target duration, and determine a total stutter duration corresponding to the plurality of picture frames;
determining a stutter score corresponding to the application program based on the total stutter duration and the target duration;
and when the stutter score reaches a stutter score threshold, acquiring the picture processing duration corresponding to each level of picture processing unit of the application program.
In some embodiments, the obtaining module 5551 is further configured to acquire the actual picture processing duration required by the application program to output each picture frame, and the standard picture processing duration for the application program to output a picture frame;
determining the stutter duration corresponding to each picture frame based on the actual picture processing duration required by each picture frame and the standard picture processing duration;
and determining the total stutter duration corresponding to the plurality of picture frames based on the stutter duration corresponding to each picture frame.
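The stutter scoring described in the preceding paragraphs can be sketched like this: the per-frame stutter is the amount by which the actual processing duration exceeds the standard one, and the score is taken here as the stuttered fraction of the observation window. The exact scoring formula is an assumption; the patent only fixes the inputs.

```python
def stutter_score(actual_ms, standard_ms, window_ms):
    """Score stutter over a window of frames.

    actual_ms:   actual processing duration of each frame (ms)
    standard_ms: standard processing duration per frame (ms)
    window_ms:   target observation duration (ms)
    """
    # Per-frame stutter: time beyond the standard duration, never negative.
    per_frame = [max(0.0, a - standard_ms) for a in actual_ms]
    total_stutter = sum(per_frame)
    # Assumed score: stuttered share of the observation window.
    return total_stutter / window_ms

score = stutter_score([16.0, 40.0, 16.0, 33.0],
                      standard_ms=16.7, window_ms=1000.0)
```

When this score reaches the threshold, the per-stage measurement described earlier is triggered.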
By applying this embodiment of the present application, the maximum processing duration corresponding to each level of picture processing unit and the picture refreshing time interval of the application program are acquired; based on these, the queue length of the buffer frame queue required by the application program to output the target picture frame and the target picture processing duration corresponding to each level of picture processing unit are determined, and the target picture frame is output based on the determined queue length and target picture processing durations.
Because the required queue length of the buffer frame queue and the target picture processing duration of each stage of picture processing unit are determined from the maximum processing duration required by each stage to output a picture frame and the picture refreshing time interval, each picture frame can be output within the set refreshing time interval, which reduces the possibility of frame stutter and improves the smoothness of the application program's pictures.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and a processor, configured to implement, when executing the executable instructions stored in the memory, the picture processing method for an application program provided by the embodiments of the present application.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the picture processing method for an application program provided by the embodiments of the present application.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for processing the screen of the application program provided by the embodiment of the present application is implemented.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM; or may be various devices including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.