Animation playing method and device, electronic equipment and computer-readable storage medium
1. An animation playing method, characterized in that the method comprises:
acquiring an attribute description file of a target animation, wherein the attribute description file comprises attribute information of animation effects in a plurality of image frames of the target animation;
parsing the attribute description file to obtain attribute information of animation effects of key frames in the plurality of image frames, wherein the key frames are the image frames corresponding to the animation effects in the plurality of image frames;
forming an attribute information group of the animation effect of the key frames in the plurality of image frames based on the attribute information of the animation effect of the key frames in the plurality of image frames, wherein the attribute information group comprises the attribute information of the animation effect in the key frames and the display time corresponding to each key frame;
and rendering a target image element based on the attribute information group so as to play the target animation.
2. The method according to claim 1, wherein said parsing the attribute description file to obtain attribute information of animation effect of key frames in the plurality of image frames comprises:
calling a management animation entry class, and calling a packaging animation component class through the management animation entry class, wherein the management animation entry class is used for providing an animation management entry, and the packaging animation component class is used for packaging the acquired data;
and calling an animation resource acquisition class through the packaging animation component class, and parsing the attribute description file through the animation resource acquisition class to acquire the attribute information of the animation effect of the key frames in the plurality of image frames.
3. The method according to claim 2, wherein the parsing the attribute description file through the animation resource acquisition class to obtain attribute information of animation effects of key frames in the plurality of image frames comprises:
calling an outermost layer data parsing class through the animation resource acquisition class, and acquiring the frame rate of animation effects in the plurality of image frames through the outermost layer data parsing class;
and calling a layer key frame data parsing class, and acquiring the attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class.
4. The method according to claim 3, wherein the obtaining attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class comprises:
calling an attribute conversion data parsing class through the layer key frame data parsing class, calling an attribute parsing base class through the attribute conversion data parsing class, and acquiring attribute information of animation effects of key frames in the plurality of image frames from the attribute description file through the attribute parsing base class;
wherein the attribute conversion data parsing class is used for providing a data conversion entry, and the attribute parsing base class is used for providing a method class for parsing the attribute description file.
5. The method according to claim 4, wherein the obtaining attribute information of animation effect of the key frame in the plurality of image frames from the attribute description file through the attribute parsing base class includes at least one of:
calling a transparency attribute parsing class through the attribute parsing base class, and acquiring transparency attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the transparency attribute parsing class;
calling a displacement attribute parsing class through the attribute parsing base class, and acquiring displacement attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the displacement attribute parsing class;
calling a rotation attribute parsing class through the attribute parsing base class, and acquiring rotation attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the rotation attribute parsing class;
and calling a scaling attribute parsing class through the attribute parsing base class, and acquiring scaling attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the scaling attribute parsing class.
6. The method according to claim 3, wherein the forming the attribute information group of the animation effect of the key frames in the plurality of image frames based on the attribute information of the animation effect of the key frames in the plurality of image frames comprises:
for any key frame, composing the attribute information of the animation effect in the key frame into an initial attribute information group corresponding to the key frame;
and determining an attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame.
7. The method of claim 6, wherein the determining the attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame comprises:
determining display time corresponding to each key frame based on the frame rate of the animation effect in the plurality of image frames;
and replacing the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to each key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
8. An animation playback apparatus, comprising:
an acquisition unit configured to perform acquisition of an attribute description file of a target animation, the attribute description file including attribute information of an animation effect in a plurality of image frames of the target animation;
a parsing unit configured to parse the attribute description file to obtain attribute information of animation effects of key frames in the plurality of image frames, wherein the key frames are the image frames corresponding to the animation effects in the plurality of image frames;
a composition unit configured to perform composition of an attribute information group of an animation effect of a key frame in the plurality of image frames based on attribute information of the animation effect of the key frame in the plurality of image frames, the attribute information group including attribute information of the animation effect in the key frame and a display time corresponding to each key frame;
a rendering unit configured to perform rendering of a target image element based on the attribute information group to play the target animation.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the animation playback method of any one of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the animation playback method of any one of claims 1 to 7.
Background
Attribute animation is an animation form that can extend the objects on which an animation acts and enrich animation effects: an animation effect can be added to each of various elements in an interface, which improves the flexibility of animation production.
At present, when an animation effect is added to an element in an interface, an animation designer can only design the effect in design software and then output a textual description of the designed effect; a developer then writes code to implement the corresponding animation effect based on that textual description. As a result, human-computer interaction efficiency is low, and animation production efficiency is low as well.
Disclosure of Invention
The disclosure provides an animation playing method, an animation playing device, electronic equipment and a computer readable storage medium, which are used for improving human-computer interaction efficiency in an animation production process so as to improve animation production efficiency. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an animation playing method, including:
acquiring an attribute description file of a target animation, wherein the attribute description file comprises attribute information of animation effects in a plurality of image frames of the target animation;
parsing the attribute description file to obtain attribute information of animation effects of key frames in the plurality of image frames, wherein the key frames are the image frames corresponding to the animation effects in the plurality of image frames;
forming an attribute information group of the animation effect of the key frames in the plurality of image frames based on the attribute information of the animation effect of the key frames in the plurality of image frames, wherein the attribute information group comprises the attribute information of the animation effect in the key frames and the display time corresponding to each key frame;
and rendering the target image element based on the attribute information group so as to play the target animation.
According to the method and the device, after the attribute description file of the target animation is acquired, the attribute description file is parsed to obtain the attribute information of the animation effect of the key frames in the plurality of image frames of the target animation, and an attribute information group of the animation effect of the key frames is then formed based on that attribute information, so that the target image element is rendered based on the attribute information group and the target animation is played. A developer does not need to write code manually, which improves human-computer interaction efficiency and thereby animation production efficiency.
In some embodiments, parsing the attribute description file to obtain attribute information of animation effect of the key frames in the plurality of image frames includes:
calling a management animation entry class, and calling a packaging animation component class through the management animation entry class, wherein the management animation entry class is used for providing an animation management entry, and the packaging animation component class is used for packaging the acquired data;
and calling an animation resource acquisition class through the packaging animation component class, and parsing the attribute description file through the animation resource acquisition class to acquire the attribute information of the animation effect of the key frames in the plurality of image frames.
The management animation entry class is called so as to call the packaging animation component class associated with it, and the animation resource acquisition class associated with the packaging animation component class is called through the packaging animation component class, so that the attribute information of the animation effect of the key frames in the plurality of image frames is acquired through the animation resource acquisition class. The attribute information is thus acquired without manual operation by developers, which improves human-computer interaction efficiency and the acquisition efficiency of the attribute information.
In some embodiments, parsing the attribute description file through the animation resource acquisition class to obtain attribute information of animation effects of key frames in the plurality of image frames includes:
calling an outermost layer data parsing class through the animation resource acquisition class, and acquiring the frame rate of animation effects in the plurality of image frames through the outermost layer data parsing class;
and calling a layer key frame data parsing class, and acquiring the attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class.
In the process of acquiring the attribute information through the animation resource acquisition class, the outermost layer data parsing class, a subclass of the animation resource acquisition class, is called to acquire the frame rate of the animation effect in the plurality of image frames, and the layer key frame data parsing class, another subclass of the animation resource acquisition class, is called to acquire the attribute information of the animation effect of the key frames in the plurality of image frames. No manual operation by developers is needed, which improves human-computer interaction efficiency as well as the acquisition efficiency of the frame rate and the attribute information.
In some embodiments, obtaining attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class includes:
calling an attribute conversion data parsing class through the layer key frame data parsing class, calling an attribute parsing base class through the attribute conversion data parsing class, and acquiring attribute information of the animation effect of the key frames in the plurality of image frames from the attribute description file through the attribute parsing base class;
the attribute conversion data parsing class is used for providing a data conversion entry, and the attribute parsing base class is used for providing a method class for parsing the attribute description file.
In the process of acquiring the attribute information through the layer key frame data parsing class, the attribute conversion data parsing class, a subclass of the layer key frame data parsing class, is called, the attribute parsing base class, a subclass of the attribute conversion data parsing class, is called through the attribute conversion data parsing class, and the attribute information of the animation effect of the key frames in the plurality of image frames is then acquired through the attribute parsing base class.
In some embodiments, the obtaining, through the attribute parsing base class, attribute information of animation effects in the plurality of image frames from the format-converted attribute description file includes at least one of the following:
calling a transparency attribute parsing class through the attribute parsing base class, and acquiring transparency attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the transparency attribute parsing class;
calling a displacement attribute parsing class through the attribute parsing base class, and acquiring displacement attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the displacement attribute parsing class;
calling a rotation attribute parsing class through the attribute parsing base class, and acquiring rotation attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the rotation attribute parsing class;
and calling a scaling attribute parsing class through the attribute parsing base class, and acquiring scaling attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the scaling attribute parsing class.
By providing an attribute parsing class for each type of attribute information, the corresponding attribute information can be acquired through these attribute parsing classes and the attribute description file can be parsed without manual operation by developers, which improves human-computer interaction efficiency and the acquisition efficiency of the attribute information.
In some embodiments, forming the attribute information group of the animation effect of the key frames in the plurality of image frames based on the attribute information of the animation effect of the key frames in the plurality of image frames includes:
for any key frame, composing the attribute information of the animation effect in the key frame into an initial attribute information group corresponding to the key frame;
and determining an attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame.
After the initial attribute information group corresponding to each key frame is obtained, the attribute information group is determined by combining it with the frame rate of the animation effect in the plurality of image frames, so that the finally determined attribute information group can be read by the electronic device. The electronic device can then obtain attribute information from the attribute information group by itself to render the target animation without manual operation by developers, which improves human-computer interaction efficiency and thereby animation production efficiency.
In some embodiments, determining the attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame includes:
determining display time corresponding to each key frame based on the frame rate of the animation effect in the plurality of image frames;
and replacing the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to each key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
The frame index of each key frame in the initial attribute information group is replaced with the display time of that key frame determined based on the frame rate, yielding an attribute information group that the electronic device can recognize. The electronic device can then obtain the attribute information from the attribute information group and render the target animation based on it without manual operation by developers, which improves human-computer interaction efficiency and thereby animation production efficiency.
In some embodiments, rendering the target image element based on the attribute information group to play the target animation includes:
associating the attribute information group corresponding to each key frame with the corresponding target image element to obtain an attribute animation list, wherein the attribute animation list comprises attribute information groups corresponding to a plurality of target image elements;
rendering the target image element based on the attribute animation list to play the target animation.
By associating each attribute information group with its corresponding target image element, an attribute animation list including the target image elements and the attribute information corresponding to them is obtained, so that the target image elements can be rendered directly based on the attribute animation list, realizing the rendering of the target animation.
In some embodiments, after rendering the target image element based on the attribute information group to play the target animation, the method further comprises:
calling a management animation entry class, and acquiring the target image element through the management animation entry class;
and calling an animation playing function, and playing the target animation through the animation playing function.
By calling the management animation entry class and using the method functions it provides, the target image element is acquired and the target animation is played without manual operation by developers, which improves human-computer interaction efficiency and thereby animation playing efficiency.
According to a second aspect of the embodiments of the present disclosure, there is provided an animation playback device including:
an acquisition unit configured to perform acquisition of an attribute description file of a target animation, the attribute description file including attribute information of an animation effect in a plurality of image frames of the target animation;
a parsing unit configured to parse the attribute description file to obtain attribute information of animation effects of key frames in the plurality of image frames, wherein the key frames are the image frames corresponding to the animation effects in the plurality of image frames;
a composition unit configured to perform composition of an attribute information group of an animation effect of a key frame in the plurality of image frames based on attribute information of the animation effect of the key frame in the plurality of image frames, the attribute information group including attribute information of the animation effect in the key frame and a display time corresponding to each key frame;
and a rendering unit configured to render the target image element based on the attribute information group so as to play the target animation.
In some embodiments, the parsing unit is configured to call a management animation entry class, and call a packaging animation component class through the management animation entry class, wherein the management animation entry class is used for providing an animation management entry, and the packaging animation component class is used for packaging the acquired data;
the parsing unit is further configured to call an animation resource acquisition class through the packaging animation component class, and parse the attribute description file through the animation resource acquisition class to acquire attribute information of animation effects of key frames in the plurality of image frames.
In some embodiments, the parsing unit includes a first obtaining subunit and a second obtaining subunit;
the first obtaining subunit is configured to call, through the animation resource acquisition class, an outermost layer data parsing class, and obtain, through the outermost layer data parsing class, the frame rate of the animation effect in the plurality of image frames;
the second obtaining subunit is configured to call a layer key frame data parsing class, and obtain attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class.
In some embodiments, the second obtaining subunit includes a calling submodule and an obtaining submodule;
the calling submodule is configured to call, through the layer key frame data parsing class, an attribute conversion data parsing class, and call an attribute parsing base class through the attribute conversion data parsing class;
the obtaining submodule is configured to obtain attribute information of the animation effect of the key frames in the plurality of image frames from the attribute description file through the attribute parsing base class;
the attribute conversion data parsing class is used for providing a data conversion entry, and the attribute parsing base class is used for providing a method class for parsing the attribute description file.
In some embodiments, the obtaining submodule is configured to perform at least one of:
calling a transparency attribute parsing class through the attribute parsing base class, and acquiring transparency attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the transparency attribute parsing class;
calling a displacement attribute parsing class through the attribute parsing base class, and acquiring displacement attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the displacement attribute parsing class;
calling a rotation attribute parsing class through the attribute parsing base class, and acquiring rotation attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the rotation attribute parsing class;
and calling a scaling attribute parsing class through the attribute parsing base class, and acquiring scaling attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the scaling attribute parsing class.
In some embodiments, the composition unit includes a composition subunit and a determining subunit;
the composition subunit is configured to, for any key frame, compose the attribute information of the animation effect in that key frame into an initial attribute information group corresponding to that key frame;
the determining subunit is configured to determine an attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information groups corresponding to the respective key frames.
In some embodiments, the determining subunit is configured to determine, based on the frame rate of the animation effect in the plurality of image frames, the display time corresponding to each key frame; and replace the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to that key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
In some embodiments, the rendering unit is configured to associate the attribute information group corresponding to each key frame with the corresponding target image element to obtain an attribute animation list, where the attribute animation list includes attribute information groups corresponding to a plurality of target image elements; and to render the target image element based on the attribute animation list to play the target animation.
In some embodiments, the apparatus further comprises:
a calling unit configured to call a management animation entry class, and acquire the target image element through the management animation entry class;
and a playing unit configured to call an animation playing function, and play the target animation through the animation playing function.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the animation playing method according to the first aspect or any embodiment of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the animation playing method according to the first aspect or any embodiment of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, including a computer program, which, when executed by a processor, implements the animation playing method according to the first aspect or any embodiment of the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method of playing an animation according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of playing an animation according to an example embodiment.
Fig. 3 is a flowchart illustrating an animation playing method according to an example embodiment.
FIG. 4 is a schematic flow chart diagram illustrating a method of playing an animation according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating an implementation of a method for playing an animation according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an animation playback device according to an example embodiment.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In addition, the information related to the present application is information authorized by the user or sufficiently authorized by each party.
The scheme provided by the embodiments of the present disclosure can be applied to animation scenarios. For example, in the animation production process, an animation designer designs an animation in Adobe After Effects (AE), exports the designed target animation as an attribute description file through the Bodymovin export plug-in of the AE software, and sends the exported attribute description file to a developer. The developer receives the attribute description file on an electronic device, renders the target animation through the scheme provided by the present disclosure, and plays the rendered target animation, thereby realizing automatic production and playing of the target animation.
The attribute description file is a JavaScript Object Notation (JSON) file (hereinafter referred to as a JSON file).
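For orientation, the following is a minimal sketch of what such a JSON attribute description file might contain, loosely following the Bodymovin/Lottie export conventions (the field names "fr", "layers", "ks", "o" and "t" come from that format; the trimmed content and the use of the org.json library are assumptions for illustration, not fixed by this disclosure):

```java
import org.json.JSONArray;
import org.json.JSONObject;

public class AttributeFileSketch {
    public static void main(String[] args) {
        // Hypothetical, heavily trimmed Bodymovin-style export: "fr" is the
        // frame rate, each layer's "ks" transform holds keyframed properties
        // ("o" = opacity here), and each keyframe carries a frame index "t".
        String json = "{\"fr\":24,\"layers\":[{\"nm\":\"layer 1\","
                + "\"ks\":{\"o\":{\"a\":1,\"k\":[{\"t\":0,\"s\":[0]},"
                + "{\"t\":3,\"s\":[100]}]}}}]}";

        JSONObject root = new JSONObject(json);
        double frameRate = root.getDouble("fr");        // 24.0
        JSONArray layers = root.getJSONArray("layers"); // one layer
        System.out.println(frameRate + " fps, " + layers.length() + " layer(s)");
    }
}
```

Here the opacity keyframes at frame indices 0 and 3 are exactly the kind of per-layer, per-key-frame attribute information the method extracts.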
FIG. 1 is a flowchart illustrating an animation playing method according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps.
In step S101, the electronic device acquires an attribute description file of a target animation, the attribute description file including attribute information of animation effects in a plurality of image frames of the target animation.
In step S102, the electronic device parses the attribute description file to obtain attribute information of an animation effect of a key frame in the plurality of image frames, where the key frame is an image frame corresponding to the animation effect in the plurality of image frames.
In step S103, the electronic device composes an attribute information group of the animation effect of the key frame in the plurality of image frames based on the attribute information of the animation effect of the key frame in the plurality of image frames, the attribute information group including the attribute information of the animation effect in the key frame and the display time corresponding to each key frame.
In step S104, the electronic device renders the target image element based on the attribute information group to play the target animation.
According to the scheme provided by the embodiments of the present disclosure, after the attribute description file of the target animation is acquired, the attribute description file is parsed to obtain the attribute information of the animation effect of the key frames in the plurality of image frames of the target animation, and an attribute information group of the animation effect of the key frames is then formed based on that attribute information, so that the target image element is rendered based on the attribute information group and the target animation is played. A developer does not need to write code manually, which improves human-computer interaction efficiency and thereby animation production efficiency.
In some embodiments, parsing the attribute description file to obtain attribute information of animation effect of the key frames in the plurality of image frames includes:
calling a management animation entry class, and calling a packaging animation component class through the management animation entry class, wherein the management animation entry class is used for providing an animation management entry, and the packaging animation component class is used for packaging the acquired data;
and calling an animation resource acquisition class through the packaging animation component class, and parsing the attribute description file through the animation resource acquisition class to acquire the attribute information of the animation effect of the key frames in the plurality of image frames.
In some embodiments, parsing the attribute description file through the animation resource acquisition class to obtain attribute information of animation effects of key frames in the plurality of image frames includes:
calling an outermost layer data parsing class through the animation resource acquisition class, and acquiring the frame rate of animation effects in the plurality of image frames through the outermost layer data parsing class;
and calling a layer key frame data parsing class, and acquiring the attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class.
In some embodiments, obtaining attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class includes:
calling an attribute conversion data parsing class through the layer key frame data parsing class, calling an attribute parsing base class through the attribute conversion data parsing class, and acquiring attribute information of the animation effect of the key frames in the plurality of image frames from the attribute description file through the attribute parsing base class;
the attribute conversion data parsing class is used for providing a data conversion entry, and the attribute parsing base class is used for providing a method class for parsing the attribute description file.
In some embodiments, obtaining the attribute information of the animation effect in the plurality of image frames from the attribute description file through the attribute parsing base class includes at least one of the following:
calling a transparency attribute parsing class through the attribute parsing base class, and acquiring transparency attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the transparency attribute parsing class;
calling a displacement attribute parsing class through the attribute parsing base class, and acquiring displacement attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the displacement attribute parsing class;
calling a rotation attribute parsing class through the attribute parsing base class, and acquiring rotation attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the rotation attribute parsing class;
and calling a scaling attribute parsing class through the attribute parsing base class, and acquiring scaling attribute information of animation effects in the plurality of image frames from the format-converted attribute description file through the scaling attribute parsing class.
In some embodiments, forming the attribute information group of the animation effect of the key frames in the plurality of image frames based on the attribute information of the animation effect of the key frames in the plurality of image frames includes:
for any key frame, composing the attribute information of the animation effect in the key frame into an initial attribute information group corresponding to the key frame;
and determining an attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame.
In some embodiments, determining the attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information group corresponding to each key frame includes:
determining display time corresponding to each key frame based on the frame rate of the animation effect in the plurality of image frames;
and replacing the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to each key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
In some embodiments, rendering the target image element based on the attribute information group to play the target animation includes:
associating the attribute information group corresponding to each key frame with the corresponding target image element to obtain an attribute animation list, wherein the attribute animation list comprises attribute information groups corresponding to a plurality of target image elements;
rendering the target image element based on the attribute animation list to play the target animation.
In some embodiments, after rendering the target image element based on the attribute information group to play the target animation, the method further includes:
calling a management animation entry class, and acquiring the target image element through the management animation entry class;
and calling an animation playing function, and playing the target animation through the animation playing function.
FIG. 1 shows the basic flow of the present disclosure; the scheme provided by the present disclosure is further described below based on a specific implementation. FIG. 2 is a flowchart illustrating an animation playing method according to an exemplary embodiment. As shown in FIG. 2, the method includes the following steps.
In step S201, the electronic device acquires an attribute description file of a target animation, the attribute description file including attribute information of animation effects in a plurality of image frames of the target animation.
Designing an animation in AE software is a layer-based operation: when designing an animation effect, an animation designer creates a new layer in the AE software, adds image elements (such as pictures) to the layer, and sets attribute information of the layer to configure the display effect of those image elements, repeating this to set up a plurality of layers.
The attribute information includes position information, scaling information, rotation information, and transparency information, where the position information is the horizontal and vertical coordinates of the layer, the scaling information is the scaling ratio of the layer, the rotation information is the number of rotations and the rotation angle of the layer, and the transparency information is the transparency of the layer.
Since setting operations in AE are performed on layers, the attribute information in the attribute description file exported by the Bodymovin plug-in of the AE software also corresponds to the individual layers. That is, the attribute information of the animation effect recorded in the attribute description file is organized by layer: the file records the attribute information of each layer in each image frame. For example, if an image frame includes two layers, layer 1 and layer 2, the attribute information of the animation effect in that image frame includes the attribute information of layer 1 and the attribute information of layer 2. A concrete sketch of this per-layer data follows.
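As a concrete shape for the per-layer attribute information just described, a minimal sketch; the class and field names are illustrative, not taken from the disclosure:

```java
// Illustrative container for the attribute information of one layer in one
// key frame: position, scaling, rotation and transparency, as listed above.
public final class LayerAttributeInfo {
    public final float x, y;     // position: horizontal and vertical coordinates
    public final float scale;    // scaling ratio of the layer
    public final float rotation; // rotation angle of the layer, in degrees
    public final float alpha;    // transparency, 0.0 (clear) to 1.0 (opaque)

    public LayerAttributeInfo(float x, float y, float scale,
                              float rotation, float alpha) {
        this.x = x;
        this.y = y;
        this.scale = scale;
        this.rotation = rotation;
        this.alpha = alpha;
    }
}
```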
The attribute description file corresponds to one group of animations or to a plurality of groups of animations; the number of animation groups corresponding to the attribute description file is determined by the number of layers the animation designer builds in the AE software, with one layer corresponding to one group of animations, and the groups of animations are distinguished by animation names.
In some embodiments, after the animation designer exports the attribute description file through the Bodymovin plug-in of the AE software, the attribute description file is stored on the electronic device used by the animation designer, and the animation designer transmits it from that device to the electronic device used by the developer, which thereby acquires the attribute description file. As noted above, the attribute description file is a JSON file.
In step S202, the electronic device parses the attribute description file to obtain attribute information of the animation effect of the key frames in the plurality of image frames, where the key frames are the image frames corresponding to the animation effect in the plurality of image frames.
In some embodiments, the electronic device calls a management animation entry class (AnimationManager), calls a packaging animation component class (AnimationViewWrapper) through the management animation entry class, calls an animation resource acquisition class (AttributeCompositionFactory) through the packaging animation component class, and parses the attribute description file through the animation resource acquisition class to acquire the attribute information of the animation effect of the key frames in the plurality of image frames.
The management animation entry class is associated with the packaging animation component class, and the packaging animation component class is associated with the animation resource acquisition class, so the electronic device can call the packaging animation component class through the management animation entry class and call the animation resource acquisition class through the packaging animation component class, thereby acquiring the attribute information.
The management animation entry class serves as the core logic entry in the attribute information acquisition process and is used for providing an animation management entry. The packaging animation component class serves as a wrapper class for the animation control and is used for packaging the acquired data, that is, encapsulating certain methods of the animation component or certain control flows of the animation effect. The animation resource acquisition class (or attribute composition factory class) is used for providing an animation task (AnimationTask) created from a local file or from the network, together with caching logic, so that the electronic device can carry out animation through the animation resource acquisition class.
The management animation entry class is called so as to call the packaging animation component class associated with it, and the animation resource acquisition class associated with the packaging animation component class is called through the packaging animation component class, so that the attribute information of the animation effect of the key frames in the plurality of image frames is acquired through the animation resource acquisition class. The attribute information is thus acquired without manual operation by developers, which improves human-computer interaction efficiency and the acquisition efficiency of the attribute information. A sketch of this delegation chain follows.
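A minimal sketch of the delegation chain, using the reconstructed class names above; the method names and bodies are illustrative, since the disclosure only fixes the roles of the three classes:

```java
// Entry point: provides the animation management entry.
public class AnimationManager {
    private final AnimationViewWrapper wrapper = new AnimationViewWrapper();

    public void loadAnimation(String descriptionFile) {
        wrapper.load(descriptionFile); // delegate to the wrapper class
    }
}

// Wrapper around the animation control; packages the acquired data.
class AnimationViewWrapper {
    private final AttributeCompositionFactory factory =
            new AttributeCompositionFactory();

    void load(String descriptionFile) {
        factory.fromFile(descriptionFile); // delegate to the resource class
    }
}

// Resource acquisition: creates an animation task from a local file or the
// network, with caching; parsing of the description file starts here.
class AttributeCompositionFactory {
    void fromFile(String descriptionFile) {
        // call the parser classes described below
    }
}
```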
As for the process in which the electronic device parses the attribute description file through the animation resource acquisition class: the animation resource acquisition class includes two subclasses, namely an outermost layer data parsing class (AttributeCompositionParser) and a layer key frame data parsing class (AttributeLayerParser), so the electronic device can call the outermost layer data parsing class and the layer key frame data parsing class through the animation resource acquisition class to acquire the attribute information.
In some embodiments, the electronic device calls the outermost layer data parsing class through the animation resource acquisition class, and obtains the frame rate of the animation effect in the plurality of image frames through the outermost layer data parsing class; it also calls the layer key frame data parsing class, and acquires the attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class.
In the process of acquiring the attribute information through the animation resource acquisition class, the outermost layer data parsing class, a subclass of the animation resource acquisition class, is called to acquire the frame rate of the animation effect in the plurality of image frames, and the layer key frame data parsing class, another subclass of the animation resource acquisition class, is called to acquire the attribute information of the animation effect of the key frames in the plurality of image frames. No manual operation by developers is needed, which improves human-computer interaction efficiency as well as the acquisition efficiency of the frame rate and the attribute information. A sketch of these two parser classes follows.
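A minimal sketch of the two subclasses, again assuming the file follows the Bodymovin-style field names used earlier ("fr" and "layers"); the static-method shape is illustrative:

```java
import org.json.JSONArray;
import org.json.JSONObject;

// Outermost layer data parsing: reads top-level fields such as the frame rate.
class AttributeCompositionParser {
    static double parseFrameRate(JSONObject root) {
        return root.getDouble("fr");
    }
}

// Layer key frame data parsing: walks the layers so that, further down the
// chain, only the frames that actually carry an animation effect (the key
// frames) contribute attribute information.
class AttributeLayerParser {
    static JSONArray parseLayers(JSONObject root) {
        return root.getJSONArray("layers");
    }
}
```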
The layer key frame data parsing class includes a subclass, an attribute conversion data parsing class (AttributeTransformParser), and the attribute conversion data parsing class in turn includes a subclass, an attribute parsing base class (AttributeValueParser). The attribute conversion data parsing class is used for providing a data conversion entry, so that the electronic device can convert the operation information in the attribute description file into attribute information that the electronic device can read; the attribute parsing base class provides a method class for parsing the attribute description file, so that the electronic device can parse the attribute information in the attribute description file through the attribute parsing base class to obtain attribute information of the corresponding type.
That is, the process of obtaining the attribute information of the animation effect of the key frames in the plurality of image frames through the layer key frame data parsing class may be implemented as follows: the electronic device calls the attribute conversion data parsing class through the layer key frame data parsing class, calls the attribute parsing base class through the attribute conversion data parsing class, and obtains the attribute information of the animation effect of the key frames in the plurality of image frames from the attribute description file through the attribute parsing base class.
In the process of acquiring the attribute information through the layer key frame data parsing class, the attribute conversion data parsing class, a subclass of the layer key frame data parsing class, is called, the attribute parsing base class, a subclass of the attribute conversion data parsing class, is called through the attribute conversion data parsing class, and the attribute information of the animation effect of the key frames in the plurality of image frames is then acquired through the attribute parsing base class.
For example, the attribute parsing base class includes a transparency attribute parsing class (AlphaValueParser) for parsing transparency attribute information, a displacement attribute parsing class (TransitionValueParser) for parsing displacement attribute information, a rotation attribute parsing class (RotateValueParser) for parsing rotation attribute information, and a scaling attribute parsing class (ScaleValueParser) for parsing scaling attribute information, so that when the electronic device obtains the attribute information through the attribute parsing base class, at least one of the following implementations is used:
the electronic equipment calls a transparency attribute analysis class through the attribute analysis base class, and obtains transparency attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the transparency attribute analysis class.
The electronic equipment calls a displacement attribute analysis class through the attribute analysis base class, and obtains displacement attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the displacement attribute analysis class.
The electronic equipment calls a rotation attribute analysis class through the attribute analysis base class, and obtains rotation attribute information of animation effects in the image frames from the attribute description file subjected to format conversion through the rotation attribute analysis class.
And the electronic equipment calls a zooming attribute analysis class through the attribute analysis base class, and obtains zooming attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the zooming attribute analysis class.
By providing the attribute analysis classes for acquiring the corresponding attribute information, the corresponding attribute information can be acquired through the attribute analysis classes, the attribute description file can be analyzed, the manual operation of developers is not needed, the human-computer interaction efficiency is improved, and the acquisition efficiency of the attribute information is improved.
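A minimal sketch of the base class and its four subclasses, assuming a simplified, already format-converted transform object whose field names ("o", "p", "r", "s") mirror the Bodymovin export; real keyframed values would need one more level of unwrapping:

```java
import org.json.JSONArray;
import org.json.JSONObject;

// Method base class: parses one attribute type out of the format-converted
// attribute description file.
abstract class AttributeValueParser {
    abstract float[] parse(JSONObject transform);
}

// Transparency attribute parsing class.
class AlphaValueParser extends AttributeValueParser {
    @Override float[] parse(JSONObject t) {
        return new float[] { (float) t.getDouble("o") };
    }
}

// Displacement attribute parsing class: horizontal and vertical coordinates.
class TransitionValueParser extends AttributeValueParser {
    @Override float[] parse(JSONObject t) {
        JSONArray p = t.getJSONArray("p");
        return new float[] { (float) p.getDouble(0), (float) p.getDouble(1) };
    }
}

// Rotation attribute parsing class.
class RotateValueParser extends AttributeValueParser {
    @Override float[] parse(JSONObject t) {
        return new float[] { (float) t.getDouble("r") };
    }
}

// Scaling attribute parsing class.
class ScaleValueParser extends AttributeValueParser {
    @Override float[] parse(JSONObject t) {
        JSONArray s = t.getJSONArray("s");
        return new float[] { (float) s.getDouble(0), (float) s.getDouble(1) };
    }
}
```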
In the process of parsing the attribute description file to obtain the attribute information, for image frames that do not involve an animation effect, the electronic device filters out the corresponding attribute information, so that the attribute information finally obtained corresponds to the key frames that do involve the animation effect.
The management animation entry class can provide two animation playing modes: playing a default animation and playing a specified animation. In the default-animation mode, if the acquired attribute information contains no attribute information corresponding to some image element, the electronic device can render that image element with a default animation effect through the management animation entry class so as to play the default animation; for an image element that does have corresponding attribute information, the electronic device can render the image element into the specified animation corresponding to the attribute information through the management animation entry class so as to play the specified animation, thereby rendering the entire target animation and playing the target animation.
Therefore, filtering out the attribute information corresponding to image frames that do not involve an animation effect does not affect the animation playing process, reduces the processing pressure on the electronic device, and improves the animation rendering speed and the animation playing speed.
In step S203, for any key frame, the electronic device composes the attribute information of the animation effect in that key frame into an initial attribute information group corresponding to that key frame.
In some embodiments, after acquiring the attribute information of the animation effect of the key frames from the attribute description file, the electronic device groups the attribute information corresponding to the same key frame into one attribute information group, which serves as the initial attribute information group corresponding to that key frame.
Combining the attribute information corresponding to the same key frame into an initial attribute information group per key frame lets the electronic device determine which pieces of attribute information belong to the same key frame, so that a complete animation effect can be formed when the animation is subsequently played.
In the initial attribute information group, the frame number is used as the index; that is, each piece of attribute information corresponds to the frame number of its key frame.
In step S204, the electronic device determines an attribute information group of the animation effect of the key frames in the plurality of image frames based on the frame rate of the animation effect in the plurality of image frames and the initial attribute information groups corresponding to the respective key frames.
In some embodiments, the electronic device determines the display time corresponding to each key frame based on the frame rate of the animation effect in the plurality of image frames, and replaces the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to that key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
For example, if the frame rate of the animation effect in the plurality of image frames is 24 hertz (Hz), that is, 24 frames are played per second, the display time of the image frame with the frame number of 3 (that is, the 3rd image frame) is 3/24 second, namely 125 milliseconds (ms).
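The conversion in this example amounts to displayTimeMs = frameNumber / frameRate × 1000; a small Java sketch of it follows (the class and method names are assumed for illustration).

```java
public final class FrameTime {
    /** displayTimeMs = frameIndex / frameRate * 1000; frame 3 at 24 Hz gives 125 ms. */
    public static long displayTimeMs(int frameIndex, float frameRateHz) {
        return Math.round(frameIndex / frameRateHz * 1000f);
    }

    public static void main(String[] args) {
        System.out.println(displayTimeMs(3, 24f)); // prints 125
    }
}
```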
The frame indexes of the key frames in the initial attribute information group are replaced with the display times of the key frames determined based on the frame rate, so that an attribute information group that the electronic device can directly interpret is obtained. The electronic device can then obtain the attribute information from the attribute information group and render the target animation based on it, without manual operation by developers, which improves human-computer interaction efficiency and, in turn, animation production efficiency.
In the above steps S203 to S204, after the initial attribute information group corresponding to each key frame is obtained, the attribute information group is determined by combining it with the frame rate of the animation effects in the plurality of image frames. The finally determined attribute information group can therefore be read by the electronic device, which obtains the attribute information from it by itself to render the target animation without manual operation by a developer, improving human-computer interaction efficiency and further improving animation production efficiency.
In step S205, the electronic device associates the attribute information group corresponding to each key frame with the corresponding target image element, so as to obtain an attribute animation list, where the attribute animation list includes attribute information groups corresponding to a plurality of target image elements.
In some embodiments, the electronic device adds the corresponding target image element, that is, the image element to which the animation effect is to be applied, to the attribute information group corresponding to each key frame, so as to obtain an attribute animation list including the target image elements and the corresponding attribute information groups.
Associating the attribute information groups with their corresponding target image elements yields an attribute animation list that includes the target image elements and the attribute information corresponding to them, so that the target image elements can be rendered directly based on the attribute animation list to realize the rendering of the target animation.
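A hedged sketch of this association follows: the attribute animation list entry is modeled as a binding between a target image element (android.view.View) and its attribute information groups; the class and field names are assumptions, not the actual structures of the disclosure.

```java
import android.view.View;
import java.util.List;

public final class AttributeAnimationEntry {
    public final View target;                  // image element to be animated
    public final List<Object> attributeGroups; // attribute groups keyed by display time

    public AttributeAnimationEntry(View target, List<Object> attributeGroups) {
        this.target = target;
        this.attributeGroups = attributeGroups;
    }
}
```

Because the association is a plain reference, several such entries can point at the same View, which corresponds to mounting multiple script pendants on one target image element as described below.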
In some embodiments, after obtaining the attribute animation list, the electronic device returns the obtained attribute animation list to the attribute analysis base class; returns it through the attribute analysis base class to the attribute conversion data analysis class; returns it through the attribute conversion data analysis class to the layer key frame data analysis class; returns it through the layer key frame data analysis class to the outermost layer data analysis class; returns it through the outermost layer data analysis class to the animation resource acquisition class; returns it through the animation resource acquisition class to the package animation component class; and returns it through the package animation component class to the management animation entry class, so that the parsing of the attribute description file yields the attribute animation list.
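This call-and-return chain can be pictured as a stack of delegating layers. The sketch below follows the layer names of the disclosure, but every class and method signature here is an assumption made for illustration.

```java
final class LayerKeyFrameParser {                      // layer key frame data analysis class
    Object parse(String file) { return new Object(); } // builds the attribute animation list
}
final class OutermostParser {                          // outermost layer data analysis class
    private final LayerKeyFrameParser layer = new LayerKeyFrameParser();
    Object parse(String file) { return layer.parse(file); }
}
final class ResourceAcquirer {                         // animation resource acquisition class
    private final OutermostParser outer = new OutermostParser();
    Object parse(String file) { return outer.parse(file); }
}
final class AnimationComponent {                       // package animation component class
    private final ResourceAcquirer resource = new ResourceAcquirer();
    Object parse(String file) { return resource.parse(file); }
}
public final class AnimationManager {                  // management animation entry class
    private final AnimationComponent component = new AnimationComponent();
    public Object parse(String file) { return component.parse(file); }
}
```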
The attribute animation list can be regarded as a script pendant, and associating the attribute animation list with the target image element amounts to mounting the script pendant on the target image element, so that the target image element is rendered into the corresponding animation effect. In other embodiments, one target image element is associated with a plurality of attribute animation lists, that is, a plurality of script pendants are mounted on one target image element, so that the target image element can realize multiple animation effects simultaneously, which improves the display effect of the target image element and the diversity of the animation playing process.
In step S206, the electronic device renders the target image element based on the attribute animation list to play the target animation.
In some embodiments, when the target animation is played, the electronic device calls the management animation entry class, acquires the target image element through the management animation entry class, calls the animation playing function, and plays the target animation through the animation playing function.
When the target image element is obtained through the management animation entry class, the electronic device obtains the target image element through the instance obtaining function getInstance (), and then plays the target animation through the animation playing function.
The animation playing function is playAnimation(button, "data.json", "btn_scale"), where button represents the image element (such as a control) on which the animation is to be played, the image element being any customized rectangular area (View); "data.json" represents the acquired attribute description file; and "btn_scale" represents the acquired attribute animation list, that is, the animation to be played.
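Based only on the two function names quoted above, a hedged usage sketch might look as follows; the enclosing singleton class and the exact signatures are assumptions beyond what the disclosure states.

```java
import android.view.View;

public final class AnimationEntryPoint {
    private static final AnimationEntryPoint INSTANCE = new AnimationEntryPoint();

    /** Instance obtaining function, as quoted in the description. */
    public static AnimationEntryPoint getInstance() {
        return INSTANCE;
    }

    /** Plays the animation named animationKey from descriptionFile on the given view. */
    public void playAnimation(View button, String descriptionFile, String animationKey) {
        // Parse descriptionFile, look up animationKey in the resulting attribute
        // animation list, and render the view; the rendering itself is omitted here.
    }
}
```

A caller would then write AnimationEntryPoint.getInstance().playAnimation(button, "data.json", "btn_scale").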
By calling the management animation entry class and then the method functions within it, the target image element is obtained and the target animation is played without manual operation by developers, which improves human-computer interaction efficiency and thus animation playing efficiency.
Referring to fig. 3 for the process from step S201 to step S206, fig. 3 is a flowchart of an animation playing method shown according to an exemplary embodiment. After the electronic device acquires the attribute description file through step S201, it hierarchically parses the attribute information through step S202 to acquire a plurality of types of attribute information, generates the attribute information group of each key frame through step S203, converts the frame number in the attribute information group into a display time in milliseconds according to the frame rate through step S204, adds the bound View to the attribute information group through step S205 to generate the attribute animation list, returns the attribute animation list to the management animation entry class, and plays the target animation through the management animation entry class.
The scheme provided by the embodiments of the disclosure can be applied to various operating systems, platforms, or frameworks, such as the Android operating system, the Apple mobile operating system (iOS), the World Wide Web (Web), and the cross-platform mobile application development framework React Native (RN). Referring to fig. 4, which is a schematic diagram of the principle flow of an animation playing method according to an exemplary embodiment, after an animation designer exports an attribute description file in JSON format through the Bodymovin plug-in of AE software, the electronic device automatically parses the attribute description file through the scheme provided by the disclosure to generate an attribute animation list, and then associates the attribute animation list with the target image element, so that the target animation can be played on Android, iOS, Web, and RN.
In some embodiments, after designing an animation in AE software, the animation designer also exports the designed animation as a video file in MPEG-4 (MP4) format and sends the video file to a developer. The developer plays the target animation using the scheme provided by the present disclosure and reviews it against the video file, bringing the target animation online once its animation effects are consistent with those in the video file.
Referring to fig. 5 for the contents involved in the above steps S201 to S206, fig. 5 is a flowchart illustrating an implementation of an animation playing method according to an exemplary embodiment. After an animation designer designs an animation, the attribute description file is exported through the Bodymovin plug-in, and the electronic device directly plays the target animation through the scheme provided by the present disclosure. The developer then performs a design check based on the played target animation and the MP4 file provided by the animation designer, that is, checks whether the effect of the target animation is consistent with the animation effect in the MP4 file, and the target animation goes online when the effects are consistent.
According to the scheme provided by the embodiments of the disclosure, after the attribute description file of the target animation is acquired, the attribute description file is parsed to obtain the attribute information of the animation effects of the key frames in the plurality of image frames of the target animation, and an attribute information group of the animation effects of the key frames is formed based on that attribute information, so that the target image element is rendered based on the attribute information group and the target animation is thereby rendered and played. Developers do not need to manually write code, which improves human-computer interaction efficiency and, in turn, animation production efficiency.
In addition, after completing the animation design in AE software, the animation designer can export the attribute description file of the designed target animation through the Bodymovin plug-in; the scheme provided by the embodiments of the disclosure then automatically parses the attribute information and packages it into the corresponding attribute animation list, so that the target animation is played after the attribute animation list is associated with the target image element.
Fig. 6 is a block diagram illustrating an animation playback device according to an example embodiment. Referring to fig. 6, the apparatus includes:
an acquisition unit 601 configured to perform acquisition of an attribute description file of a target animation, the attribute description file including attribute information of animation effects in a plurality of image frames of the target animation;
a parsing unit 602 configured to parse the attribute description file to obtain attribute information of the animation effects of the key frames in the plurality of image frames, where the key frames are the image frames corresponding to the animation effects in the plurality of image frames;
a composing unit 603 configured to perform composing of an attribute information group of the animation effect of the key frame among the plurality of image frames based on attribute information of the animation effect of the key frame among the plurality of image frames, the attribute information group including attribute information of the animation effect in the key frame and display time corresponding to each key frame;
a rendering unit 604 configured to perform rendering of the target image element based on the attribute information set to play the target animation.
According to the device provided by the embodiments of the disclosure, after the attribute description file of the target animation is acquired, the attribute description file is parsed to obtain the attribute information of the animation effects of the key frames in the plurality of image frames of the target animation, and an attribute information group of the animation effects of the key frames is formed based on that attribute information, so that the target image element is rendered based on the attribute information group and the target animation is thereby rendered and played. Developers do not need to manually write code, which improves human-computer interaction efficiency and, in turn, animation production efficiency.
In some embodiments, the parsing unit 602 is configured to invoke a management animation entry class, and invoke a package animation component class through the management animation entry class, where the management animation entry class is used to provide an animation management entry, and the package animation component class is used to package the acquired data;
the parsing unit 602 is further configured to call an animation resource acquisition class through the package animation component class, and parse the attribute description file through the animation resource acquisition class to obtain the attribute information of the animation effects of the key frames in the plurality of image frames.
In some embodiments, the parsing unit 602 includes a first obtaining subunit and a second obtaining subunit;
the first obtaining subunit is configured to call an outermost layer data analysis class through the animation resource acquisition class, and obtain, through the outermost layer data analysis class, the frame rate of the animation effects in the plurality of image frames;
the second obtaining subunit is configured to call a layer key frame data analysis class, and obtain the attribute information of the animation effects of the key frames in the plurality of image frames through the layer key frame data analysis class.
In some embodiments, the second obtaining subunit includes a calling submodule and an obtaining submodule;
the calling submodule is configured to call an attribute conversion data analysis class through the layer key frame data analysis class, and call an attribute analysis base class through the attribute conversion data analysis class;
the obtaining submodule is configured to obtain the attribute information of the animation effects of the key frames in the plurality of image frames from the attribute description file through the attribute analysis base class;
the attribute conversion data analysis class is used for providing a data conversion entry, and the attribute analysis base class is used for providing a method class for analyzing the attribute description file.
In some embodiments, the obtaining submodule is configured to perform at least one of the following (a hedged code sketch of these four analysis classes is given after this list):
calling a transparency attribute analysis class through the attribute analysis base class, and acquiring transparency attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the transparency attribute analysis class;
calling a displacement attribute analysis class through the attribute analysis base class, and acquiring displacement attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the displacement attribute analysis class;
calling a rotation attribute analysis class through the attribute analysis base class, and acquiring rotation attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the rotation attribute analysis class;
and calling a zooming attribute analysis class through the attribute analysis base class, and acquiring zooming attribute information of animation effects in the plurality of image frames from the attribute description file subjected to format conversion through the zooming attribute analysis class.
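The four analysis classes named above can be sketched as subclasses of a shared base class. In the Java sketch below, the JSON keys "o", "p", "r", and "s" follow the convention of Bodymovin/Lottie transform exports and are an assumption here, as are all class and method names.

```java
import org.json.JSONObject;

abstract class AttributeParser {               // attribute analysis base class
    abstract String key();                     // JSON key this parser reads
    Object parse(JSONObject transform) {
        return transform.opt(key());           // shared parsing entry point
    }
}
class OpacityParser  extends AttributeParser { String key() { return "o"; } } // transparency
class PositionParser extends AttributeParser { String key() { return "p"; } } // displacement
class RotationParser extends AttributeParser { String key() { return "r"; } } // rotation
class ScaleParser    extends AttributeParser { String key() { return "s"; } } // zooming/scale
```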
In some embodiments, the composing unit 603 includes a composing subunit and a determining subunit;
the composing subunit is configured to compose, for any key frame, the attribute information of the animation effect in that key frame into an initial attribute information group corresponding to that key frame;
the determining subunit is configured to determine the attribute information group of the animation effects of the key frames in the plurality of image frames based on the frame rate of the animation effects in the plurality of image frames and the initial attribute information groups corresponding to the respective key frames.
In some embodiments, the determining subunit is configured to perform determining, based on the frame rates of the animation effects in the plurality of image frames, display times corresponding to the respective keyframes; and replacing the frame index corresponding to each key frame in the initial attribute information group with the display time corresponding to each key frame to obtain the attribute information group of the animation effect of the key frames in the plurality of image frames.
In some embodiments, the rendering unit is configured to perform associating the attribute information group corresponding to each key frame with the corresponding target image element to obtain an attribute animation list, where the attribute animation list includes attribute information groups corresponding to a plurality of target image elements; rendering the target image element based on the attribute animation list to play the target animation.
In some embodiments, the apparatus further comprises:
a calling unit configured to call a management animation entry class, and acquire the target image element through the management animation entry class;
and a playing unit configured to call an animation playing function, and play the target animation through the animation playing function.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. The electronic device 700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, the electronic device 700 includes: one or more processors 701 and one or more memories 702.
The processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 702 is used to store at least one program code for execution by the processor 701 to implement the animation playback method provided by the method embodiments of the present disclosure.
In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited by this disclosure.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, providing the front panel of the electronic device 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the electronic device 700 or in a folding design; in still other embodiments, the display screen 705 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 700. Furthermore, the display screen 705 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of an electronic apparatus, and a rear camera is disposed on a rear surface of the electronic apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the electronic device 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the electronic device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the electronic device 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the user with respect to the electronic device 700. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of the electronic device 700 and/or at a lower layer of the display screen 705. When the pressure sensor 713 is disposed on the side bezel of the electronic device 700, a user's holding signal on the electronic device 700 can be detected, and the processor 701 may perform left-right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the display screen 705, the processor 701 controls the operability controls on the UI according to the user's pressure operation on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint, and the processor 701 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 714 (alternatively, the fingerprint sensor 714 itself identifies the user's identity from the collected fingerprint). When the user's identity is identified as trusted, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical button or a vendor logo is provided on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical button or vendor logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is adjusted down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 716 is used to capture the distance between the user and the front of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the electronic device 700 gradually decreases, the processor 701 controls the display screen 705 to switch from the bright screen state to the dark screen state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the display screen 705 to switch from the dark screen state back to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the electronic device 700 and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 702 comprising instructions, executable by the processor 701 of the electronic device 700 to perform the animation playback method described above is also provided. Alternatively, the computer-readable storage medium is a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer program instructions, which are executed by a processor of an electronic device, to implement the animation playback method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.