Picture display method, device, terminal and storage medium

Document No.: 8486 Publication date: 2021-09-17

1. A method for displaying a picture, the method being performed by a first terminal, the method comprising:

displaying a target interface, wherein the target interface comprises a first picture display area and a second picture display area;

displaying a first picture part in a first target picture in the first picture display area; the first target picture is a shooting picture at the first terminal side, and the mapping position of a second picture part in the first target picture in the target interface is in the second picture display area;

displaying a partial picture in a second target picture in the second picture display area; the second target picture is a shooting picture at a second terminal side;

in response to the second picture part having a first target object, displaying at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area; the first target object is a target object in the first target picture.

2. The method according to claim 1, wherein the displaying, in response to the first target object being present in the second picture part, at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area comprises:

carrying out image recognition on the second picture part, and dividing the second picture part into a background part and a target contour part;

determining a recognition object corresponding to the target contour part based on the target contour part;

in response to determining that the recognition object corresponding to the target contour part is the first target object, displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the second picture display area.

3. The method of claim 2, wherein the determining the recognition object corresponding to the target contour part based on the target contour part comprises:

performing edge enhancement processing on the target contour part to obtain a processed target contour part;

and determining the recognition object based on the processed target contour part.

4. The method according to claim 2, wherein the displaying, in response to determining that the recognition object corresponding to the target contour part is the first target object, at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the second picture display area comprises:

in response to the recognition object being the first target object, acquiring the avatar corresponding to the first target object based on a pre-stored correspondence or a specified algorithm;

and displaying the avatar corresponding to the first target object in the second picture display area in an overlapping manner.

5. The method according to claim 1, wherein the displaying, in response to the first target object being present in the second picture part, at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area comprises:

in response to the second picture part having the first target object therein and the target interface including at least two of the second picture display areas, determining a target picture display area based on the mapping position of the second picture part in the target interface; the target picture display area is at least one of the at least two second picture display areas;

and displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the target picture display area.

6. The method of any of claims 1 to 5, further comprising:

in response to the first target object containing a target part and a partial picture in the second target picture comprising a target area, displaying a special effect animation on the target interface; the target area is an area corresponding to a designated part obtained by image recognition.

7. The method of claim 6, wherein the displaying a special effect animation on the target interface in response to the first target object containing a target part and the partial picture in the second target picture comprising a target area comprises:

performing image recognition on the partial picture in the second target picture displayed in the second picture display area in response to the first target object containing the target part;

acquiring the target area on a partial picture in the second target picture;

and displaying the special effect animation in response to the target part contained on the first target object and the target area in the second target picture satisfying a specified positional relationship.

8. The method according to claim 1, wherein, prior to the displaying of at least one of the first target object in the second picture part and an avatar corresponding to the first target object in an overlapping manner in the target picture display area, the method further comprises:

transmitting the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area to a server, so that the second terminal acquires, from the server, the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area, and displays the first picture part, the second picture part, and the avatar of the first target object.

9. The method of claim 1, further comprising:

displaying the first picture part and an avatar of a second target object in the first picture display area, and displaying the second picture part in the second picture display area, in response to receiving the second picture part, the avatar of the second target object, and a mapping position of the avatar of the second target object in the first picture display area transmitted by a server; the second target object is the target object in the second target picture.

10. A picture display apparatus, wherein the apparatus is used in a first terminal, the apparatus comprising:

an interface display module, configured to display a target interface, wherein the target interface comprises a first picture display area and a second picture display area;

a first picture display module, configured to display a first picture part in a first target picture in the first picture display area; the first target picture is a shooting picture at the first terminal side, and the mapping position of a second picture part in the first target picture in the target interface is in the second picture display area;

a second picture display module, configured to display a partial picture in a second target picture in the second picture display area; the second target picture is a shooting picture at a second terminal side;

a first avatar overlapping module, configured to, in response to a first target object existing in the second picture part, display at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area; the first target object is a target object in the first target picture.

11. A computer device comprising a processor and a memory, said memory storing at least one instruction, at least one program, a set of codes, or a set of instructions, said at least one instruction, said at least one program, said set of codes, or said set of instructions being loaded and executed by said processor to implement the picture display method according to any one of claims 1 to 9.

12. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the picture display method according to any one of claims 1 to 9.

Background

At present, as technology in the live-broadcast field matures, a video co-streaming (mic-linking) function can be used during live broadcasting to increase interaction between anchors.

In the related art, one anchor initiates a video co-streaming request to other anchors. After the other anchors accept the request, a co-streaming live interface is displayed both on the terminals of the anchors participating in the co-stream and on the terminals of viewers watching the participating live rooms, and the live interface displays a spliced picture composed of the independent live pictures corresponding to each anchor.

However, with this method of video co-streaming, the picture displayed on the terminal is merely a splice of each anchor's independent live picture, so the interaction effect between anchors falls short of what the video co-streaming function is expected to provide.

Disclosure of Invention

The embodiments of the present application provide a picture display method, apparatus, terminal, and storage medium, which can improve the interaction effect of pictures. The technical solution is as follows:

in one aspect, a method for displaying a picture is provided, where the method is performed by a first terminal, and the method includes:

displaying a target interface, wherein the target interface comprises a first picture display area and a second picture display area;

displaying a first picture part in a first target picture in the first picture display area; the first target picture is a shooting picture at the first terminal side, and the mapping position of a second picture part in the first target picture in the target interface is in the second picture display area;

displaying a partial picture in a second target picture in the second picture display area; the second target picture is a shooting picture at a second terminal side;

in response to the second picture part having a first target object, displaying at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area; the first target object is a target object in the first target picture.

In one possible implementation manner, the displaying, in response to the first target object existing in the second picture part, at least one of the first target object in the second picture part and the avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area includes:

performing image recognition on the second picture part, and dividing the second picture part into a background part and a target contour part;

determining a recognition object corresponding to the target contour part based on the target contour part;

in response to determining that the recognition object corresponding to the target contour part is the first target object, displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the second picture display area.

In a possible implementation manner, the determining, based on the target contour part, the recognition object corresponding to the target contour part includes:

performing edge enhancement processing on the target contour part to obtain a processed target contour part;

and determining the recognition object based on the processed target contour part.

In one possible implementation, the displaying, in response to determining that the recognition object corresponding to the target contour part is the first target object, at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the second picture display area includes:

in response to the recognition object being the first target object, acquiring the avatar corresponding to the first target object based on a pre-stored correspondence or a specified algorithm;

and displaying the avatar corresponding to the first target object in the second picture display area in an overlapping manner.

In one possible implementation manner, the displaying, in response to the existence of the first target object in the second picture part, at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area includes:

in response to the second picture part having the first target object therein and the target interface including at least two of the second picture display areas, determining a target picture display area based on the mapping position of the second picture part in the target interface; the target picture display area is at least one of the at least two second picture display areas;

and displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the target picture display area.

In one possible implementation, the method further includes:

in response to the first target object containing a target part and a partial picture in the second target picture comprising a target area, displaying a special effect animation on the target interface; the target area is an area corresponding to a designated part obtained by image recognition.

In one possible implementation manner, the displaying a special effect animation on the target interface in response to the first target object containing a target part and a partial picture in the second target picture comprising a target area includes:

performing image recognition on the partial picture in the second target picture displayed in the second picture display area in response to the first target object containing the target part;

acquiring the target area on a partial picture in the second target picture;

and displaying the special effect animation in response to the target part contained on the first target object and the target area in the second target picture satisfying a specified positional relationship.

In a possible implementation manner, before the displaying of at least one of the first target object in the second picture part and an avatar corresponding to the first target object in an overlapping manner in the target picture display area, the method further includes:

transmitting the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area to a server, so that the second terminal acquires, from the server, the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area, and displays the first picture part, the second picture part, and the avatar of the first target object.

In one possible implementation, the method further includes:

displaying the first picture part and an avatar of a second target object in the first picture display area, and displaying the second picture part in the second picture display area, in response to receiving the second picture part, the avatar of the second target object, and a mapping position of the avatar of the second target object in the first picture display area transmitted by a server; the second target object is the target object in the second target picture.

In another aspect, a picture display apparatus is provided, the apparatus being used in a first terminal, the apparatus comprising:

an interface display module, configured to display a target interface, wherein the target interface comprises a first picture display area and a second picture display area;

a first picture display module, configured to display a first picture part in a first target picture in the first picture display area; the first target picture is a shooting picture at the first terminal side, and the mapping position of a second picture part in the first target picture in the target interface is in the second picture display area;

a second picture display module, configured to display a partial picture in a second target picture in the second picture display area; the second target picture is a shooting picture at a second terminal side;

a first avatar overlapping module, configured to, in response to a first target object existing in the second picture part, display at least one of the first target object in the second picture part and an avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area; the first target object is a target object in the first target picture.

In one possible implementation, the first avatar overlapping module includes:

an area division sub-module, configured to perform image recognition on the second picture part and divide the second picture part into a background part and a target contour part;

an object recognition sub-module, configured to determine a recognition object corresponding to the target contour part based on the target contour part;

and an avatar overlaying sub-module, configured to, in response to determining that the recognition object corresponding to the target contour part is the first target object, display at least one of the first target object in the second picture part and the avatar corresponding to the first target object in an overlapping manner in the second picture display area.

In one possible implementation manner, the object recognition sub-module includes:

an edge enhancement unit, configured to perform edge enhancement processing on the target contour part to obtain the processed target contour part;

an object determination unit for determining the recognition object based on the processed target contour portion.

In one possible implementation, the avatar overlaying sub-module includes:

a first avatar acquisition unit, configured to acquire the avatar corresponding to the first target object based on a pre-stored correspondence or a specified algorithm in response to the recognition object being the first target object;

and a first avatar overlaying unit, configured to display the avatar corresponding to the first target object in the second picture display area in an overlapping manner.

In one possible implementation, the first avatar overlapping module includes:

a target area determination sub-module, configured to determine, in response to the first target object existing in the second picture part and the target interface including at least two second picture display areas, a target picture display area based on the mapping position of the second picture part in the target interface; the target picture display area is at least one of the at least two second picture display areas;

and an overlaying sub-module, configured to display, in an overlapping manner, at least one of the first target object in the second picture part and the avatar corresponding to the first target object in the target picture display area.

In one possible implementation, the apparatus further includes:

an animation display module, configured to display a special effect animation on the target interface in response to the first target object containing a target part and a partial picture in the second target picture comprising a target area; the target area is an area corresponding to a designated part obtained by image recognition.

In one possible implementation, the animation display module includes:

a recognition sub-module, configured to perform image recognition on the partial picture in the second target picture displayed in the second picture display area in response to the first target object containing the target part;

a target area obtaining sub-module, configured to obtain the target area on a partial picture in the second target picture;

and an animation display sub-module, configured to display the special effect animation in response to the target part contained in the first target object and the target area in the second target picture satisfying the specified positional relationship.

In one possible implementation, the apparatus further includes:

a data transmission module, configured to transmit the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area to a server before at least one of the first target object in the second picture part and the avatar corresponding to the first target object is displayed in an overlapping manner in the target picture display area, so that the second terminal acquires, from the server, the first picture part in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area, and displays the first picture part, the second picture part, and the avatar of the first target object.

In one possible implementation, the apparatus further includes:

a data receiving module, configured to display the first picture part and an avatar of a second target object in the first picture display area and display the second picture part in the second picture display area in response to receiving the second picture part, the avatar of the second target object, and the mapping position of the avatar of the second target object in the first picture display area transmitted by a server; the second target object is the target object in the second target picture.

In another aspect, a computer device is provided, which includes a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the above picture display method.

In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the above picture display method.

In another aspect, a computer program product or computer program is provided; the computer program product or computer program comprises computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the picture display method provided in the above optional implementations.

The technical solution provided by the present application can include the following beneficial effects:

The method obtains the second picture part in the first target picture and, if a first target object exists in that part, displays at least one of the first target object and its corresponding avatar in an overlapping manner on the partial picture of the second target picture shown in the second picture display area. Because the second picture part belongs to the first target picture while its mapping position falls on the second picture display area, displaying the first target object of the second picture part, directly or as an avatar, at that mapping position expands the modes of interaction between the first target picture and the second target picture; when applied to the live-broadcast field, this improves the interaction effect between anchors during live streaming.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.

FIG. 1 is a flowchart of a picture display method according to an exemplary embodiment of the present application;

FIG. 2 is a flowchart of a picture display method according to an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of a first target picture and a second target picture according to the embodiment shown in FIG. 2;

FIG. 4 is a schematic diagram of the division of a second picture part according to the embodiment shown in FIG. 2;

FIG. 5 is a schematic diagram of picture interaction with an overlaid avatar according to the embodiment shown in FIG. 2;

FIG. 6 is a structural block diagram of a picture display apparatus according to an exemplary embodiment of the present application;

FIG. 7 is a structural block diagram of a computer device according to an exemplary embodiment;

FIG. 8 is a structural block diagram of a computer device according to an exemplary embodiment of the present application.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.

It should be understood that reference to "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.

The picture display method provided by the application can be applied to scenes in which at least two users perform video picture interaction.

For example, the picture display method can be applied to a scene in which at least two anchors of a live-broadcast platform conduct video co-streaming, and can also be applied to a scene in which at least two users conduct a video call. When applied to video co-streaming between at least two anchors of a live-broadcast platform, the method can be implemented by a corresponding live broadcast system, which comprises: a first terminal, a server, at least one second terminal, and at least one third terminal. The first terminal may be the initiator of the co-streaming function, and the second terminal may be a receiver of the co-streaming function.

An application program with a live-video co-streaming function is installed and runs on the first terminal; the client corresponding to the application program is a first client, and the first terminal is a terminal used by a first user, who may be a first anchor currently streaming live. The second terminal also has the same application program installed and running; its client is a second client, and the second terminal is used by a second user, who may be a second anchor also streaming live. The third terminal likewise has the same application program installed and running; its client is a third client, and the third terminal is used by a third user, who may be a viewer watching the live broadcast of the first anchor or of the second anchor.

Optionally, the applications installed on the first terminal, the second terminal, and the third terminal are the same, or are the same type of application on different operating system platforms (Android or iOS). The first terminal may generally refer to one of a plurality of terminals and the second terminal to another of the plurality of terminals; this embodiment is merely exemplified by the first terminal, the second terminal, and the third terminal. The device types of the first, second, and third terminals are the same or different, and include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, and a desktop computer.

The server can be a live broadcast server, and the first terminal, the second terminal, and the third terminal can be connected to it through a wired or wireless network.

The live server may be in data communication with a content distribution network for streaming media. The live broadcast server may be a server, or the live broadcast server may be a server cluster composed of a plurality of servers, or the live broadcast server may include one or more virtualization platforms, or the live broadcast server may also be a cloud computing service center.

Fig. 1 shows a flowchart of a picture display method according to an exemplary embodiment of the present application. The picture display method may be executed by a computer device, where the computer device may be the first terminal in the above live broadcast system. As shown in fig. 1, the picture display method may include the following steps:

Step 101, displaying a target interface, wherein the target interface comprises a first picture display area and a second picture display area.

In the embodiment of the application, a first terminal displays a target interface, and a first picture display area and a second picture display area are displayed in the target interface.

In a possible implementation manner, the target interface includes at least one second screen display area, and the first screen display area and the at least one second screen display area are displayed according to a fixed arrangement position.

Illustratively, when the method is applied to the live-broadcast field, a live broadcast application program can be installed on the first terminal, and the first terminal can push streams through the live broadcast application program. The first picture display area is used for displaying a part of the live picture on the first terminal side, and the second picture display area is used for displaying a part of the live picture on the second terminal side.

The target interface displayed on the first terminal can be a live interface. The arrangement positions of the first picture display area and the at least one second picture display area in this live interface are the same as the arrangement positions of the picture display areas in the live interface displayed on the at least one second terminal and on the at least one third terminal.

In one possible implementation, the target interface is presented in response to receiving a target trigger operation.

In the live-broadcast field, the target trigger operation may be a trigger operation performed on a target control that initiates a video co-streaming request.

That is to say, in response to the first terminal receiving the target trigger operation, request information for requesting co-streaming is sent to the server, and the server sends the request information to the corresponding second terminal. In response to the server receiving the request response information sent by the second terminal, the first terminal may display the target interface, which includes a first picture display area corresponding to the first terminal and a second picture display area corresponding to the second terminal.

For example, anchor A on the first terminal side initiates a video co-streaming request to anchor B and anchor C, each on a second terminal side. If both anchor B and anchor C accept the co-streaming request, the first terminal displays the picture display areas in the live interface according to a fixed arrangement. The fixed arrangement may be that, in left-to-right order, the first picture display area corresponding to anchor A, the initiator of the co-streaming function, is placed first, and the second picture display areas follow, with their order determined by the order in which the co-streaming request was accepted. Because anchor B accepted the video co-streaming request before anchor C, the second picture display area corresponding to anchor B is placed next to the first picture display area, and the second picture display area corresponding to anchor C is placed third, next to anchor B's area.
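As a minimal sketch of this arrangement rule (the sorting key and the timestamps are illustrative assumptions, not values taken from the application), the display areas can be ordered as follows:

```python
# Order the picture display areas: the initiator of the co-streaming function
# comes first, and acceptors follow in the order in which their acceptance of
# the co-streaming request was received.
def arrange_areas(initiator, acceptances):
    """acceptances: list of (anchor_id, accept_time) pairs."""
    ordered = sorted(acceptances, key=lambda pair: pair[1])
    return [initiator] + [anchor for anchor, _ in ordered]

# Anchor B accepted before anchor C, so B's area is placed second.
print(arrange_areas("A", [("C", 12.0), ("B", 7.5)]))  # ['A', 'B', 'C']
```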

Step 102, displaying a first picture part in a first target picture in the first picture display area; the first target picture is a shooting picture at the first terminal side, and the mapping position of the second picture part in the first target picture in the target interface is in the second picture display area.

In the embodiment of the present application, the first terminal is provided with a camera module, through which a first target picture on the first terminal side can be obtained, and the first terminal displays a first picture part of the first target picture in the first picture display area of the target interface.

In one possible implementation, the second picture part is determined by the position of the second picture display area relative to the first picture display area.

Illustratively, in response to the second picture display area being located immediately to the right of the first picture display area, the second picture part lies on the right side of the first target picture; in response to the second picture display area being located immediately to the left of the first picture display area, the second picture part lies on the left side of the first target picture; in response to the second picture display area being located immediately above the first picture display area, the second picture part lies above the first target picture; and in response to the second picture display area being located immediately below the first picture display area, the second picture part lies below the first target picture.

The part of the first target picture obtained by removing the second picture part may serve as the first picture part, and the first picture part is the part of the first target picture displayed in the first picture display area.
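The split can be sketched as follows; the strip width `strip_ratio` and the assumption of a single adjacent display area per side are illustrative choices, not values fixed by the application:

```python
import numpy as np

def split_frame(frame: np.ndarray, neighbor_side: str, strip_ratio: float = 0.2):
    """Split a captured frame (H, W, C) into (first_part, second_part):
    the second part is the edge strip facing the neighboring display area."""
    h, w = frame.shape[:2]
    if neighbor_side == "right":
        cut = int(w * (1 - strip_ratio))
        return frame[:, :cut], frame[:, cut:]
    if neighbor_side == "left":
        cut = int(w * strip_ratio)
        return frame[:, cut:], frame[:, :cut]
    if neighbor_side == "top":
        cut = int(h * strip_ratio)
        return frame[cut:, :], frame[:cut, :]
    if neighbor_side == "bottom":
        cut = int(h * (1 - strip_ratio))
        return frame[:cut, :], frame[cut:, :]
    return frame, None  # no adjacent display area: show the whole frame
```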

Step 103, displaying a partial picture in the second target picture in the second picture display area; the second target picture is a captured picture on the second terminal side.

In the embodiment of the present application, the first terminal displays a partial picture of the second target picture in at least one second picture display area of the target interface.

Wherein the partial picture of the second target picture displayed in the second picture display area may be determined by the position of the second picture display area with respect to the other picture display areas.

For example, in response to the first picture display area or another second picture display area being adjacent to the second picture display area in a certain direction, it is determined that a second picture part exists on that side of the second target picture, and the part of the second target picture excluding this second picture part may be determined as the partial picture of the second target picture displayed in the second picture display area.

Step 104, in response to the first target object existing in the second picture part, displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in the second picture part in an overlapping manner in the second picture display area, wherein the first target object is the target object in the first target picture.

In the embodiment of the present application, when the first terminal detects that the first target object exists in the second picture part of the first target picture, the avatar corresponding to the first target object is determined, and the avatar is displayed in an overlapping manner at the mapping position of the first target object in the second picture display area.

In one possible implementation, the synchronous display, on each terminal, of the pictures in the first picture display area and the pictures in the second picture display area is achieved through stream mixing on the client side or through stream mixing on the server side.

For example, when applied to a scene in which at least two users conduct a network video call, the first terminal may be the initiator terminal of the call, and the displayed target interface may be a call interface comprising a first picture display area and a second picture display area. A part of the user call picture captured by the first terminal is displayed in the first picture display area, and the second picture display area is used for displaying a part of the user call picture captured by the terminal of at least one user who accepted the call request. When a first target object exists in the second picture part of the first target picture, at least one of the first target object and its corresponding avatar is displayed in the second picture display area in an overlapping manner, thereby achieving interaction between the pictures of all users participating in the network video call.

To sum up, the picture display method provided in the embodiments of the present application obtains the second picture part in the first target picture and, if a first target object exists in that part, displays at least one of the first target object and its corresponding avatar in an overlapping manner on the partial picture of the second target picture shown in the second picture display area. Because the second picture part belongs to the first target picture while its mapping position falls on the second picture display area, displaying the first target object of the second picture part, directly or as an avatar, at that mapping position expands the modes of interaction between the first target picture and the second target picture; when applied to the live-broadcast field, this improves the interaction effect between anchors during live streaming.

It should be noted that the picture display method provided by the present application may be applied to a scene in which a live application implements a video co-streaming function with other live terminals, and may also be applied to other application scenes involving the simultaneous display of live pictures from a plurality of terminals; the application scenes are not limited by the present application.

Fig. 2 shows a flowchart of a picture display method according to an exemplary embodiment of the present application. The picture display method may be executed by a computer device, where the computer device may be implemented as a terminal; the method is described below with the first terminal as the executing device. As shown in fig. 2, the method may include the following steps:

step 201, displaying a target interface.

In this embodiment, the first terminal displays a target interface, which at least includes a first picture display area for displaying a first picture part of the first target picture on the first terminal side, and a second picture display area for displaying a first picture part of the second target picture on the second terminal side.

In one possible implementation, when the first terminal receives a target trigger operation, the target interface is displayed.

Step 202, displaying a first picture part in the first target picture in the first picture display area.

In this embodiment of the present application, a camera module exists on the first terminal, through which the first target picture on the first terminal side is acquired. The first target picture may include a first picture part and a second picture part, and the first terminal displays the first picture part of the first target picture in the first picture display area of the displayed target interface.

The first target picture can be a shooting picture at the first terminal side, and the mapping position of the second picture part in the first target picture in the target interface is in the second picture display area.

In one possible implementation, a display area and an interaction area exist in the first target picture; the display area and the interaction area are preset areas with specified positions and sizes, the picture part located in the display area serves as the first picture part, and the picture part located in the interaction area serves as the second picture part.

The part of the first target picture located in the display area can be displayed in the first picture display area on the target interface; the part of the first target picture located in the interaction area may be the second picture part, which is not directly displayed on the target interface.

In a possible implementation manner, the display area is located in a middle area of the first target picture, and the interaction area is located in an edge area of the first target picture.

For example, in response to the first target picture being a rectangular picture, the display area may be a rectangular area at a specified distance from each of the four borders (top, bottom, left, and right). The area formed by connecting the top-left vertex of the display area, the top-right vertex of the display area, the top-left vertex of the first target picture, and the top-right vertex of the first target picture serves as the interaction area above; the area formed by connecting the top-left vertex of the display area, the bottom-left vertex of the display area, the top-left vertex of the first target picture, and the bottom-left vertex of the first target picture serves as the interaction area on the left; the area formed by connecting the bottom-left vertex of the display area, the bottom-right vertex of the display area, the bottom-left vertex of the first target picture, and the bottom-right vertex of the first target picture serves as the interaction area below; and the area formed by connecting the top-right vertex of the display area, the bottom-right vertex of the display area, the top-right vertex of the first target picture, and the bottom-right vertex of the first target picture serves as the interaction area on the right.

In response to the first target picture being a circular picture, the display area may be a circular area concentric with the first target picture and with a smaller radius; the part of the first target picture excluding the display area can be divided equally by angle into interaction areas corresponding to the four directions: up, down, left, and right.
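For the rectangular case, the layout can be sketched as follows; `margin` stands in for the "specified distance", and the corner regions are folded into the edge strips for brevity, which is a simplification of the vertex-connection description above:

```python
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in picture coordinates

def layout_areas(w: int, h: int, margin: int) -> Dict[str, Rect]:
    """Inset display area plus the four edge interaction areas."""
    return {
        "display": (margin, margin, w - margin, h - margin),
        "up":      (0, 0, w, margin),
        "down":    (0, h - margin, w, h),
        "left":    (0, margin, margin, h - margin),
        "right":   (w - margin, margin, w, h - margin),
    }
```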

For example, when the target interface includes one first picture display area and one second picture display area, with the first picture display area on the left of the second picture display area, fig. 3 is a schematic diagram of a first target picture and a second target picture according to an embodiment of the present application. As shown in fig. 3, when the method is applied to the live-broadcast field, the first target picture 31 is a live picture acquired by the first terminal through its camera module: the first picture part in the display area 311 of the first target picture 31 may be directly displayed in the first picture display area of the target interface, while the second picture part in the interaction area 312 of the first target picture 31 is not directly displayed in the first picture display area; its corresponding avatar may instead be displayed in the second picture display area in an overlapping manner according to the processing performed in the subsequent steps. Likewise, the second target picture 32 is a live picture collected by the second terminal through its camera module: the first picture part in the display area 321 of the second target picture 32 can be directly displayed in the second picture display area of the target interface, while the second picture part in the interaction area 322 of the second target picture 32 is not directly displayed there; its corresponding avatar may be displayed in the first picture display area in an overlapping manner according to the processing of the subsequent steps.

Step 203, displaying a partial picture in the second target picture in the second picture display area.

In the embodiment of the present application, the first terminal displays the first picture part of the second target picture in the second picture display area based on the received second target picture pushed by the second terminal.

In one possible implementation, in response to the second picture part in the first target picture not having the first target object, the first picture part in the first target picture and the first picture part in the second target picture are displayed directly on the target interface.

Step 204, performing image recognition on the second picture portion, and dividing the second picture portion into a background portion and a target contour portion.

In the embodiment of the present application, the first terminal performs image recognition on the second picture part of the acquired first target picture and acquires an image recognition result, which may include a recognition object existing in the second picture part and position information of the recognition object within the second picture part; the second picture part is divided into a background part and a target contour part based on this image recognition result.

The first target object can be a preset recognition object of a specified type, or a specified part of the recognition object; the recognition object can be a human body or a specified article obtained by image recognition. In the case where the recognition object is a human body, the designated part may be a designated body part of the human body, such as a hand or a face.

The target contour part may be the contour part corresponding to the detected recognition object, and the background part may be the part of the second picture part excluding the recognized object.

In one possible implementation, in response to the image recognition result received by the first terminal indicating a recognition object in the second picture part, the contour of the recognized object is taken as the target contour part and extracted from the second picture part; the part of the second picture part excluding the target contour part is then taken as the background part, and the background part is removed or set to transparent.

Exemplarily, fig. 4 is a schematic diagram of the division of a second picture part according to an embodiment of the present application. When applied in a live scene, as shown in fig. 4, the target contour part 42 and the background part 43 can be obtained from the second picture part 41 of the first target picture by a human-body edge detection technique.
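A minimal sketch of this division is given below. The application does not name a segmentation backend, so `segment_person` is a toy GrabCut-based stand-in for whatever human-matting or edge-detection model a real system would use:

```python
import numpy as np
import cv2

def segment_person(img: np.ndarray) -> np.ndarray:
    # Stand-in segmenter (assumption): GrabCut seeded with a centered box.
    # Any human-matting model can be substituted here.
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    rect = (w // 8, h // 8, w * 3 // 4, h * 3 // 4)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 3, cv2.GC_INIT_WITH_RECT)
    return np.where((mask == 1) | (mask == 3), 1, 0).astype(np.uint8)

def split_background_and_contour(second_part: np.ndarray):
    """Divide the second picture part into a target contour part (the
    recognized object on a transparent background) and a background part."""
    mask = segment_person(second_part)
    contour_part = cv2.cvtColor(second_part, cv2.COLOR_BGR2BGRA)
    contour_part[mask == 0, 3] = 0          # background set to transparent
    background_part = second_part.copy()
    background_part[mask == 1] = 0          # recognized object blanked out
    return contour_part, background_part
```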

Step 205, based on the target contour portion, determining the recognition object corresponding to the target contour portion.

In the embodiment of the application, the first terminal performs image recognition on the target contour portion, and may determine a recognition object corresponding to the target contour portion or a designated portion on the corresponding recognition object.

In one possible implementation, the first terminal performs edge enhancement processing on the target contour part to obtain the processed target contour part, and determines the recognition object based on the processed target contour part.

For example, during image recognition processing, the contour edge of the target contour part determined by human-body edge detection may not be clear enough; therefore, edge enhancement processing is performed on the target contour part to obtain the processed target contour part.
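A sketch of such an enhancement step follows; the operators and kernel sizes are assumptions, since the application does not prescribe a particular filter:

```python
import numpy as np
import cv2

def enhance_contour(mask: np.ndarray) -> np.ndarray:
    """mask: HxW uint8 in {0, 1}; returns a binary mask with a cleaner edge."""
    m = (mask * 255).astype(np.uint8)
    m = cv2.medianBlur(m, 5)                              # remove speckles
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)      # seal gaps in the edge
    return (m > 127).astype(np.uint8)
```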

Step 206, in response to the recognition object being the first target object, displaying at least one of the first target object in the second picture part and the avatar corresponding to the first target object in the second picture display area in an overlapping manner.

In the embodiment of the present application, when it is determined that the recognition object corresponding to the target contour part is the first target object, the first target object is displayed in an overlapping manner in the second picture display area; or an avatar corresponding to the target contour part is determined and then displayed in an overlapping manner in the second picture display area; or both the first target object in the second picture part and its corresponding avatar are displayed in an overlapping manner in the second picture display area.

After the second picture part is divided into the background part and the target contour part through the above steps, the target contour part is recognized and the corresponding recognition object is determined. If the recognition object is the first target object, the image corresponding to the first target object in the second picture part may be directly displayed in the second picture display area in an overlapping manner; or, in response to the recognition object being the first target object, the first terminal may obtain an avatar corresponding to the first target object based on a pre-stored correspondence or a specified algorithm, and then display the avatar in the second picture display area in an overlapping manner.

In one possible implementation, in response to the recognition object being the first target object, the avatar corresponding to the first target object is acquired based on a pre-stored correspondence, and the avatar corresponding to the first target object is displayed in the second picture display area in an overlapping manner.

In another possible implementation, in response to the recognition object being the first target object, the avatar corresponding to the first target object is generated based on a specified algorithm, and the avatar corresponding to the first target object is displayed in the second picture display area in an overlapping manner.

Wherein the pre-stored correspondence may include a correspondence between the target object and the avatar. The pre-stored correspondence can be stored on the terminal, or stored in a database of the server for the terminal to query and invoke. The specified algorithm for generating the avatar corresponding to the first target object may be a caricature generation algorithm.

For example, when the target object is a hand of a human body, the corresponding avatar obtained based on the pre-stored correspondence may be a cat-paw avatar, or a cartoon image corresponding to the hand can be generated based on the specified algorithm; when the target object is a face of a human body, the corresponding avatar obtained based on the pre-stored correspondence may be a pre-stored avatar, or a caricature image corresponding to the face can be generated based on the specified algorithm. The avatar generated by the specified algorithm may have the relevant characteristics of the first target object.
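The two branches can be sketched as follows; the table entries and the `generate_caricature` hook are illustrative assumptions rather than assets or algorithms named in the application:

```python
# Pre-stored correspondence between target objects and avatar assets.
AVATAR_TABLE = {
    "hand": "assets/cat_paw.png",
    "face": "assets/preset_avatar.png",
}

def generate_caricature(contour_img):
    # Placeholder for the "specified algorithm" branch; a real system would
    # call a caricature-generation model that keeps the characteristics of
    # the first target object.
    raise NotImplementedError("plug in a caricature-generation model")

def get_avatar(target_object: str, contour_img=None):
    """Consult the pre-stored correspondence first; otherwise generate."""
    asset = AVATAR_TABLE.get(target_object)
    if asset is not None:
        return asset
    return generate_caricature(contour_img)
```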

In one possible implementation, in response to the recognition object being the target object, the avatar corresponding to the target object is acquired based on a pre-stored avatar material library; the avatar corresponding to the first target object is displayed in the second picture display area in an overlapping manner, and the avatar corresponding to the second target object is displayed in the first picture display area in an overlapping manner.

After the edge enhancement processing is performed on the target contour part, the contour similarity between the target contour part and each avatar stored in the avatar material library can be determined, and the avatar with the highest contour similarity can then be determined as the avatar corresponding to the target contour part.
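This selection can be sketched with Hu-moment shape matching as an assumed similarity measure; cv2.matchShapes returns a dissimilarity, so the best match is the one that minimizes it:

```python
import cv2

def best_avatar(target_contour, library):
    """library: iterable of (avatar_id, stored_contour) pairs; returns the
    avatar whose stored contour is most similar to the target contour."""
    return min(
        library,
        key=lambda item: cv2.matchShapes(
            target_contour, item[1], cv2.CONTOURS_MATCH_I1, 0.0),
    )[0]
```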

In one possible implementation manner, in response to that the second picture portion has the first target object and the target interface includes at least two second picture display areas, the target picture display area is determined based on the mapping position of the second picture portion in the target interface, and at least one of the first target object in the second picture portion and the avatar corresponding to the first target object is displayed in the target picture display area in an overlapping manner.

For example, a first target object in the second picture part and the avatar corresponding to the first target object may be displayed simultaneously, in an overlapping manner, in the target picture display area. During a live broadcast, when the first target object in the second picture part is the arm of the anchor on the first terminal side, the real arm image can be displayed in the target picture display area in an overlapping manner, while an avatar corresponding to the hand is simultaneously displayed, based on a pre-stored correspondence or a specified algorithm, near the area corresponding to the hand. This combines realism with playfulness and improves the interaction effect during the live broadcast.

Wherein the target picture display area is at least one of the at least two second picture display areas.

In one possible implementation, in response to the interaction region including an upper interaction region, a lower interaction region, a left interaction region, and a right interaction region, whether the second picture portion is located in the upper, lower, left, or right interaction region can be determined from the first position information.

Wherein the first location information may be used to indicate a location of the second picture portion in the first target picture.

Since the arrangement positions of the first picture display area and the at least one second picture display area on the target interface are fixed, the target picture display area to which the second picture portion is mapped can be determined based on the first position information.
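
A minimal sketch of this classification, assuming the first position information is reduced to the center point of the second picture portion in coordinates normalized to the first target picture (an assumption made only for illustration):

    def interaction_region(cx, cy, margin=0.2):
        # cx, cy: center of the second picture portion, normalized to [0, 1]
        # margin: width of the interaction band along each edge (assumed value)
        if cy < margin:
            return "upper"
        if cy > 1.0 - margin:
            return "lower"
        if cx < margin:
            return "left"
        if cx > 1.0 - margin:
            return "right"
        return None   # not inside any interaction region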

Illustratively, the first picture display area corresponding to the first terminal A is the first from the left in the first row of the target interface; the second picture display area corresponding to the second terminal B is the second from the left in the first row, and its left border adjoins the right border of the first picture display area corresponding to the first terminal A; the second picture display area corresponding to the second terminal C is the first from the left in the second row, and its upper border adjoins the lower border of the first picture display area corresponding to the first terminal A. In this case, when the first position information indicates that the second picture portion is in the lower interaction region of the first target picture, the target picture display area to which the second picture portion is mapped is determined to be the second picture display area corresponding to the second terminal C; when the first position information indicates that the second picture portion is in the right interaction region of the first target picture, the target picture display area to which the second picture portion is mapped is determined to be the second picture display area corresponding to the second terminal B.
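
Under the fixed layout just described, the mapping from interaction region to target picture display area reduces to a neighbour lookup on the grid; the LAYOUT table below mirrors the example and is otherwise hypothetical:

    # Layout from the example: terminal A at row 0, col 0; B at row 0, col 1; C at row 1, col 0.
    LAYOUT = {(0, 0): "A", (0, 1): "B", (1, 0): "C"}
    OFFSET = {"upper": (-1, 0), "lower": (1, 0), "left": (0, -1), "right": (0, 1)}

    def target_display_area(region, row=0, col=0):
        # row, col: grid cell of the first picture display area (terminal A by default)
        dr, dc = OFFSET[region]
        return LAYOUT.get((row + dr, col + dc))   # None if no display area adjoins that side

    # target_display_area("lower") -> "C"; target_display_area("right") -> "B"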

In one possible implementation manner, in response to the first target object containing the target part and the partial picture in the second target picture including the target area, a special effect animation is displayed on the target interface.

Specifically, in response to the first target object containing the target part, image recognition is performed on the partial picture of the second target picture displayed in the second picture display area, and the target area on that partial picture is acquired, the target area being the area corresponding to a specified part obtained by image recognition; the special effect animation is then displayed in response to the target part contained in the first target object and the target area in the second target picture satisfying the specified positional relationship.

The specified positional relationship may be that the target part is located on the target area, or that the distance between the target part and the target area is within a specified threshold interval.
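
Both variants of the specified positional relationship can be checked with elementary geometry; the sketch below treats the target part as a point and the target area as an axis-aligned rectangle, and the threshold interval of (0, 30) pixels is an assumed value:

    def satisfies_position_relation(part_xy, area_rect, threshold=(0.0, 30.0)):
        # part_xy: (x, y) of the target part; area_rect: (x, y, w, h) of the target area
        px, py = part_xy
        ax, ay, aw, ah = area_rect
        if ax <= px <= ax + aw and ay <= py <= ay + ah:
            return True   # the target part is located on the target area
        # otherwise, distance from the part to the nearest point of the area
        dx = max(ax - px, 0.0, px - (ax + aw))
        dy = max(ay - py, 0.0, py - (ay + ah))
        dist = (dx * dx + dy * dy) ** 0.5
        return threshold[0] <= dist <= threshold[1]   # within the threshold interval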

In a possible implementation manner, in response to the first target object containing the target part, image recognition is performed on the partial picture of the second target picture displayed in the second picture display area, and the target area on that partial picture is acquired; the special effect animation is displayed at the position where the target part is rendered on the second target picture, in response to the target part contained in the first target object being superimposed on the target area in the second target picture.

The target area is the area corresponding to a specified part obtained by image recognition; the specified part may be preset, for example a face, an arm, or a circular or rectangular region identified in the picture.

For example, the target area may be the area where a face is located, determined after face recognition is performed on the picture in the second picture display area. When the hand of anchor A is taken as the first target object, if the mapping position of the hand in the second picture display area falls within the face area of the picture displayed there, the corresponding special effect animation can be added at the contact position of the hand and the face area when the avatar corresponding to the hand is superimposed in the second picture display area. In this way, the interactivity of co-streaming (mic-linked) video between anchors can be enhanced.

Illustratively, fig. 5 is a schematic diagram of screen interaction with a superimposed avatar according to an embodiment of the present application. As shown in fig. 5, when the method is applied to live broadcasting, the live interface 51 is the interface watched by viewers in anchor A's live room; it contains a picture display area corresponding to anchor A and a picture display area corresponding to anchor B. When anchor A places a hand in the interaction region of the live picture corresponding to anchor A, edge recognition and edge enhancement processing are performed on the hand in the interaction region, an avatar corresponding to the hand contour is determined based on the processed hand contour, and the avatar is displayed at the mapping position of the hand contour in anchor B's picture display area; at this time, the viewer-side terminal displays the live interface 52.

In one possible implementation, before the first terminal displays the avatar of the first target object in the second picture portion in the target picture display area in an overlapping manner, the first terminal transmits the first picture portion in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area to the server, so that the second terminal can acquire these from the server and display the first picture portion, the second picture portion, and the avatar of the first target object.
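
A minimal sketch of this upload; the payload fields and the send_to_server transport are hypothetical names chosen only to make the data flow concrete:

    def push_interaction_frame(send_to_server, first_picture_portion, avatar_image, map_x, map_y):
        # Bundle everything the second terminal needs to reproduce the overlay.
        payload = {
            "picture_portion": first_picture_portion,       # encoded region of the first target picture
            "avatar": avatar_image,                         # encoded avatar image
            "mapping_position": {"x": map_x, "y": map_y},   # position in the second picture display area
        }
        send_to_server(payload)   # hypothetical transport to the server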

In one possible implementation, in response to receiving the second picture portion, the avatar of the second target object, and the mapping position of the avatar of the second target object in the first picture display area sent by the server, the first picture portion and the avatar of the second target object are displayed in the first picture display area, and the second picture portion is displayed in the second picture display area.

That is, when the mapping position of the second picture portion in the first target picture in the target interface is within the second picture display area, and the mapping position of the second picture portion in the second target picture in the target interface is within the first picture display area, if a target object exists in both second picture portions, at least one of the corresponding target object and the avatar corresponding to it is displayed simultaneously in the first picture display area and the second picture display area.

In one possible implementation manner, in response to the second target object existing in the first picture portion, at least one of the second target object in the first picture portion and the avatar corresponding to the second target object is displayed in the first picture display area in an overlapping manner.

Wherein the second target object is a target object in the second target picture.

That is, the part of another terminal's picture display area that the first terminal's picture enters is processed by the first terminal and then synchronized to that terminal; correspondingly, the part of the first terminal's picture display area that another terminal's picture enters is processed by that terminal and then synchronized to the first terminal.

Illustratively, when the co-streamed anchor video includes anchor A corresponding to the first terminal and anchor B corresponding to the second terminal: if a first target object whose mapping position is in the second picture display area exists in the picture captured on the first terminal side for anchor A, the first terminal processes the first target object to obtain the corresponding avatar, and sends the avatar, the mapping position of the avatar in the second picture display area, and the first picture portion in the first live picture to the server; if a second target object whose mapping position is in the first picture display area exists in the picture captured on the second terminal side for anchor B, the second terminal processes the second target object to obtain the corresponding avatar, and sends the avatar, the mapping position of the avatar in the first picture display area, and the second picture portion in the second live picture to the server. In one case, the server generates a composite picture including the first picture portion, the second picture portion, the avatar corresponding to the first target object, and the avatar corresponding to the second target object, and pushes the composite picture to the first terminal, the second terminal, and the viewer terminals, which display it synchronously. In another case, the first terminal may acquire the second picture portion, the avatar corresponding to the second target object, and the corresponding mapping position from the server, and generate the composite picture, including the first picture portion, the second picture portion, and the superimposed avatars, directly on its own side; the second terminal can likewise acquire the avatar of the first target object, its mapping position in the second picture display area, and the first picture portion from the server, and generate the composite picture on its side; and each viewer terminal can acquire the avatars corresponding to the first and second target objects, their respective mapping positions in the first and second picture display areas, and the first and second picture portions from the server, and generate the composite picture to be displayed on its own side.
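
Whichever side performs the synthesis, the core compositing step is the same; below is a minimal NumPy sketch that assumes pictures and avatars arrive as RGBA arrays and that the avatar fits entirely inside the frame:

    import numpy as np

    def overlay_avatar(frame_rgba, avatar_rgba, x, y):
        # Alpha-blend the avatar onto the frame at mapping position (x, y), top-left anchored.
        h, w = avatar_rgba.shape[:2]
        region = frame_rgba[y:y + h, x:x + w].astype(np.float32)
        avatar = avatar_rgba.astype(np.float32)
        alpha = avatar[..., 3:4] / 255.0                # per-pixel opacity of the avatar
        region[..., :3] = alpha * avatar[..., :3] + (1.0 - alpha) * region[..., :3]
        frame_rgba[y:y + h, x:x + w] = region.astype(np.uint8)
        return frame_rgba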

Illustratively, when the co-stream includes anchor A and anchor B, the video frame that anchor A's terminal pushes to the server comprises a first picture portion displayed directly in A's picture display area and a second picture portion located in the interaction region. The terminal or the server can perform human body edge detection and edge enhancement processing on anchor A's second picture portion, remove the detected background outside the human body or render it transparent, and obtain the target contour portion after background removal and edge enhancement. The human body in the target contour portion is then converted through anime-style or virtual-human rendering, so that the target contour portion corresponding to anchor A becomes an initial avatar, for example an anime-style one. The video frames pushed by anchor A's co-streaming terminal can thus include the first picture portion and the corresponding avatar, and each pushed frame also carries the position information of the hand and the face within the target contour portion. On anchor B's side, when B's terminal receives the first picture portion, the corresponding avatar, and the hand and face position information pushed by anchor A's terminal, it renders the avatar directly over B's own first picture portion, achieving the effect of fusing the avatar with B's picture, and generating the fused picture displayed in B's picture display area; the fused picture is then combined with A's first picture to form the picture displayed in the live interface after co-stream mixing.
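
A rough sketch of the background-removal and edge-enhancement step with OpenCV; person_mask stands in for whatever human body segmentation the terminal or server actually runs and is therefore an assumed input:

    import cv2

    def extract_target_contour(frame_bgr, person_mask):
        # person_mask: uint8 mask (255 = human body), produced by an assumed segmentation step
        rgba = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
        rgba[..., 3] = person_mask                      # background pixels become transparent
        edges = cv2.Canny(person_mask, 100, 200)        # silhouette of the body
        rgba[edges > 0, :3] = 255                       # crude edge enhancement: brighten the outline
        contours, _ = cv2.findContours(person_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return rgba, contours                           # processed image plus the target contour(s)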

The position information of the face and the hand in the target contour portion is obtained from the video frames pushed by anchor A's terminal. Based on this position information, it is detected whether these positions touch anchor B's face area in the fused picture displayed in B's picture display area; if so, anchor B's face special effect is triggered. In this way, anchor A can trigger the display of a special effect by touching anchor B's face across the boundary, and likewise anchor B can trigger a special effect by touching anchor A's face across the boundary through the same steps.
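
Tying the pieces together, a per-frame trigger check on anchor B's side might look like the following; face_area_of and trigger_face_effect are hypothetical helpers, and satisfies_position_relation is the sketch given earlier:

    def check_cross_boundary_touch(frame_meta, fused_picture):
        # frame_meta: hand/face positions carried with each pushed video frame,
        # e.g. {"hand": (x1, y1), "face": (x2, y2)}
        face_rect = face_area_of(fused_picture)   # hypothetical face detection on B's picture
        for part in ("hand", "face"):
            if satisfies_position_relation(frame_meta[part], face_rect):
                trigger_face_effect()              # hypothetical special-effect trigger
                break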

To sum up, the picture display method provided in the embodiments of the present application obtains the second picture portion in the first target picture and, if the first target object exists in the second picture portion, displays at least one of the first target object and the corresponding avatar in an overlapping manner on the partial picture of the second target picture shown in the second picture display area. Because the second picture portion belongs to the first target picture while its mapping position lies in the second picture display area, displaying the first target object directly or indirectly at that mapping position expands the ways in which the first target picture and the second target picture can interact; when the method is applied to live broadcasting, it improves the interaction effect between anchors during a live session.

Fig. 6 is a block diagram illustrating a screen presentation apparatus according to an exemplary embodiment of the present application, and as shown in fig. 6, the screen presentation apparatus includes:

an interface display module 610, configured to display a target interface, where the target interface includes a first screen display area and a second screen display area;

a first screen displaying module 620, configured to display a first screen portion in a first target screen in the first screen displaying area; the first target screen is a shooting screen at the first terminal side, and the mapping position of a second screen portion in the first target screen in the target interface is in the second screen display area;

a second screen displaying module 630, configured to display a partial screen in a second target screen in the second screen displaying area; the second target picture is a shooting picture at a second terminal side;

a first character overlaying module 640, configured to, in response to a first target object existing in the second screen portion, overlay and display at least one of the first target object in the second screen portion and an avatar corresponding to the first target object in the second screen portion in the second screen display area; the first target object is a target object in the first target screen.

In one possible implementation, the first avatar superimposition module 640 includes:

the area division submodule is used for carrying out image recognition on the second picture part and dividing the second picture part into a background part and a target outline part;

the object identification submodule is used for determining an identification object corresponding to the target contour part on the basis of the target contour part;

and an avatar overlaying sub-module, configured to, in response to determining that the identified object corresponding to the target contour portion is the first target object, overlay and display at least one of the first target object in the second picture portion and the avatar corresponding to the first target object in the second picture display area.

In one possible implementation manner, the object recognition sub-module includes:

the edge enhancement unit is used for carrying out edge enhancement processing on the target contour part to obtain the processed target contour part;

an object determination unit for determining the recognition object based on the processed target contour portion.

In one possible implementation, the character superimposition submodule includes:

a first character acquisition unit configured to acquire the avatar corresponding to the first target object based on a pre-stored correspondence or a specified algorithm in response to the recognition object being the first target object;

and the first image overlapping unit is used for overlapping and displaying the virtual image corresponding to the first target object in the second picture display area.

In one possible implementation, the first avatar superimposition module 640 includes:

a target area determination submodule, configured to determine, in response to that the first target object exists in the second picture portion and the target interface includes at least two second picture display areas, a target picture display area based on the mapping position of the second picture portion in the target interface; the target picture presentation area is at least one of the at least two second picture presentation areas;

and the superposition submodule is used for superposing and displaying at least one of the first target object in the second picture part and the virtual image corresponding to the first target object in the target picture display area.

In one possible implementation, the apparatus further includes:

the animation display module is used for responding to the situation that the first target object comprises a target part and part of the second target picture comprises a target area, and displaying a special effect animation on the target interface; the target region is a region corresponding to a designated portion obtained by image recognition.

In one possible implementation, the animation display module includes:

the recognition submodule is used for responding to the first target object containing the target part and carrying out image recognition on partial pictures in the second target picture displayed in the second picture display area;

a target area obtaining sub-module, configured to obtain the target area on a partial picture in the second target picture;

and an animation display submodule, configured to display the special effect animation in response to the target part contained in the first target object and the target area in the second target picture satisfying the specified positional relationship.

In one possible implementation, the apparatus further includes:

a data transmitting module, configured to transmit the first picture portion in the first target picture, the avatar of the first target object, and the mapping position of the avatar of the first target object in the second picture display area to a server before at least one of the first target object in the second picture portion and the avatar corresponding to the first target object is displayed in an overlapping manner in the target picture display area, so that the second terminal acquires these from the server and displays the first picture portion, the second picture portion, and the avatar of the first target object.

In one possible implementation, the apparatus further includes:

a data receiving module, configured to display the first picture portion and the avatar of the second target object in the first picture display area, and display the second picture portion in the second picture display area, in response to receiving the second picture portion, the avatar of the second target object, and the mapping position of the avatar of the second target object in the first picture display area sent by a server; the second target object is the target object in the second target picture.

To sum up, the picture display apparatus provided in the embodiments of the present application obtains the second picture portion in the first target picture and, if the first target object exists in the second picture portion, displays at least one of the first target object and the corresponding avatar in an overlapping manner on the partial picture of the second target picture shown in the second picture display area. Because the second picture portion belongs to the first target picture while its mapping position lies in the second picture display area, displaying the first target object directly or indirectly at that mapping position expands the ways in which the first target picture and the second target picture can interact; when the apparatus is applied to live broadcasting, it improves the interaction effect between anchors during a live session.

FIG. 7 is a block diagram illustrating the structure of a computer device 700 according to an example embodiment. The computer device 700 may be a terminal as shown in fig. 1, such as a smartphone, tablet, or desktop computer. Computer device 700 may also be referred to by other names such as target user device, portable terminal, laptop terminal, desktop terminal, and the like.

Generally, the computer device 700 includes: a processor 701 and a memory 702.

The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in an awake state, and a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.

Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement a method provided by method embodiments herein.

In some embodiments, the computer device 700 may also optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, a positioning component 708, and a power source 709.

In some embodiments, the computer device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.

Those skilled in the art will appreciate that the configuration illustrated in FIG. 7 is not intended to be limiting of the computer device 700 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.

Fig. 8 illustrates a block diagram of a computer device 800 according to an exemplary embodiment of the present application. The computer device may be implemented as a server in the above-mentioned aspects of the present application. The computer apparatus 800 includes a Central Processing Unit (CPU) 801, a system Memory 804 including a Random Access Memory (RAM) 802 and a Read-Only Memory (ROM) 803, and a system bus 805 connecting the system Memory 804 and the CPU 801. The computer device 800 also includes a mass storage device 806 for storing an operating system 809, application programs 810 and other program modules 811.

The mass storage device 806 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 806 and its associated computer-readable media provide non-volatile storage for the computer device 800. That is, the mass storage device 806 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc-Only Memory (CD-ROM) drive.

Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state memory devices, CD-ROM, Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 804 and mass storage device 806 described above may be collectively referred to as memory.

According to various embodiments of the present disclosure, the computer device 800 may also operate by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 800 may be connected to the network 808 through the network interface unit 807 attached to the system bus 805, or the network interface unit 807 may be used to connect to another type of network or remote computer system (not shown).

The memory further includes at least one instruction, at least one program, a code set, or a set of instructions, which is stored in the memory, and the central processing unit implements all or part of the steps in the screen presentation method shown in each of the above embodiments by executing the at least one instruction, the at least one program, the code set, or the set of instructions.

Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.

In an exemplary embodiment, a computer-readable storage medium is further provided, which stores at least one instruction, at least one program, a code set, or a set of instructions, which is loaded and executed by a processor to implement all or part of the steps of the above-mentioned picture display method. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method described in any of the embodiments of fig. 1 or fig. 2.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
