Object state determination method and device, storage medium and electronic device

Document No.: 9824 — Published: 2021-09-17

1. A method for determining a state of an object, comprising:

acquiring a video obtained by shooting a target object by a camera device;

acquiring sound of the target object collected by a sound collection device;

determining a target state of the target object based on the video and the sound.

2. The method of claim 1, wherein determining the target state of the target object based on the video and the sound comprises:

segmenting the video according to a preset time period to obtain a plurality of video segments;

determining a target sound clip corresponding to each of the plurality of video clips, wherein the target sound clip is the sound collected during the time period in which the corresponding video clip was captured;

analyzing the video clips and the target sound clips to determine the state of the target object in each time period so as to obtain a plurality of intermediate states;

determining a target state of the target object based on a plurality of the intermediate states.

3. The method of claim 2, wherein analyzing the video clip and the target sound clip to determine the intermediate state of the target object for each time period comprises:

determining, in a case where the video comprises a plurality of channel videos, a target video segment comprising the segments of the plurality of channel videos shot at the same time;

identifying a first video clip to determine an expression of the target object, wherein the first video clip is a clip included in the target video clip;

identifying a second video segment to determine an action of the target object, wherein the second video segment is a segment included in the target video segment;

analyzing the target sound to determine a target sound feature;

determining an intermediate state of the target object based on the expression, the action, and the target sound feature.

4. The method of claim 3, wherein determining the intermediate state of the target object based on the expression, the action, and the target sound feature comprises:

determining a first state of the target object based on the expression;

determining a second state of the target object based on the action;

determining a third state of the target object based on the target sound feature;

determining an intermediate state of the target object based on the first state, the second state, and the third state.

5. The method of claim 4, wherein determining the intermediate state of the target object based on the first state, the second state, and the third state comprises:

determining each sub-state included in the first state, the second state and the third state respectively;

counting the number of each sub-state;

and sorting the sub-states in descending order of their counts, and determining a preset number of the top-ranked sub-states as the intermediate state of the target object.

6. The method according to claim 1, wherein acquiring the video obtained by the camera device shooting the target object comprises:

acquiring a first video obtained by shooting a first part of the target object by first camera equipment included in the camera equipment;

acquiring a second video obtained by shooting a second part of the target object by second camera equipment included in the camera equipment;

determining the first video and the second video as the video.

7. The method of claim 1, wherein after determining the state of the target object based on the video and the sound, the method further comprises:

determining an evaluation result of the target object based on the state of the target object;

and executing a prompting operation when the evaluation result meets a preset condition.

8. An apparatus for determining a state of an object, comprising:

the first acquisition module is used for acquiring a video obtained by shooting a target object by the camera equipment;

the second acquisition module is used for acquiring sound acquired by acquiring the sound of the target object by the sound acquisition equipment;

a determination module to determine a state of the target object based on the video and the sound.

9. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.

Background

In the process of communicating with a subject, it is usually necessary to determine the subject's state. The following description takes an interview as an example:

With economic development, HR departments place increasingly high demands on candidates, and many candidates exaggerate their actual abilities when applying. To discern a candidate's psychology in handling HR's questions and to find the most suitable candidate, the interview is usually recorded and the recording is analyzed to determine the candidate's state. Analysis of the recording alone, however, cannot determine whether the candidate is listening attentively, whether the candidate is thinking, or whether an answer is a conclusion reached through thinking or a memorized answer recited from preparation.

Therefore, the related art suffers from the problem that the state of an object cannot be determined accurately.

In view of the above problems in the related art, no effective solution has been proposed.

Disclosure of Invention

The embodiments of the invention provide a method and a device for determining the state of an object, a storage medium, and an electronic device, so as to at least solve the problem in the related art that the state of an object is determined inaccurately.

According to an embodiment of the present invention, there is provided a method for determining a state of an object, including: acquiring a video obtained by shooting a target object by a camera device; acquiring sound obtained by acquiring the sound of the target object by sound acquisition equipment; determining a target state of the target object based on the video and the sound.

According to another embodiment of the present invention, there is provided an apparatus for determining a state of an object, including: the first acquisition module is used for acquiring a video obtained by shooting a target object by the camera equipment; the second acquisition module is used for acquiring sound acquired by acquiring the sound of the target object by the sound acquisition equipment; a determination module to determine a target state of the target object based on the video and the sound.

According to yet another embodiment of the invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.

According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.

According to the invention, the video obtained by shooting the target object by the camera shooting equipment is acquired, the sound obtained by collecting the sound of the target object by the sound collecting equipment is acquired, and the state of the target object is determined according to the video and the sound. The state of the object can be determined by integrating sound and video, so that the problem of inaccurate state determination of the object in the related art can be solved, and the accuracy rate of determining the state of the object is improved.

Drawings

Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a method for determining an object state according to an embodiment of the present invention;

FIG. 2 is a flow chart of a method of determining a status of an object according to an embodiment of the invention;

FIG. 3 is a flowchart of determining a target state of the target object based on the video and the sound, according to an exemplary embodiment of the present invention;

FIG. 4 is a flowchart of analyzing a video segment and a target sound segment to determine an intermediate state of the target object for each time period, according to an exemplary embodiment of the present invention;

FIG. 5 is a flowchart of determining an intermediate state of the target object based on the expression, the action, and the target sound feature, according to an exemplary embodiment of the present invention;

FIG. 6 is a flowchart of determining an intermediate state of the target object based on the first state, the second state, and the third state, according to an exemplary embodiment of the present invention;

FIG. 7 is a flowchart of acquiring a video obtained by the camera device shooting the target object, according to an exemplary embodiment of the present invention;

FIG. 8 is a flowchart of a method of determining a status of an object, according to an exemplary embodiment of the present invention;

FIG. 9 is a flowchart of a method for determining the status of an object, in accordance with a specific embodiment of the present invention;

FIG. 10 is a block diagram of the structure of an object state determination apparatus according to an embodiment of the present invention.

Detailed Description

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.

It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.

The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking an example of the method running on a mobile terminal, fig. 1 is a hardware structure block diagram of the mobile terminal of a method for determining an object state according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.

The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the object state determination method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.

In the present embodiment, a method for determining a status of an object is provided, and fig. 2 is a flowchart of the method for determining a status of an object according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:

step S202, acquiring a video obtained by shooting a target object by the camera equipment;

step S204, acquiring sound acquired by acquiring the sound of the target object by sound acquisition equipment;

step S206, determining the target state of the target object based on the video and the sound.

In the above-described embodiments, the camera device may be a monitoring device installed in a target area, the target object may be a person, and the sound collection device may be a microphone, a recording device, or the like installed in the target area. The state of the target object may include a psychological state of the target object, such as tension, excitement, worry, lying, or panic.

In the above-described embodiments, the expression, actions, and the like of the target object may be determined from the video obtained by the camera device shooting the target object, and the state of the object is then determined from the target object's expression and actions in combination with the sound.

In the above embodiment, the acquired video may be a recorded video or a video acquired in real time. When the video is a recorded video, it is divided into segments, and the psychological state of the target object is determined for each time period. When the video is a real-time video, the psychological state of the target object is analyzed in real time from the video, the state is analyzed in real time from the collected sound, and the real-time psychological state of the target object is determined comprehensively from the state determined by the video and the state determined by the sound.

Optionally, the execution body of the above steps may be a background processor or another device with similar processing capability, or a machine integrating at least an image acquisition device, a sound acquisition device, and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, the sound acquisition device may include a sound acquisition module such as a microphone, and the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.

According to the invention, the video obtained by shooting the target object by the camera shooting equipment is acquired, the sound obtained by collecting the sound of the target object by the sound collecting equipment is acquired, and the state of the target object is determined according to the video and the sound. The state of the object can be determined by integrating sound and video, so that the problem of inaccurate state determination of the object in the related art can be solved, and the accuracy rate of determining the state of the object is improved.

In an exemplary embodiment, a flow chart for determining a target state of the target object based on the video and the sound may refer to fig. 3, as shown in fig. 3, the flow chart includes:

step S302, segmenting the video according to a preset time period to obtain a plurality of video segments;

step S304, determining a target sound segment corresponding to each of the plurality of video segments, wherein the target sound segment is the sound collected during the time period in which the corresponding video segment was captured;

step S306, analyzing the video clip and the target sound clip to determine the state of the target object in each time period to obtain a plurality of intermediate states;

step S308, determining a target state of the target object based on the plurality of intermediate states.

In the above embodiment, the video may be segmented according to a predetermined time period to obtain a plurality of video segments, and the target sound segment acquired while each video segment was shot is determined. The video segments and sound segments are analyzed to determine an intermediate state of the object for each time period, and the target state of the target object is determined based on the plurality of intermediate states. The predetermined time period may be, for example, 30 s, 1 min, or 5 min; the invention does not limit this.
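The segmentation step described above can be sketched as follows. This is an illustrative Python sketch; the names `Segment` and `split_into_segments` are assumptions for illustration, not from the source. Since the matching sound clip is simply the audio captured over the same interval, the same boundaries apply to both media.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    start: float  # seconds from the beginning of the recording
    end: float

def split_into_segments(duration: float, period: float) -> list[Segment]:
    """Cut a recording of `duration` seconds into consecutive segments of
    `period` seconds; the last segment may be shorter than the period."""
    segments = []
    t = 0.0
    while t < duration:
        segments.append(Segment(t, min(t + period, duration)))
        t += period
    return segments

# A 150-second recording cut with a 60-second period yields three segments:
# (0, 60), (60, 120), and a shorter tail (120, 150).
video_segments = split_into_segments(duration=150.0, period=60.0)
print([(s.start, s.end) for s in video_segments])
```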

In the above embodiment, when the target object is being captured, the camera device and the sound collection device start recording and are checked for normal operation; after recording, the material is automatically cut into small segments, each segment is analyzed locally, and the local results are integrated into a global analysis.

In an exemplary embodiment, a flow chart for analyzing the video segments and the target sound segments to determine the intermediate state of the target object in each time period may refer to fig. 4, as shown in fig. 4, where the flow chart includes:

step S402, in a case where the video includes a plurality of channel videos, determining a target video segment comprising the segments of the plurality of channel videos shot at the same time;

step S404, identifying a first video clip to determine the expression of the target object, wherein the first video clip is a clip included in the target video clip;

step S406, identifying a second video clip to determine the action of the target object, wherein the second video clip is a clip included in the target video clip;

step S408, analyzing and processing the target sound to determine the characteristics of the target sound;

step S410, determining the intermediate state of the target object based on the expression, the action and the target sound characteristic.

In the above-described embodiment, the camera device may comprise a plurality of camera devices, and different camera devices may shoot different parts of the target object to obtain a plurality of videos. The plurality of videos are divided according to the predetermined time period to determine a plurality of video segments, and the target video segments shot at the same time are determined among them. A first video segment included in the target video segments is identified to determine the expression of the target object, and a second video segment is identified to determine the action of the target object; the sound collected at the same time is analyzed to determine the sound feature, and the intermediate state of the object in each time period is determined from the object's expression, action, and sound feature. The analysis may use machine learning: an initial model is trained with multiple sets of training data to obtain a target model, and the target sound is input into the target model to obtain the target sound feature, where each set of training data includes a sound and its sound features. The analysis may, of course, also use other approaches, for example inputting the target sound into analysis software to obtain the target sound feature; the invention does not limit the manner of analysis.
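The alignment of same-time segments across several channel videos can be sketched as follows; the channel names and the function name are illustrative assumptions, and the sketch assumes every channel was cut on the same period boundaries:

```python
def group_by_interval(channels: dict[str, list[tuple[float, float]]]) -> dict:
    """Group the segments of several channel videos by the (start, end)
    interval over which they were shot, so that segments captured at the
    same time can be analyzed together as one target video segment."""
    grouped: dict[tuple[float, float], list[str]] = {}
    for name, segments in channels.items():
        for interval in segments:
            grouped.setdefault(interval, []).append(name)
    return grouped

channels = {
    "face_cam": [(0.0, 60.0), (60.0, 120.0)],  # expression channel
    "body_cam": [(0.0, 60.0), (60.0, 120.0)],  # action channel
}
target_segments = group_by_interval(channels)
print(target_segments[(0.0, 60.0)])  # both channels cover the first minute
```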

In the above embodiment, the times and images at which the first camera device captures micro-expressions may be determined, along with the times and images at which the second camera device captures gestures and posture actions, and the loudness, timbre, tone, pitch, speech rate, and speech intervals of the sound under normal conditions. Data are collected for each of these parameters and then segmented into segment 1, segment 2, and segment 3, arranged in parallel along the time axis, with the sound recording serving as a common reference. In this way, the candidate's psychological changes within the same time period are judged from different angles, and the different readings are integrated to find the accurate current psychological state.
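The sound parameters mentioned above (loudness, pitch, and so on) can be approximated directly from raw audio samples. The sketch below is an illustrative assumption, not the method the source uses: it computes two simple proxies with numpy — RMS amplitude as a loudness proxy and zero-crossing rate as a rough pitch proxy.

```python
import numpy as np

def sound_features(samples: np.ndarray, sample_rate: int) -> dict:
    """Compute simple stand-in features for a sound segment:
    RMS amplitude (loudness proxy) and zero-crossing rate per second."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Count sign changes between consecutive samples.
    crossings = int(np.count_nonzero(np.diff(np.sign(samples))))
    zcr = crossings / (len(samples) / sample_rate)
    return {"loudness": rms, "zero_crossing_rate": zcr}

# One second of a 100 Hz test tone sampled at 8 kHz; a pure sine has an
# RMS of about 0.707 and crosses zero roughly twice per cycle.
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 100 * t)
print(sound_features(tone, 8000))
```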

In the above embodiment, the expression, action, and the like of the target object may also be determined from the images captured by a single camera device. When there is one camera device, there is likewise one collected video file; the video is divided according to the predetermined time into a plurality of video segments, each segment is analyzed independently to determine the target object's expression and action in that segment, and the intermediate state of the target object in each segment is determined comprehensively by combining the sound collected at the same time.

In an exemplary embodiment, a flow chart for determining the intermediate state of the target object based on the expression, the action and the target sound feature may refer to fig. 5, as shown in fig. 5, the flow chart includes:

step S502, determining a first state of the target object based on the expression;

step S504, determining a second state of the target object based on the action;

step S506, determining a third state of the target object based on the target sound characteristic;

step S508, determining an intermediate state of the target object based on the first state, the second state and the third state.

In this embodiment, the first state of the target object may be determined by analyzing the expression, the second state by analyzing the action, and the third state by analyzing the target sound feature. The first, second, and third states are then analyzed together to determine the intermediate state of the target object.

In an exemplary embodiment, a flowchart for determining an intermediate state of the target object based on the first state, the second state and the third state may refer to fig. 6, as shown in fig. 6, where the flowchart includes:

step S602, determining each sub-state included in the first state, the second state, and the third state, respectively;

step S604, counting the number of each sub-state;

step S606, sorting the sub-states in descending order of their counts, and determining a preset number of the top-ranked sub-states as the intermediate state of the target object.

In the present embodiment, for example, the first state of the target object determined from image segment 1 includes tension, excitement, panic, and worry; the second state determined from image segment 2 includes tension, possible lying, and panic; and the third state determined from image segment 3 includes panic and possible lying. Tallying the sub-states gives: tension, 2 times; excitement, 1 time; panic, 3 times; worry, 1 time; possible lying, 2 times. It can therefore be concluded that the intermediate state of the target object is panic and tension, and thus that the target object may be lying. This is combined with the recording over the same interval: if the loudness is moderate but the tone is flat with abnormal, periodic variations, the speech rate is fast, and the speaking intervals are short, the candidate is considered tense and possibly lying, while excitement is ruled out because the loudness remains moderate throughout. The conclusion is therefore that the candidate is tense and may be lying.
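The tally in this example can be reproduced with a small voting sketch using `collections.Counter`; the state labels mirror the example above, while the function name and the `top_n` default are illustrative assumptions:

```python
from collections import Counter

def intermediate_state(per_segment_states: list[list[str]], top_n: int = 2) -> list[str]:
    """Count every sub-state across all segments, sort in descending order
    of count, and keep the top_n most frequent as the intermediate state."""
    counts = Counter(s for states in per_segment_states for s in states)
    return [state for state, _ in counts.most_common(top_n)]

segment_1 = ["tension", "excitement", "panic", "worry"]
segment_2 = ["tension", "possible lying", "panic"]
segment_3 = ["panic", "possible lying"]
# panic is counted 3 times and tension 2 times, so they head the ranking.
print(intermediate_state([segment_1, segment_2, segment_3]))
```

Ties are broken by first occurrence, since `most_common` uses a stable sort over insertion order; a real system would likely weight the three analysis channels rather than count them equally.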

In an exemplary embodiment, a flowchart for acquiring a video obtained by shooting a target object by an image capturing apparatus may be referred to in fig. 7, and as shown in fig. 7, the flowchart includes:

step S702, acquiring a first video obtained by shooting a first part of the target object by a first camera device included in the camera device;

step S704, acquiring a second video obtained by shooting a second part of the target object by a second image capturing apparatus included in the image capturing apparatus;

step S706, determining the first video and the second video as the video.

In this embodiment, the camera device may include a plurality of devices. For example, a first part of the target object, such as the face, may be shot by the first camera device, and a second part, such as the hands or legs, by the second camera device. The videos shot by the plurality of camera devices together constitute the video. It should be noted that the first camera device and the second camera device may each comprise a plurality of camera devices, each shooting a different part.

In an exemplary embodiment, a flowchart of the method for determining the object state may refer to fig. 8, as shown in fig. 8, the flowchart includes all the steps shown in fig. 2, in addition to:

step S802, determining the evaluation result of the target object based on the state of the target object;

and step S804, executing a prompting operation under the condition that the evaluation result meets a preset condition.

In the above embodiment, after the state of the target object is determined, the target object may be evaluated according to that state to determine an evaluation result, and a reminding operation is executed when the evaluation result is determined to satisfy a predetermined condition. The predetermined condition includes, for example, lying or falsification.
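A minimal sketch of this evaluate-then-prompt flow might look as follows; the flagged-state set, function names, and message wording are assumptions for illustration, not from the source:

```python
# Assumed predetermined condition: any of these states triggers a reminder.
FLAGGED_STATES = {"lying", "panic"}

def evaluate(states):
    """Evaluate the target object's determined states against the condition."""
    flags = [s for s in states if s in FLAGGED_STATES]
    return {"flags": flags, "alert": bool(flags)}

def maybe_prompt(result):
    # Execute the prompting operation only when the condition is met.
    if result["alert"]:
        return "Review recommended: detected " + ", ".join(result["flags"])
    return None

# panic and lying are both flagged, so a reminder message is produced.
print(maybe_prompt(evaluate(["panic", "tension", "lying"])))
```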

For example, a candidate (the target object) walks slowly after entering the interview room, nodding and bowing slightly; while answering questions the candidate speaks quietly, has beads of sweat on the forehead, and sits upright on the front edge of the chair, occupying less than half of it, all of which indicates considerable pressure. Yet during the stress portion of the interview the candidate is at ease, gives correct answers without taking time to think, the eyes pause only momentarily in one place, and neither the gaze nor the hand movements show any sign of thinking. It can then be determined that the candidate is reciting prepared interview answers rather than actually solving the problem and responding to the stress. After the candidate's state is determined, an objective conclusion can be given to assist HR in selection, based on the recording of the candidate, including the video and audio material. Through segment selection and analysis of this material, it may be found that, while answering questions about the resume, the candidate's eyes move irregularly, the face twitches slightly, the legs shake, sweat appears on the forehead, the answers consist mostly of short phrases rather than complete sentences, and the head keeps turning slightly — all signs of searching for an answer. If the candidate shows this behavior only in one particular section and is normal the rest of the time, it can be determined from this that the candidate's resume may be fraudulent and needs to be verified.

The following describes a method for determining a state of an object in accordance with a specific embodiment:

fig. 9 is a flowchart of a method for determining a status of an object according to an embodiment of the present invention, where the flowchart includes:

step S902, a camera (corresponding to the camera equipment) captures micro expressions, gestures and posture actions; a sound recorder (corresponding to the sound collecting device) records sound;

step S904, integrating the data: determining the times and images at which the different cameras capture micro-expressions; determining the times and images at which the different cameras capture gestures and posture actions; and determining the loudness, timbre, tone, speech rate, and speech intervals of the recording under normal conditions;

step S906, image segmentation: the data are collected according to the above parameters and then segmented into segment 1, segment 2, and segment 3, arranged in parallel along the time axis, with the sound recording serving as a comparison reference;

step S908, comparing the segments, preliminarily determining the psychological state, and outputting the psychological state in each segment as well as the overall psychological state.

In the foregoing embodiment, image recognition techniques combined with psychology can determine the candidate's psychological state over a given period and which conclusion that state is trending toward, and the video segments can be analyzed to tell whether the candidate reached a conclusion through thinking or by reciting a memorized answer. By recording the candidate in real time from all angles, in both video and audio, and integrating image recognition with multi-sensor analysis, the candidate's psychological state during the interview can be analyzed in combination with the interview strategy used in the HR interview, presenting HR with the most suitable candidate. Particular attention can be paid to moments when the candidate frowns or smiles, and, combined with HR's comparison of answers to its questions, the talent best suited to the company can be selected. The method can also serve as a reference for the technical support work of subsequent company recruitment.

Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.

In this embodiment, a device for determining an object state is further provided, where the device is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.

Fig. 10 is a block diagram showing the structure of an apparatus for determining the state of an object according to an embodiment of the present invention. As shown in Fig. 10, the apparatus includes:

a first obtaining module 1002, configured to obtain a video obtained by shooting a target object by a camera device;

a second obtaining module 1004, configured to obtain sound obtained by a sound collection device collecting the sound of the target object;

a determining module 1006, configured to determine a target state of the target object based on the video and the sound.

In an exemplary embodiment, the determining module 1006 may determine the target state of the target object based on the video and the sound by: segmenting the video according to a preset time period to obtain a plurality of video clips; determining a target sound clip corresponding to each of the plurality of video clips, wherein the target sound clip is the sound clip collected within the time period in which the corresponding video clip was captured; analyzing the video clips and the target sound clips to determine the state of the target object in each time period, so as to obtain a plurality of intermediate states; and determining the target state of the target object based on the plurality of intermediate states.
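Purely as an illustration of the segmentation-and-pairing step above (the `Clip` type, the `segment` helper, and the concrete durations are assumptions, not part of the embodiment): since the video and the sound are recorded over the same timeline, the target sound clip for a video clip is simply the sound covering the same time interval.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start: float  # seconds from the start of the recording
    end: float

def segment(total_duration: float, period: float) -> list[Clip]:
    """Split a recording of total_duration seconds into clips of `period` seconds;
    the last clip may be shorter."""
    clips = []
    t = 0.0
    while t < total_duration:
        clips.append(Clip(start=t, end=min(t + period, total_duration)))
        t += period
    return clips

# Video and sound share one timeline, so segmenting both with the same
# preset period yields aligned (video clip, sound clip) pairs.
video_clips = segment(125.0, 30.0)
sound_clips = segment(125.0, 30.0)
pairs = list(zip(video_clips, sound_clips))
```

Each pair can then be handed to the analysis step to produce one intermediate state per time period.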

In an exemplary embodiment, the determining module 1006 may analyze the video clip and the target sound clip to determine the intermediate state of the target object in each time period by: in a case where the video includes a plurality of channel videos, determining a target video segment, photographed at the same time, included in the plurality of channel videos; identifying a first video clip to determine an expression of the target object, wherein the first video clip is a clip included in the target video segment; identifying a second video clip to determine an action of the target object, wherein the second video clip is a clip included in the target video segment; analyzing and processing the target sound to determine a feature of the target sound; and determining the intermediate state of the target object based on the expression, the action, and the target sound feature.

In an exemplary embodiment, the determining module 1006 may determine the intermediate state of the target object based on the expression, the action, and the target sound feature by: determining a first state of the target object based on the expression; determining a second state of the target object based on the action; determining a third state of the target object based on the target sound feature; determining an intermediate state of the target object based on the first state, the second state, and the third state.

In an exemplary embodiment, the determining module 1006 may determine the intermediate state of the target object based on the first state, the second state, and the third state by: determining the sub-states respectively included in the first state, the second state, and the third state; counting the number of occurrences of each sub-state; and sorting the sub-states in descending order of their counts, and determining a preset number of the leading sub-states as the intermediate state of the target object.
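A minimal sketch of this counting-and-ranking step, assuming the sub-states are represented as strings; the function name and the `top_n` parameter are illustrative, not from the embodiment:

```python
from collections import Counter

def intermediate_state(first: list[str], second: list[str],
                       third: list[str], top_n: int = 1) -> list[str]:
    """Pool the sub-states of the three per-modality states, count how often
    each sub-state occurs, sort by descending count, and keep the top_n
    sub-states as the intermediate state of the target object."""
    counts = Counter(first + second + third)
    return [state for state, _ in counts.most_common(top_n)]

# Expression suggests "nervous"; action suggests "nervous" and "thinking";
# the sound feature also suggests "nervous". "nervous" wins 3-to-1.
print(intermediate_state(["nervous"], ["nervous", "thinking"], ["nervous"]))
```

This amounts to a majority vote across the expression, action, and sound modalities, which is one straightforward reading of "sorting the sub-states in descending order of count".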

In an exemplary embodiment, the first obtaining module 1002 may obtain the video obtained by shooting the target object by the camera device by: acquiring a first video obtained by shooting a first part of the target object by a first camera device included in the camera device; acquiring a second video obtained by shooting a second part of the target object by a second camera device included in the camera device; and determining the first video and the second video as the video.
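As an illustration only, the two-channel acquisition can be modeled as below; the field names (`face_video`, `body_video`) and the assignment of the first and second parts to the face and upper body are assumptions, since the embodiment does not fix which parts the two cameras capture:

```python
from dataclasses import dataclass

@dataclass
class MultiViewVideo:
    face_video: str   # path to the first video (assumed: the face of the target)
    body_video: str   # path to the second video (assumed: the upper body)

def acquire_video(first_camera_path: str, second_camera_path: str) -> MultiViewVideo:
    # The two cameras shoot different parts of the same target object;
    # the "video" handed to the analysis step is both channels together.
    return MultiViewVideo(face_video=first_camera_path,
                          body_video=second_camera_path)

video = acquire_video("cam0.mp4", "cam1.mp4")
```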

In one exemplary embodiment, the apparatus may be configured to determine an evaluation result of the target object based on the state of the target object after determining the state of the target object based on the video and the sound; and executing a prompting operation when the evaluation result meets a preset condition.
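The evaluation-and-prompt step might look like the following sketch; the scoring rule, the "calm" sub-state, and the 0.5 threshold are all invented for illustration, as the embodiment leaves the evaluation criterion and the preset condition open:

```python
def evaluate(intermediate_states: list[str]) -> float:
    """Toy scoring rule (an assumption): the evaluation result is the share
    of 'calm' sub-states among all intermediate states."""
    if not intermediate_states:
        return 0.0
    return intermediate_states.count("calm") / len(intermediate_states)

def maybe_prompt(score: float, threshold: float = 0.5) -> bool:
    # Execute a prompting operation when the evaluation result meets
    # the preset condition (here: score at or above the threshold).
    if score >= threshold:
        print("candidate flagged for HR review")
        return True
    return False
```

A caller would chain the two: `maybe_prompt(evaluate(states))`, so the prompt fires only when the aggregated state of the target object satisfies the condition.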

It should be noted that the above modules may be implemented by software or hardware; for the latter, the following implementations are possible, but not limited to: the modules are all located in the same processor; alternatively, the modules are respectively located in different processors in any combination.

Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.

In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.

In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.

For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.

It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by a computing device. In some cases, the steps shown or described may be performed in an order different from that described herein, or the modules may be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
