Micro-expression compression method based on three-branch decision and optical flow filtering mechanism

Document No. 8357 · Published 2021-09-17

1. A micro-expression compression method based on three-branch decision and an optical flow filtering mechanism, characterized in that the method comprises the following steps:

S1: select a micro-expression data set A = {V1, V2, V3, …, Vt} and perform image completion, size unification, and image graying preprocessing;

S2: use the MTCNN multi-task cascaded neural network to locate and crop the face region in the pictures of the video clips V1, V2, V3, …, Vt, and unify the picture size;

S3: for each video Vi = {v1, v2, …, vt}, every two consecutive video segments vi and vi+1 generate an optical flow oi, converting the video Vi into the optical flow set Oi = {o1, o2, …, o(t-1)};

S4: for Oi = {o1, o2, …, o(t-1)}, obtain the lateral displacement ui(x, y) and longitudinal displacement vi(x, y) of each optical flow and compute the intensity of each optical flow by the following expression, where W denotes the horizontal and H the vertical pixel size:

ρi = Σ_{x=1}^{W} Σ_{y=1}^{H} √( ui(x, y)² + vi(x, y)² );

S5: for the current optical flow oi, obtain the average pixel intensity over the current optical flow; the expression is as follows:

ρ̄i = ρi / (W · H);

S6: apply the weighting function to each optical flow oi to assign it a weight ω(oi);

S7: repeat S3–S6 to weight the optical flows of every video clip set, obtaining for each video Vi the corresponding optical-flow weight set ωi;

S8: define the three-way thresholds (α, β) and screen the set ωi into branches by comparison; the rules are defined as:

when ω(oi) < α, optical flow oi is divided into the NEG(α,β)(O) domain set;

when α ≤ ω(oi) < β, optical flow oi is divided into the BND(α,β)(O) domain set;

when ω(oi) ≥ β, optical flow oi is divided into the POS(α,β)(O) domain set;

S9: assign the video clips to sets according to the optical-flow branching rule, defined as:

when oi ∈ POS(α,β)(O), video clip vi+1 is defined as vi+1 ∈ POS(α,β)(V);

when oi ∈ BND(α,β)(O), video clip vi+1 is defined as vi+1 ∈ BND(α,β)(V);

when oi ∈ NEG(α,β)(O), video clip vi+1 is defined as vi+1 ∈ NEG(α,β)(V);

S10: according to the optical-flow union BND(α,β)(V) ∪ POS(α,β)(V), update the video segment set and reorder and integrate the segments in temporal order to obtain a new video segment sequence set V;

S11: define a convergence coefficient η; repeat S3–S10 until the number of iterations reaches η, or until the NEG(α,β)(O) domain set remains empty under the self-growing threshold, then stop iterating;

S12: obtain the resulting high-quality semantic video clip set V;

S13: extract dynamic-texture features from the optical-flow-filtered video V, obtaining video feature values on the spatial plane XY and the spatio-temporal planes XT and YT;

S14: train a classifier on the video features obtained in S13 to produce a micro-expression recognition model for final micro-expression recognition.

Background

A micro-expression is a subtle, involuntary facial expression, usually triggered in uncontrolled situations by complex environmental and personal factors. Compared with macro-expressions, micro-expressions are generally imperceptible: owing to human physiology, these involuntary expressions appear as extremely rapid and subtle facial movements. The leakage of a micro-expression reveals the genuine emotion a person suppresses and tries to hide; current micro-expression research mainly considers several basic emotions, including happiness, anger, disgust, fear, surprise and others. Because micro-expressions are a physiological response, they reveal a real, uncontrollable psychological state.

Analysis of micro-expression data shows that such videos contain a large number of frames with low semantic information: frames in which no expression occurs and whose morphology and semantics barely change. Analysis of micro-expression video data further shows that the expressive region is mainly concentrated between the onset frame and the offset frame, with the semantic peak reached at the apex frame. Unprocessed data therefore contain a large amount of low-quality and unbalanced data.

Disclosure of Invention

The invention provides a micro-expression compression method based on three-branch decision and an optical flow filtering mechanism: it defines a weighting function from optical-flow attributes and, based on a rough-set probabilistic decision method, provides a micro-expression research method with redundancy-removal and video-compression capabilities.

The invention is realized by the following technical scheme:

a micro-expression compression method based on three-branch decision and optical flow filtering mechanism comprises the following steps:

S1: select a micro-expression data set A = {V1, V2, V3, …, Vt} and perform preprocessing such as image completion, size unification, and image graying;

S2: adopt the MTCNN multi-task cascaded neural network to locate and crop the face region in the pictures of the video clips V1, V2, V3, …, Vt, and unify the picture size;

S3: for each video Vi = {v1, v2, …, vt}, every two consecutive video segments vi and vi+1 generate an optical flow oi, converting the video Vi into the optical flow set Oi = {o1, o2, …, o(t-1)};

S4: for Oi = {o1, o2, …, o(t-1)}, obtain the lateral displacement ui(x, y) and longitudinal displacement vi(x, y) of each optical flow and compute the intensity of each optical flow by the following expression, where W denotes the horizontal and H the vertical pixel size:

ρi = Σ_{x=1}^{W} Σ_{y=1}^{H} √( ui(x, y)² + vi(x, y)² );

S5: for the current optical flow oi, obtain the average pixel intensity over the current optical flow; the expression is as follows:

ρ̄i = ρi / (W · H);

S6: apply the weighting function to each optical flow oi to assign it a weight ω(oi);

S7: repeat S3–S6 to weight the optical flows of every video clip set, obtaining for each video Vi the corresponding optical-flow weight set ωi;
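A minimal NumPy sketch of the S4–S7 weighting pipeline, assuming each optical flow is an (H, W, 2) array of lateral/longitudinal displacements. The exact weighting expression is not preserved in the source, so normalising each flow's mean magnitude by the set maximum is an assumption:

```python
import numpy as np

def flow_weights(flows):
    """Weight each optical flow by its average pixel intensity (S4-S6).

    flows: list of (H, W, 2) arrays holding lateral (u) and
    longitudinal (v) per-pixel displacements.
    """
    averages = []
    for o in flows:
        u, v = o[..., 0], o[..., 1]
        magnitude = np.sqrt(u ** 2 + v ** 2)   # per-pixel intensity (S4)
        averages.append(magnitude.mean())      # average over W*H pixels (S5)
    averages = np.asarray(averages)
    return averages / averages.max()           # normalised weights (S6, assumed)

# toy example: three 4x4 flow fields of increasing strength
flows = [np.full((4, 4, 2), s) for s in (0.1, 0.3, 0.6)]
w = flow_weights(flows)
```

The flows themselves would come from any dense optical-flow estimator (e.g. Farneback's method, which returns exactly this (H, W, 2) layout).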

S8: define the three-way thresholds (α, β) and screen the set ωi into branches by comparison; the rules are defined as:

when ω(oi) < α, optical flow oi is divided into the NEG(α,β)(O) domain set;

when α ≤ ω(oi) < β, optical flow oi is divided into the BND(α,β)(O) domain set;

when ω(oi) ≥ β, optical flow oi is divided into the POS(α,β)(O) domain set;
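The branch screening of S8 can be sketched as below; since the source's inequality conditions were lost in extraction, the comparison directions (low weight → NEG, high weight → POS) are assumptions, consistent with the embodiment in which growing α gradually empties the NEG region:

```python
def three_way_partition(weights, alpha, beta):
    """Split flow indices into NEG / BND / POS regions (S8).

    Assumed rule: weight < alpha          -> NEG (discard)
                  alpha <= weight < beta  -> BND (boundary)
                  weight >= beta          -> POS (keep)
    """
    neg = [i for i, w in enumerate(weights) if w < alpha]
    bnd = [i for i, w in enumerate(weights) if alpha <= w < beta]
    pos = [i for i, w in enumerate(weights) if w >= beta]
    return neg, bnd, pos

# thresholds from the embodiment: (alpha, beta) = (0.35, 0.6)
neg, bnd, pos = three_way_partition([0.1, 0.5, 0.9], 0.35, 0.6)
```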

S9: assign the video clips to sets according to the optical-flow branching rule, defined as:

when oi ∈ POS(α,β)(O), video clip vi+1 is defined as vi+1 ∈ POS(α,β)(V);

when oi ∈ BND(α,β)(O), video clip vi+1 is defined as vi+1 ∈ BND(α,β)(V);

when oi ∈ NEG(α,β)(O), video clip vi+1 is defined as vi+1 ∈ NEG(α,β)(V);

S10: according to the optical-flow union BND(α,β)(V) ∪ POS(α,β)(V), update the video segment set and reorder and integrate the segments in temporal order to obtain a new video segment sequence set V;

S11: define a convergence coefficient η; repeat S3–S10 until the number of iterations reaches η, or until the NEG(α,β)(O) domain set remains empty under the self-growing threshold, then stop iterating;
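The outer loop of S3–S11 could be sketched like this, using fixed scalar per-segment scores in place of the recomputed optical-flow weights (a simplification: the actual method re-extracts flows from the retained segments on every pass):

```python
def compress(scores, eta=5, alpha=0.35, step=0.02):
    """Iteratively drop low-weight segments (NEG region) until the
    convergence coefficient eta is reached or a pass discards
    nothing, growing alpha by `step` after each pass (S10-S11)."""
    kept = list(scores)
    for _ in range(eta):
        neg = [s for s in kept if s < alpha]    # NEG region of this pass
        kept = [s for s in kept if s >= alpha]  # BND and POS survive
        alpha += step                           # threshold self-growth
        if not neg:                             # NEG stayed empty: converged
            break
    return kept

kept = compress([0.1, 0.2, 0.36, 0.5, 0.8])
```

In this toy run the first pass drops 0.1 and 0.2, the second (with alpha grown to 0.37) drops 0.36, and the third discards nothing, so iteration stops early.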

S12: obtain the resulting high-quality semantic video clip set V;

S13: extract dynamic-texture features from the optical-flow-filtered video V, obtaining video feature values on the spatial plane XY and the spatio-temporal planes XT and YT;
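The three-plane texture feature of S13 matches the pattern of LBP-TOP; the following is a simplified sketch (one centre slice per plane and a basic 3×3 LBP, where a full implementation would histogram every slice and use interpolated, uniform-pattern codes):

```python
import numpy as np

def lbp8(plane):
    """Basic 3x3 LBP code map for one 2-D plane (no interpolation)."""
    c = plane[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = plane[1 + dy:plane.shape[0] - 1 + dy,
                   1 + dx:plane.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

def lbp_top(volume):
    """Concatenate LBP histograms from the XY, XT and YT centre
    planes of a (T, H, W) video volume (S13, simplified)."""
    t, h, w = volume.shape
    planes = [volume[t // 2], volume[:, h // 2, :], volume[:, :, w // 2]]
    return np.concatenate([np.bincount(lbp8(p).ravel(), minlength=256)
                           for p in planes])

feat = lbp_top(np.random.default_rng(0).random((8, 16, 16)))
```

The resulting 3 × 256-bin histogram is the kind of fixed-length vector that S14 would feed to a classifier.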

S14: train a classifier on the video features obtained in S13 to produce a micro-expression recognition model for final micro-expression recognition.

Compared with the prior art, the invention has the following advantages:

1. The invention introduces rough-set probabilistic decision into the micro-expression field, performing information decisions through rough sets and opening a new direction for micro-expression research.

2. The method removes redundant picture information according to optical-flow change weights, effectively compresses the video clip information, and improves the semantic expressiveness between the retained information.

Drawings

FIG. 1 is a basic flow diagram of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. The embodiments described herein serve only to explain the technical solution of the present invention and do not limit it.

The invention will be further explained by the following embodiment; fig. 1 shows the basic flow chart of the micro-expression compression method based on three-branch decision and optical flow filtering mechanism.

1. Take the micro-expression CASME II data set as experimental data; it contains 26 participants and 256 micro-expression video files V = {V1, V2, V3, …, V256}, with 5 emotion labels: happiness, disgust, fear, sadness and others. Define the initialization iteration threshold η = 5, the statistical count S = 0, and the thresholds (α, β) = (0.35, 0.6).

2. For the video file V1 = {v1, v2, …, v290}, composed of 290 video frame pictures, obtain the optical flow set O = {o1, o2, …, o289} according to the optical-flow extraction rule; the 289 optical flows express the semantic change relations between pictures.

3. Using the optical-flow weighting function defined in S4–S6, perform weight calculation on O = {o1, o2, …, o289} to obtain the optical-flow weight set ω(O).

4. Perform optical-flow filtering on the weight set ω(O) according to the thresholds (α, β) = (0.35, 0.6).

5. Traverse ω(O) according to step S8: when ω(oi) < α, optical flow oi is divided into the NEG(α,β)(O) domain set; when α ≤ ω(oi) < β, oi is divided into the BND(α,β)(O) domain set; otherwise oi is divided into POS(α,β)(O).

6. According to step S9, classify the video frames into POS(α,β)(V), BND(α,β)(V) and NEG(α,β)(V) following the optical-flow classification.

7. Repeat steps 2–6; after each pass, increase the iteration threshold α by 0.02 and the statistical count S by 1; stop when S ≥ η.
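The threshold self-growth schedule of step 7 works out as follows:

```python
# alpha grows by 0.02 per completed pass; with eta = 5 the thresholds
# used across the five passes are:
alphas = [round(0.35 + 0.02 * s, 2) for s in range(5)]
print(alphas)  # -> [0.35, 0.37, 0.39, 0.41, 0.43]
```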

8. Merge POS(α,β)(V) and BND(α,β)(V) and reorder, compressing video V1 into V1′ = {v1′, v2′, …, v207′}, where v1′, v2′, …, v207′ are the reordered picture numbers.

9. Similarly, filter the other videos of V according to steps 2–8 to obtain the new compressed video file V′ = {V1′, V2′, V3′, …, V256′}.

10. Extract the feature Hα,β of V′ for classification and recognition, obtaining a recognition rate of about 51%.

The foregoing merely represents preferred embodiments of the invention, described in considerable detail, and should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes, modifications and substitutions without departing from the spirit of the present invention, and these all fall within its scope. The protection scope of this patent is therefore subject to the appended claims.
