Individual gait recognition method based on trinocular vision data
1. An individual gait recognition method based on trinocular vision data, characterized by comprising the following steps:
S1: collecting human body contour information at multiple angles;
S2: extracting the gait dynamics characteristics under the human body contour information at each angle to form a gait pattern library;
S3: constructing a walking deep learning model at each angle according to the gait dynamics characteristics extracted in step S2;
S4: acquiring the gait dynamics characteristics under the multi-angle human body contour information to be identified, and computing recognition errors against the existing gait pattern library based on these characteristics;
S5: determining, according to the magnitude of the obtained recognition errors, the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task, thereby realizing gait recognition.
2. The individual gait recognition method based on trinocular vision data according to claim 1, wherein in step S1, three cameras arranged at angles to one another are used to capture human body contour images from three angles.
3. The individual gait recognition method based on trinocular vision data according to claim 2, wherein the extraction in step S2 of the gait dynamics characteristics under the human body contour information at each angle specifically comprises:
performing background subtraction and morphological processing on the human body contour image sequences from the three cameras respectively to obtain binarized gait silhouettes of the walking human body under the three cameras, and extracting human motion area characteristic parameters to form a nonlinear gait dynamics characteristic matrix, wherein the human motion area characteristic parameters comprise the total area A1 of the human motion silhouette, the total area A2 of the lower-limb silhouette, and the area A3 of the triangle formed by the line connecting the two tiptoes and the ground.
4. The individual gait recognition method based on trinocular vision data according to claim 3, wherein the gait pattern library in step S2 is formed by:
establishing a nonlinear dynamics model for the extracted human motion area characteristic parameters, constructing a neural network to extract the nonlinear gait dynamics, and storing the obtained result in the gait pattern library in the form of a constant neural network representing the gait pattern.
5. The individual gait recognition method based on trinocular vision data according to claim 4, wherein the construction in step S3 of a walking deep learning model at each angle specifically comprises:
inputting the nonlinear gait dynamics characteristic matrices obtained from the three cameras into deep neural network models for secondary feature extraction and model training, storing the gait feature values F1, F2 and F3 obtained after the secondary feature extraction in the gait pattern library, and taking the trained deep neural network models as the gait recognition deep learning models M1, M2 and M3 corresponding to the three cameras.
6. The individual gait recognition method based on trinocular vision data according to claim 5, wherein the computation in step S4 of recognition errors against the existing gait pattern library based on the gait dynamics characteristics specifically comprises:
constructing dynamic estimators from the constant neural networks in the three camera views respectively, calculating the difference between the extracted gait dynamics characteristics and the estimates produced by the dynamic estimators to form recognition errors ε, and extracting the minimum errors ε1, ε2 and ε3 obtained under the three cameras.
7. The individual gait recognition method based on trinocular vision data according to claim 6, wherein the dynamic estimator is constructed as:
$$\dot{\bar\chi}^k = B\,(\bar\chi^k - x) + \bar W^{kT} S(x), \qquad k = 1, \ldots, M,$$
where k = 1, ..., M indexes the k-th estimator, $\bar\chi^k$ represents the state of the dynamic estimator, x is the state variable of the input test pattern, B is a constant empirical parameter, and $\bar W^{kT} S(x)$ represents the constant neural network, in which $\bar W^{kT}$ is the weight matrix of the neural network and $S(x)$ denotes the radial basis functions used.
8. The individual gait recognition method based on trinocular vision data according to claim 7, wherein the difference between the extracted gait dynamics characteristics and the estimate produced by the dynamic estimator forms the recognition error ε, specifically:
$$\tilde\chi_i^k = \bar\chi_i^k - x_i, \qquad \dot{\tilde\chi}_i^k = b_i\,\tilde\chi_i^k + \bar W_i^{kT} S_i(x) - \phi_i(x; p),$$
where $\tilde\chi_i^k$ is the state estimation error, $\bar\chi_i^k$ represents the state of the dynamic estimator, $x_i$ is the state variable of the input test pattern, and $b_i$ is a constant empirical parameter;
$\bar W_i^{kT} S_i(x)$ represents the constant neural network, in which $\bar W_i^{kT}$ is the weight matrix of the neural network and $S_i(x)$ denotes the radial basis functions used; $\phi_i(x; p)$ represents the nonlinear gait dynamics of the test pattern; the subscript i = 1, 2, 3 refers to the gait dynamics characteristics under the human motion area characteristic parameters obtained from the three cameras; and p denotes the system parameters, different values of p corresponding to different individuals and hence to different gait dynamical systems.
9. The individual gait recognition method based on trinocular vision data according to claim 8, wherein the minimum errors ε1, ε2 and ε3 obtained under the three cameras are extracted as:
$$\varepsilon_i = \min_{1 \le k \le M} \frac{1}{T_c} \int_{t-T_c}^{t} \bigl\|\tilde\chi_i^k(\tau)\bigr\|\, d\tau, \qquad i = 1, 2, 3,$$
where $T_c$ denotes the gait cycle duration and t the current time.
10. The individual gait recognition method based on trinocular vision data according to claim 9, wherein the determination in step S5, according to the magnitude of the obtained recognition errors, of the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task, thereby realizing gait recognition, specifically comprises:
comparing the magnitudes of the minimum errors and selecting a weight allocation scheme according to their ordering, different schemes assigning different weights;
fusing, according to the allocated weights, the feature data obtained after secondary feature learning numerically by weighted averaging to obtain a spliced gait feature value, and inputting the spliced gait feature value into a seven-layer fully connected network for classification to complete gait recognition.
Background
Although many gait recognition algorithms have emerged, most existing work depends on a single, specific observation angle: gait features are collected and analyzed from only one viewpoint, so the recognition rate is low under varying clothing, scene backgrounds, bag-carrying styles, footwear and walking speeds. Gait features collected from a single observation angle are limited and cannot fully and deeply describe the internal information of the walking process, and therefore cannot support a gait recognition system with strong robustness and strong anti-interference capability.
Chinese patent publication No. CN111814624A, published on October 23, 2020, discloses a method for recognizing the gait of pedestrians in video, a gait recognition training method, and a storage device. The pedestrian gait training method comprises: detecting pedestrian pictures in a video and extracting pedestrian contour maps; inputting the pedestrian contour maps into a convolutional neural network and processing them with a spatial attention mechanism and then a frame attention mechanism to obtain feature maps; partitioning the feature maps along the frame dimension and computing the triplet loss of each partitioned feature map; and optimizing the triplet loss until convergence to obtain the pedestrian gait recognition result. That patent adopts a single observation angle and cannot fully and deeply depict the internal information of the walking process.
Disclosure of Invention
The invention provides an individual gait recognition method based on trinocular vision data, which fuses gait information from multiple viewing angles and improves the robustness of gait recognition.
In order to solve the above technical problem, the technical scheme of the invention is as follows:
An individual gait recognition method based on trinocular vision data comprises the following steps:
S1: collecting human body contour information at multiple angles;
S2: extracting the gait dynamics characteristics under the human body contour information at each angle to form a gait pattern library;
S3: constructing a walking deep learning model at each angle according to the gait dynamics characteristics extracted in step S2;
S4: acquiring the gait dynamics characteristics under the multi-angle human body contour information to be identified, and computing recognition errors against the existing gait pattern library based on these characteristics;
S5: determining, according to the magnitude of the obtained recognition errors, the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task, thereby realizing gait recognition.
Preferably, in step S1, three cameras arranged at angles to one another are used to capture human body contour images from three angles.
Preferably, the extraction in step S2 of the gait dynamics characteristics under the human body contour information at each angle specifically comprises:
performing background subtraction and morphological processing on the human body contour image sequences from the three cameras respectively to obtain binarized gait silhouettes of the walking human body under the three cameras, and extracting human motion area characteristic parameters to form a nonlinear gait dynamics characteristic matrix, wherein the human motion area characteristic parameters comprise the total area A1 of the human motion silhouette, the total area A2 of the lower-limb silhouette, and the area A3 of the triangle formed by the line connecting the two tiptoes and the ground.
Preferably, the gait pattern library in step S2 is formed by:
establishing a nonlinear dynamics model for the extracted human motion area characteristic parameters, constructing a neural network to extract the nonlinear gait dynamics, and storing the obtained result in the gait pattern library in the form of a constant neural network representing the gait pattern.
Preferably, the construction in step S3 of a walking deep learning model at each angle specifically comprises:
inputting the nonlinear gait dynamics characteristic matrices obtained from the three cameras into deep neural network models for secondary feature extraction and model training, storing the gait feature values F1, F2 and F3 obtained after the secondary feature extraction in the gait pattern library, and taking the trained deep neural network models as the gait recognition deep learning models M1, M2 and M3 corresponding to the three cameras.
Preferably, the computation in step S4 of recognition errors against the existing gait pattern library based on the gait dynamics characteristics specifically comprises:
constructing dynamic estimators from the constant neural networks in the three camera views respectively, calculating the difference between the extracted gait dynamics characteristics and the estimates produced by the dynamic estimators to form recognition errors ε, and extracting the minimum errors ε1, ε2 and ε3 obtained under the three cameras.
Preferably, the dynamic estimator is constructed as:
$$\dot{\bar\chi}^k = B\,(\bar\chi^k - x) + \bar W^{kT} S(x), \qquad k = 1, \ldots, M,$$
where k = 1, ..., M indexes the k-th estimator, $\bar\chi^k$ represents the state of the dynamic estimator, x is the state variable of the input test pattern, B is a constant empirical parameter (taking a value between −25 and −10), and $\bar W^{kT} S(x)$ represents the constant neural network, in which $\bar W^{kT}$ is the weight matrix of the neural network and $S(x)$ denotes the radial basis functions used.
Preferably, the calculation of the difference between the extracted gait dynamics characteristics and the estimate produced by the dynamic estimator, forming the recognition error ε, is specifically:
$$\tilde\chi_i^k = \bar\chi_i^k - x_i, \qquad \dot{\tilde\chi}_i^k = b_i\,\tilde\chi_i^k + \bar W_i^{kT} S_i(x) - \phi_i(x; p),$$
where $\tilde\chi_i^k$ is the state estimation error, $\bar\chi_i^k$ represents the state of the dynamic estimator, $x_i$ is the state variable of the input test pattern, and $b_i$ is a constant empirical parameter (taking a value between −25 and −10); $\bar W_i^{kT} S_i(x)$ represents the constant neural network, in which $\bar W_i^{kT}$ is the weight matrix of the neural network and $S_i(x)$ denotes the radial basis functions used; $\phi_i(x; p)$ represents the nonlinear gait dynamics of the test pattern; the subscript i = 1, 2, 3 refers to the gait dynamics characteristics under the human motion area characteristic parameters obtained from the three cameras; and p denotes the system parameters, different values of p corresponding to different individuals and hence to different gait dynamical systems.
Preferably, the minimum errors ε1, ε2 and ε3 obtained under the three cameras are extracted as:
$$\varepsilon_i = \min_{1 \le k \le M} \frac{1}{T_c} \int_{t-T_c}^{t} \bigl\|\tilde\chi_i^k(\tau)\bigr\|\, d\tau, \qquad i = 1, 2, 3,$$
where $T_c$ denotes the gait cycle duration and t the current time.
Preferably, in step S5, the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task is determined according to the magnitude of the obtained recognition errors, thereby realizing gait recognition, specifically:
comparing the magnitudes of the minimum errors and selecting a weight allocation scheme according to their ordering, different schemes assigning different weights;
fusing, according to the allocated weights, the feature data obtained after secondary feature learning numerically by weighted averaging to obtain a spliced gait feature value, and inputting the spliced gait feature value into a seven-layer fully connected network for classification to complete gait recognition.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
compared with traditional gait recognition methods, the invention provides a trinocular vision data fusion strategy that fuses gait information from several different observation angles and extracts comprehensive features. Because these comprehensive features carry fuller gait information across multiple observation angles, they better accommodate the complex and variable internal and external factors encountered in practical applications, and offer stronger practicability and operability.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a diagram illustrating characteristic parameters of a human motion area in an embodiment.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment provides an individual gait recognition method based on trinocular vision data which, as shown in FIG. 1, comprises the following steps:
S1: collecting human body contour information at multiple angles;
S2: extracting the gait dynamics characteristics under the human body contour information at each angle to form a gait pattern library;
S3: constructing a walking deep learning model at each angle according to the gait dynamics characteristics extracted in step S2;
S4: acquiring the gait dynamics characteristics under the multi-angle human body contour information to be identified, and computing recognition errors against the existing gait pattern library based on these characteristics;
S5: determining, according to the magnitude of the obtained recognition errors, the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task, thereby realizing gait recognition.
In step S1, three cameras arranged at angles to one another are used to capture human body contour images from three angles.
The extraction in step S2 of the gait dynamics characteristics under the human body contour information at each angle specifically comprises:
background subtraction and morphological processing are performed on the human body contour image sequences from the three cameras respectively to obtain binarized gait silhouettes of the walking human body under the three cameras, and human motion area characteristic parameters are extracted to form a nonlinear gait dynamics characteristic matrix; the human motion area characteristic parameters comprise the total area A1 of the human motion silhouette, the total area A2 of the lower-limb silhouette, and the area A3 of the triangle formed by the line connecting the two tiptoes and the ground, as shown in FIG. 2.
The gait pattern library in step S2 is formed as follows:
a nonlinear dynamical model $\dot{x} = \phi(x; p)$ is established for the extracted human motion area characteristic parameters, where $x = [A1, A2, A3]$ represents the three area characteristic parameters; a neural network is constructed to extract the nonlinear gait dynamics (an RBF neural network is used in this embodiment), and the obtained result is stored in the gait pattern library in the form of the constant neural network $[\bar W_1^T S_1(x), \bar W_2^T S_2(x), \bar W_3^T S_3(x)]$ representing the gait pattern, where $S(\cdot)$ denotes the radial basis functions of the RBF neural network used.
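The patent does not spell out the identification procedure; below is a minimal numpy sketch, in the spirit of deterministic learning, of how one channel of the constant neural network could be obtained: an RBF model is adapted along the measured trajectory of x = [A1, A2, A3], and the time-averaged weights are kept as the constant weight vector. The Gaussian kernels, Euler integration, gains b and gamma, and the averaging window are all assumptions.

```python
import numpy as np

def gaussian_rbf(x, centers, width):
    # S(x): vector of Gaussian radial basis activations.
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))

def identify_channel(X, i, centers, width, dt=0.02, b=-15.0, gamma=1.5):
    """Identify channel i of x = [A1, A2, A3] from a training trajectory X (T, 3).

    Returns the constant weight vector W_bar_i, so that W_bar_i @ S(x)
    approximates the unknown gait dynamics phi_i(x; p) along the gait orbit.
    """
    W = np.zeros(centers.shape[0])     # adaptive RBF weights
    W_hist = []
    chi = X[0, i]                      # estimator state, started on the orbit
    for t in range(1, X.shape[0]):
        S = gaussian_rbf(X[t - 1], centers, width)
        err = chi - X[t - 1, i]        # state estimation error
        chi += dt * (b * err + W @ S)  # estimator dynamics (Euler step)
        W += dt * (-gamma * S * err)   # Lyapunov-style weight adaptation
        W_hist.append(W.copy())
    # The constant network is the time average of the weights after transients.
    return np.mean(W_hist[len(W_hist) // 2:], axis=0)
```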
The construction in step S3 of a walking deep learning model at each angle is specifically as follows:
the nonlinear gait dynamics characteristic matrices obtained from the three cameras are input into deep neural network models for secondary feature extraction and model training; the gait feature values F1, F2 and F3 obtained after the secondary feature extraction are stored in the gait pattern library, and the trained deep neural network models form the gait recognition deep learning models M1, M2 and M3 corresponding to the three cameras.
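The patent fixes neither the depth nor the layer types of the per-view deep models; the PyTorch sketch below shows one plausible shape for a model M_i that performs the secondary feature extraction and emits the feature value F_i. All layer sizes and the input dimension are assumptions.

```python
import torch.nn as nn

class ViewGaitNet(nn.Module):
    """One per-view deep model M_i: secondary feature extraction plus a
    training-time classification head."""
    def __init__(self, in_dim=300, feat_dim=64, n_ids=124):
        super().__init__()
        self.features = nn.Sequential(       # secondary feature extractor
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        self.head = nn.Linear(feat_dim, n_ids)

    def forward(self, x):
        f = self.features(x)                 # F_i, stored in the pattern library
        return f, self.head(f)

# One model per camera view: M1, M2, M3.
M1, M2, M3 = ViewGaitNet(), ViewGaitNet(), ViewGaitNet()
```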
The computation in step S4 of recognition errors against the existing gait pattern library based on the gait dynamics characteristics is specifically as follows:
dynamic estimators are constructed from the constant neural networks in the three camera views respectively; the difference between the extracted gait dynamics characteristics and the estimates produced by the dynamic estimators is calculated to form the recognition error ε, and the minimum errors ε1, ε2 and ε3 obtained under the three cameras are extracted.
The dynamic estimator is constructed as:
$$\dot{\bar\chi}^k = B\,(\bar\chi^k - x) + \bar W^{kT} S(x), \qquad k = 1, \ldots, M,$$
where k = 1, ..., M indexes the k-th estimator, $\bar\chi^k$ represents the state of the dynamic estimator, x is the state variable of the input test pattern, B is a constant empirical parameter (taking a value between −25 and −10), and $\bar W^{kT} S(x)$ represents the constant neural network, in which $\bar W^{kT}$ is the weight matrix of the neural network and $S(x)$ denotes the radial basis functions used.
The difference between the extracted gait dynamics characteristics and the estimate produced by the dynamic estimator forms the recognition error ε, specifically:
$$\tilde\chi_i^k = \bar\chi_i^k - x_i, \qquad \dot{\tilde\chi}_i^k = b_i\,\tilde\chi_i^k + \bar W_i^{kT} S_i(x) - \phi_i(x; p),$$
where $\tilde\chi_i^k$ is the state estimation error, $\bar\chi_i^k$ represents the state of the dynamic estimator, $x_i$ is the state variable of the input test pattern, and $b_i$ is a constant empirical parameter (taking a value between −25 and −10); $\bar W_i^{kT} S_i(x)$ represents the constant neural network, in which $\bar W_i^{kT}$ is the weight matrix of the neural network and $S_i(x)$ denotes the radial basis functions used; $\phi_i(x; p)$ represents the nonlinear gait dynamics of the test pattern; the subscript i = 1, 2, 3 refers to the gait dynamics characteristics under the human motion area characteristic parameters obtained from the three cameras; and p denotes the system parameters, different values of p corresponding to different individuals and hence to different gait dynamical systems.
The minimum errors ε1, ε2 and ε3 obtained under the three cameras are extracted as:
$$\varepsilon_i = \min_{1 \le k \le M} \frac{1}{T_c} \int_{t-T_c}^{t} \bigl\|\tilde\chi_i^k(\tau)\bigr\|\, d\tau, \qquad i = 1, 2, 3,$$
where $T_c$ denotes the gait cycle duration and t the current time.
In step S5, the weight of the feature vector learned by the gait deep learning model at each angle in the final classification task is determined according to the magnitude of the obtained recognition errors, thereby realizing gait recognition, specifically:
the magnitudes of the minimum errors are compared and a weight allocation scheme is selected according to their ordering, different schemes assigning different weights;
according to the allocated weights, the feature data obtained after secondary feature learning are numerically fused by weighted averaging to obtain a spliced gait feature value, which is input into a seven-layer fully connected network for classification to complete gait recognition.
In the specific implementation, the CASIA-B gait database is used. The database contains 124 subjects, each captured from 11 viewing angles (0°, 18°, 36°, ..., 180°), with 6 normal walking sequences, 2 walking-in-a-coat sequences and 2 walking-with-a-bag sequences at each angle. In this embodiment, the gait sequences at the three viewing angles of 54°, 90° and 126° are selected as the training and test sets; the walking video of each person is extracted, background subtraction is performed in turn, and the background-separated gait images are morphologically processed to obtain binarized human body silhouettes. Four normal walking sequences, one coat sequence and one bag sequence at the three viewing angles serve as training patterns, and the remaining sequences serve as test patterns.
For the computed recognition errors: when ε1 > ε2 > ε3, weight allocation scheme K1 is invoked; when ε1 > ε3 > ε2, scheme K2 is invoked; when ε2 > ε1 > ε3, scheme K3 is invoked; when ε2 > ε3 > ε1, scheme K4 is invoked; when ε3 > ε1 > ε2, scheme K5 is invoked; when ε3 > ε2 > ε1, scheme K6 is invoked; when ε1 >> ε2 + ε3, scheme K7 is invoked; when ε2 >> ε1 + ε3, scheme K8 is invoked; and when ε3 >> ε1 + ε2, scheme K9 is invoked (an illustrative sketch of this dispatch and the subsequent fusion follows the scheme definitions below).
in an embodiment of the present invention, the weight assignment scheme K1 specifically refers to numerical fusion of feature data obtained after secondary feature learning in a weighted average manner: q1 ═ a1 × F1 eigenvalue + a2 × F2 eigenvalue + a3 × F3 eigenvalue. Wherein, a1, a2 and a3 are weights, and Q1 is a jointed gait characteristic value. And inputting the Q1 into a seven-layer fully-connected network for classification, thereby realizing the gait recognition task.
Preferably, said a1 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, said a2 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, said a3 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K2 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q2 = b1·F1 + b2·F2 + b3·F3, where b1, b2 and b3 are weights and Q2 is the spliced gait feature value. Q2 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, said b1 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, said b2 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, said b3 has a value of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K3 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q3 = c1·F1 + c2·F2 + c3·F3, where c1, c2 and c3 are weights and Q3 is the spliced gait feature value. Q3 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of c1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of c2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of c3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K4 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q4 = d1·F1 + d2·F2 + d3·F3, where d1, d2 and d3 are weights and Q4 is the spliced gait feature value. Q4 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of d1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of d2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of d3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K5 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q5 = e1·F1 + e2·F2 + e3·F3, where e1, e2 and e3 are weights and Q5 is the spliced gait feature value. Q5 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of e1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of e2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of e3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K6 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q6 = f1·F1 + f2·F2 + f3·F3, where f1, f2 and f3 are weights and Q6 is the spliced gait feature value. Q6 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of f1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of f2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of f3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K7 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q7 = g1·F1 + g2·F2 + g3·F3, where g1, g2 and g3 are weights and Q7 is the spliced gait feature value. Q7 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of g1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of g2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of g3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K8 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q8 = h1·F1 + h2·F2 + h3·F3, where h1, h2 and h3 are weights and Q8 is the spliced gait feature value. Q8 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of h1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of h2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of h3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
In an embodiment of the invention, weight allocation scheme K9 specifically refers to fusing the feature data obtained after secondary feature learning by weighted averaging: Q9 = i1·F1 + i2·F2 + i3·F3, where i1, i2 and i3 are weights and Q9 is the spliced gait feature value. Q9 is input into a seven-layer fully connected network for classification, thereby accomplishing the gait recognition task.
Preferably, the value of i1 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of i2 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
Preferably, the value of i3 is 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 or 1.0.
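To make the dispatch concrete, here is a hedged Python sketch of step S5: the ordering of (ε1, ε2, ε3) selects one of the schemes K1-K9, the per-view feature values are fused by weighted averaging, and the result is classified by a seven-layer fully connected network. The weight triples in SCHEMES and the dominance factor interpreting ">>" are hypothetical; the patent only requires each scheme's weights to differ.

```python
import torch.nn as nn

SCHEMES = {  # hypothetical weight triples (w1, w2, w3); the patent leaves them open
    "K1": (0.2, 0.3, 0.5), "K2": (0.2, 0.5, 0.3), "K3": (0.3, 0.2, 0.5),
    "K4": (0.5, 0.2, 0.3), "K5": (0.3, 0.5, 0.2), "K6": (0.5, 0.3, 0.2),
    "K7": (0.0, 0.5, 0.5), "K8": (0.5, 0.0, 0.5), "K9": (0.5, 0.5, 0.0),
}

def pick_scheme(e1, e2, e3, dominance=2.0):
    # ">>" is read here as exceeding the sum of the others by a factor (assumed).
    if e1 > dominance * (e2 + e3): return "K7"
    if e2 > dominance * (e1 + e3): return "K8"
    if e3 > dominance * (e1 + e2): return "K9"
    order = "".join(v for v, _ in sorted(
        [("1", e1), ("2", e2), ("3", e3)], key=lambda p: -p[1]))
    return {"123": "K1", "132": "K2", "213": "K3",
            "231": "K4", "312": "K5", "321": "K6"}[order]

def fuse(F1, F2, F3, scheme):
    w1, w2, w3 = SCHEMES[scheme]
    return w1 * F1 + w2 * F2 + w3 * F3       # spliced gait feature value Q

def make_classifier(feat_dim=64, n_ids=124):
    # Seven fully connected layers for the final classification.
    dims = [feat_dim, 256, 256, 128, 128, 64, 64]
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    layers.append(nn.Linear(dims[-1], n_ids))
    return nn.Sequential(*layers)
```

In this reading, the view with the larger recognition error (the worse dynamics match) always receives the smaller fusion weight, which is consistent with the ordering of the nine schemes above.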
In one embodiment of the invention, in recognition experiments on the CASIA-B database, the recognition rate reaches up to 97% under normal walking, up to 80% when walking with a bag, and up to 79% when wearing a coat. Even under different bag-carrying or coat-wearing conditions, walking individuals are therefore recognized at rates within an acceptable range, and the method does not fail.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.