Emotion recognition method and device, computer readable storage medium and equipment


1. A method of emotion recognition, comprising:

acquiring voice characteristics, expression characteristics and biological characteristics of a target object;

performing emotion recognition on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result;

performing emotion recognition on the expression characteristics through a pre-trained second emotion recognition model to obtain a second emotion recognition result;

performing emotion recognition on the biological characteristics through a third emotion recognition model which is trained in advance to obtain a third emotion recognition result;

fusing the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

and recognizing the emotion of the target object according to the fusion value.

2. The emotion recognition method of claim 1, wherein the speech features include action features and prosody features.

3. The emotion recognition method according to claim 2, further comprising: and enhancing the prosodic features.

4. The emotion recognition method of claim 3, wherein the enhancing the prosodic features comprises:

forming an input prosody characteristic sequence by using the prosody characteristic of the nth frame and the prosody characteristics of the adjacent frames taking the prosody characteristic of the nth frame as the center based on the prosody characteristics of the target object;

coding the input prosody characteristic sequence to obtain a coding characteristic sequence;

and decoding the coding characteristic sequence to obtain an enhanced prosody characteristic sequence corresponding to the input prosody characteristic sequence.

5. The emotion recognition method of claim 4, wherein the sequence of input prosodic features is encoded by applying a multi-headed self-attention operation on the adjacent multi-frame prosodic features.

6. The emotion recognition method of claim 2, further comprising, before performing emotion recognition on the speech feature by a first emotion recognition model trained in advance: and performing feature fusion on the action features and the prosody features to obtain fusion features.

7. The emotion recognition method of claim 1, wherein obtaining the expressive features of the target object comprises:

acquiring a face picture;

extracting single expression features from the face picture through a first neural network;

performing multi-scale extraction on the single expression features through a second neural network to obtain attention features of the single expression features under different scales;

and fusing the attention characteristics of the single expression characteristics under different scales to obtain the expression characteristics.

8. An emotion recognition apparatus, comprising:

the characteristic acquisition module is used for acquiring the voice characteristic, the expression characteristic and the biological characteristic of the target object;

the first initial emotion recognition module is used for carrying out emotion recognition on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result;

the second initial emotion recognition module is used for carrying out emotion recognition on the expression characteristics through a second emotion recognition model which is trained in advance to obtain a second emotion recognition result;

the third initial emotion recognition module is used for carrying out emotion recognition on the biological characteristics through a third emotion recognition model which is trained in advance to obtain a third emotion recognition result;

the fusion module is used for fusing the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

and the emotion recognition module is used for recognizing the emotion of the target object according to the fusion value.

9. An emotion recognition device comprising a processor coupled to a memory, the memory storing program instructions which, when executed by the processor, implement the method of any of claims 1 to 7.

10. A computer-readable storage medium, characterized by comprising a program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.

Background

With the rapid development of computer technology, artificial intelligence and related disciplines, the degree of automation across society keeps rising, and the demand for human-computer interaction that resembles human-to-human communication grows ever stronger. Facial expressions are the most direct and effective mode of emotion recognition and have many applications in human-computer interaction. If computers and robots could understand and express emotion the way humans do, the relationship between humans and computers would change fundamentally, allowing computers to serve people better. Emotion recognition is the basis of emotional understanding, a precondition for machines to make sense of human emotion, and an effective way for people to explore and understand intelligence.

The prior art includes a plurality of emotion extraction methods, and the emotion of a user is recognized by acquiring facial or voice information. However, emotion recognition using a single feature is less accurate. Thus, the prior art suffers from a number of deficiencies and needs to be improved.

Disclosure of Invention

In view of the above-mentioned shortcomings of the prior art, it is an object of the present invention to provide a method, apparatus, computer-readable storage medium and device for emotion recognition, which solve the problems of the prior art.

To achieve the above and other related objects, the present invention provides a method for emotion recognition, comprising:

acquiring voice characteristics, expression characteristics and biological characteristics of a target object;

performing emotion recognition on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result;

performing emotion recognition on the expression characteristics through a pre-trained second emotion recognition model to obtain a second emotion recognition result;

performing emotion recognition on the biological characteristics through a third emotion recognition model which is trained in advance to obtain a third emotion recognition result;

fusing the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

and recognizing the emotion of the target object according to the fusion value.

Optionally, the speech features include action features and prosody features.

Optionally, the method further comprises: and enhancing the prosodic features.

Optionally, the enhancing the prosodic features includes:

forming an input prosody characteristic sequence by using the prosody characteristic of the nth frame and the prosody characteristics of the adjacent frames taking the prosody characteristic of the nth frame as the center based on the prosody characteristics of the target object;

coding the input prosody characteristic sequence to obtain a coding characteristic sequence;

and decoding the coding characteristic sequence to obtain an enhanced prosody characteristic sequence corresponding to the input prosody characteristic sequence.

Optionally, the method further comprises the step of applying a multi-head self-attention operation on the adjacent multi-frame prosodic features when encoding the input sequence of prosodic features.

Optionally, before performing emotion recognition on the speech feature through the first emotion recognition model trained in advance, the method further includes: and performing feature fusion on the action features and the prosody features to obtain fusion features.

Optionally, the obtaining of the expression feature of the target object includes:

acquiring a face picture;

extracting single expression features from the face picture through a first neural network;

performing multi-scale extraction on the single expression features through a second neural network to obtain attention features of the single expression features under different scales;

and fusing the attention characteristics of the single expression characteristics under different scales to obtain the expression characteristics.

To achieve the above and other related objects, the present invention provides an emotion recognition apparatus, comprising:

the characteristic acquisition module is used for acquiring the voice characteristic, the expression characteristic and the biological characteristic of the target object;

the first initial emotion recognition module is used for carrying out emotion recognition on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result;

the second initial emotion recognition module is used for carrying out emotion recognition on the expression characteristics through a second emotion recognition model which is trained in advance to obtain a second emotion recognition result;

the third initial emotion recognition module is used for carrying out emotion recognition on the biological characteristics through a third emotion recognition model which is trained in advance to obtain a third emotion recognition result;

the fusion module is used for fusing the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

and the emotion recognition module is used for recognizing the emotion of the target object according to the fusion value.

To achieve the above and other related objects, the present invention provides an emotion recognition apparatus, comprising a processor coupled to a memory, the memory storing program instructions, the method being implemented when the program instructions stored in the memory are executed by the processor.

To achieve the above and other related objects, the present invention provides a computer-readable storage medium including a program which, when run on a computer, causes the computer to execute the method.

As described above, the emotion recognition method, device, computer-readable storage medium and apparatus provided by the present invention have the following beneficial effects:

the emotion recognition method comprises the following steps: acquiring voice characteristics, expression characteristics and biological characteristics of a target object; performing emotion recognition on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result; performing emotion recognition on the expression characteristics through a pre-trained second emotion recognition model to obtain a second emotion recognition result; performing emotion recognition on the biological characteristics through a third emotion recognition model which is trained in advance to obtain a third emotion recognition result; fusing the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value; and recognizing the emotion of the target object according to the fusion value. The invention makes up the defect of single characteristic by fusing the voice characteristic, the expression characteristic and the biological characteristic.

Drawings

FIG. 1 is a flow chart of a method of emotion recognition in an embodiment of the present invention;

FIG. 2 is a flow chart of a method for enhancing prosodic features in an embodiment of the invention;

FIG. 3 is a schematic structural diagram of an encoder according to an embodiment of the present invention;

fig. 4 is a flowchart of a method for obtaining an expression feature of the target object according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of an emotion recognition apparatus according to an embodiment of the present invention.

Detailed Description

The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.

It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.

As shown in fig. 1, an embodiment of the present application provides an emotion recognition method, including:

s11, acquiring voice characteristics, expression characteristics and biological characteristics of the target object;

s12, performing emotion recognition on the voice features through a first emotion recognition model trained in advance to obtain a first emotion recognition result;

s13, performing emotion recognition on the expression characteristics through a pre-trained second emotion recognition model to obtain a second emotion recognition result;

s14, performing emotion recognition on the biological characteristics through a pre-trained third emotion recognition model to obtain a third emotion recognition result;

s15, fusing the first emotion recognition result, the first emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

s16 identifies the emotion of the target object from the fusion value.

The invention makes up the defect of single characteristic by fusing the voice characteristic, the expression characteristic and the biological characteristic.

In an embodiment, the speech features include action features and prosody features. The action features can be extracted from motion signals and include velocity and displacement features. The displacement feature refers to the maximum displacement relative to the initial position; the velocity refers to the change in displacement of the articulatory organ at each moment, and the maximum velocity, minimum velocity, average velocity and variance of the velocity are taken as features. The motion signals can be collected by a three-dimensional electromagnetic articulograph, a dedicated instrument for capturing the tiny movements of the articulatory organs that records high-precision motion signals without harming the human body. The motion signals include lip motion signals, tongue motion signals and jaw motion signals, wherein the lip motion signals include the motion signal of the upper lip, the motion signal of the lower lip, the motion signal of the left mouth corner and the motion signal of the right mouth corner, and the tongue motion signals include the motion signal of the tongue tip, the motion signal of the middle of the tongue and the motion signal of the back of the tongue.
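As an illustration, the statistics above could be computed from a sampled articulator trajectory roughly as follows; the array shape, sampling interval and function name are assumptions and not part of the original disclosure.

```python
import numpy as np

def motion_features(positions, dt=0.01):
    """Velocity and displacement statistics for one articulator trace.
    `positions` is assumed to be an array of shape [frames, 3] (x, y, z)."""
    positions = np.asarray(positions, dtype=float)
    displacement = np.linalg.norm(positions - positions[0], axis=1)     # offset from the initial position
    velocity = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt  # per-step displacement change
    return {
        "max_displacement": displacement.max(),
        "max_velocity": velocity.max(),
        "min_velocity": velocity.min(),
        "mean_velocity": velocity.mean(),
        "velocity_variance": velocity.var(),
    }
```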

Prosodic features include the fundamental frequency, speech energy, speech rate and the short-time average zero-crossing rate.

Fundamental frequency: the time taken for each opening and closing of the vocal cords is defined as the pitch period, and the reciprocal of the pitch period is the pitch frequency, referred to as the fundamental frequency for short. The fundamental frequency is determined by the length, thickness, tension and other properties of the vocal cords themselves. As the speaker's emotion changes, the configuration of the vocal cords changes, and the fundamental frequency of the speech changes to different degrees as well.

Speech energy: the energy of the speech signal reflects the intensity and volume of the speaker's voice. The signal intensity follows different patterns of change as emotion changes; energy intensity is usually characterized by the short-term energy and the short-term average amplitude.

Speech rate: the speaker uses different speech rates under different emotions. For example, under anger or stress the speech rate can increase or decrease markedly, while a dejected or uneasy speaker slows down correspondingly. The speech rate can therefore characterize the speaker's emotional information to some extent.

The short-time average zero-crossing rate is defined as the number of times the signal crosses zero within each frame of the speech signal. For a discrete signal, this index can be defined as the number of times adjacent sample points have opposite algebraic signs.
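A minimal sketch of the frame-level quantities described above (short-term energy, short-term average amplitude and zero-crossing count); the frame length in the example is an assumption.

```python
import numpy as np

def frame_prosody_stats(frame):
    """Per-frame short-term energy, short-term average amplitude and
    zero-crossing count for one speech frame."""
    frame = np.asarray(frame, dtype=float)
    energy = np.sum(frame ** 2)                       # short-term energy
    avg_amplitude = np.mean(np.abs(frame))            # short-term average amplitude
    signs = np.where(frame >= 0, 1, -1)               # algebraic sign of each sample
    zero_crossings = int(np.sum(signs[1:] != signs[:-1]))
    return energy, avg_amplitude, zero_crossings

# Example: one 25 ms frame of a 16 kHz signal (400 samples of synthetic data).
print(frame_prosody_stats(np.sin(np.linspace(0, 20 * np.pi, 400))))
```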

Studies suggest that a determining factor of emotion is the sympathetic nervous system, whose activity level can be estimated from heart rate variability; heart rate variability, in turn, can be obtained by analyzing pulse wave signals. Heart rate variability parameters derived from pulse wave analysis therefore have the potential to recognize emotion. Thus, in one embodiment, the biological characteristics may comprise pulse features.

In this application, emotion recognition is performed on the biological characteristics through the pre-trained third emotion recognition model to obtain the third emotion recognition result; the pulse features include time-domain features, waveform features and frequency-domain features.

After the pulse wave features are extracted, their dimensionality needs to be reduced. The reduction is performed in two steps. In the first step, Principal Component Analysis (PCA) is applied, and the first 15 principal components with the largest eigenvalues are taken as the new features. In the second step, Linear Discriminant Analysis (LDA) is applied to reduce the dimensionality again, finally yielding a pulse wave feature vector of dimension 7.
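A minimal sketch of this two-stage reduction is given below; the raw feature dimensionality, sample count and random stand-in data are assumptions, not values from the disclosure. Note that scikit-learn's LDA allows at most (number of classes - 1) components, so a 7-dimensional output presupposes at least 8 emotion classes in the training labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
pulse_features = rng.normal(size=(200, 40))        # stand-in [samples, raw pulse-wave features]
emotion_labels = rng.integers(0, 8, size=200)      # stand-in labels; 8 classes allow 7 LDA components

pca = PCA(n_components=15)                          # keep the 15 components with the largest eigenvalues
reduced = pca.fit_transform(pulse_features)

lda = LinearDiscriminantAnalysis(n_components=7)    # second, supervised reduction to 7 dimensions
pulse_vectors = lda.fit_transform(reduced, emotion_labels)
print(pulse_vectors.shape)                          # (200, 7)
```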

An artificial neural network is then trained using the pulse feature vectors and their corresponding emotions, and the trained network is used as the third emotion recognition model.
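Continuing the sketch above, the third emotion recognition model could, under these assumptions, be a small feed-forward network; the layer sizes are illustrative and not taken from the disclosure.

```python
from sklearn.neural_network import MLPClassifier

# Train a small feed-forward network on the 7-dimensional pulse vectors from the previous sketch.
third_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
third_model.fit(pulse_vectors, emotion_labels)
third_result = third_model.predict_proba(pulse_vectors[:1])   # per-class probabilities for one sample
```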

In this application, emotion recognition is performed on the speech features through the pre-trained first emotion recognition model to obtain the first emotion recognition result. Because human auditory perception exhibits masking effects, in which weaker-energy signals are masked by higher-energy signals, the prosodic features need to be enhanced first.

Specifically, as shown in fig. 2, the enhancing the prosody features includes:

s21, based on the prosody characteristics of the target object, forming an input prosody characteristic sequence by the prosody characteristics of the nth frame and the prosody characteristics of the adjacent frames taking the prosody characteristics of the nth frame as the center;

s22, encoding the input prosody characteristic sequence to obtain an encoding characteristic sequence; after being coded by the coder, the input prosodic feature sequence is changed into a high-level feature sequence.

S23, decoding the coding feature sequence to obtain an enhanced prosody feature sequence corresponding to the input prosody feature sequence. Decoder based on high layer characteristic sequenceZNamely, the characteristic sequence is coded to obtain the enhanced prosody characteristic sequence of the current frame to be enhanced

In one embodiment, when encoding the input prosodic feature sequence, a multi-head self-attention operation is applied over the adjacent multi-frame prosodic features, which improves the speech enhancement performance:

the main function of the coder is to re-encode the input prosodic feature sequence, so that the clean speech information and the noise information are obviously distinguished. The network structure of the encoder consists of independent network layers, which are called transform layers. Each network layer consists of two sublayers: the first layer is a multi-head self-attention layer, and the second layer is a fully-connected feedforward neural network taking a frame as a unit. The two sub-layers are connected with residuals and layer normalization is applied. The encoder structure is shown in fig. 3.

Self-attention means that the queries and the key-value pairs used to compute attention come from the same source. For prosodic feature enhancement, a high-energy speech signal can mask a low-energy one; applying self-attention to the input prosodic feature sequence drives each feature to be dominated by either clean-speech information or noise information, so that clean speech and noise can be distinguished. Attention is computed with the following model:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$ and $V$ are the query, key and value matrices used to compute attention, and $d_k$ is the dimension of the keys.

Multi-head attention builds on the attention mechanism: multiple queries extract several groups of different information from the input in parallel and splice them together, so that related information can be gathered from different subspaces. Multi-head attention first maps the query, key and value matrices into several different subspaces, computes attention within each subspace, and finally concatenates the outputs of all subspaces:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^{O}, \qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})$$

where $W_i^{Q}$, $W_i^{K}$, $W_i^{V}$ and $W^{O}$ are the parameter matrices of the linear mappings, $h$ is the number of subspaces (heads), and $\mathrm{Concat}$ is the vector-splicing operation.
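A minimal sketch of one such encoder layer, using PyTorch's built-in multi-head attention; the model dimension, number of heads, feed-forward width and window of nine frames are assumptions rather than values from the disclosure.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One Transformer-style encoder layer: multi-head self-attention followed by a
    frame-wise feed-forward network, each with a residual connection and layer normalization."""
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: [batch, frames, d_model]
        attn_out, _ = self.attn(x, x, x)       # queries, keys and values all come from x (self-attention)
        x = self.norm1(x + attn_out)           # residual connection + layer normalization
        x = self.norm2(x + self.ff(x))
        return x

frames = torch.randn(1, 9, 64)                 # e.g. frame n and its neighbouring frames
encoded = EncoderLayer()(frames)               # high-level feature sequence Z
```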

The decoder will eventually generate an enhanced prosodic feature sequence using the high-level feature sequence Z generated by the encoder.

In this embodiment, a multi-head self-attention operation is applied to the encoding feature sequence Z generated by the encoder in the decoding stage, and the output of the current frame is taken as the enhanced prosodic feature sequence. The network structure of the decoder is the same as that of the encoder.

In an embodiment, before performing emotion recognition on the speech feature through a first emotion recognition model trained in advance, the method further includes: and performing feature fusion on the action features and the prosody features to obtain fusion features.

Because the action features and the prosodic features of the speech express different physical quantities, the two types of features are normalized and then combined to obtain the fused feature.
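As a minimal sketch, assuming z-score normalization (the disclosure does not name a specific normalization scheme), the fusion could look like this:

```python
import numpy as np

def fuse_action_prosody(action_feat, prosody_feat, eps=1e-8):
    """Normalize each feature type separately, then concatenate them into one fused vector."""
    a = (np.asarray(action_feat, float) - np.mean(action_feat)) / (np.std(action_feat) + eps)
    p = (np.asarray(prosody_feat, float) - np.mean(prosody_feat)) / (np.std(prosody_feat) + eps)
    return np.concatenate([a, p])
```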

In an embodiment, before emotion recognition is performed on the voice features through a first emotion recognition model which is trained in advance to obtain a first emotion recognition result, dimensionality reduction is performed on the fusion features through a kernel principal component analysis method.

Since the fused features may contain redundant information, their dimensionality can be reduced using kernel principal component analysis (KPCA). KPCA was proposed on the basis of PCA and handles nonlinear data better than PCA. Its basic principle is to map the original data into a high-dimensional space through a nonlinear function, so that the corresponding linear analysis can then be applied to the data in that high-dimensional space.

In this embodiment, the dimension reduction uses a radial basis (Gaussian) kernel, whose kernel function is

$$k(x_i, x_j) = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^{2}}{2\sigma^{2}}\right)$$

where $\sigma$ is a constant that must be adjusted during the dimension-reduction process. In the reduction, the n-dimensional features of the training samples are arranged as a feature matrix of n column vectors and mapped into a high-dimensional space by a nonlinear mapping $\phi$; the dimension-reduction transformation is carried out in that high-dimensional space, and solving it yields the nonlinearly reduced feature matrix X.
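The following sketch performs this reduction with scikit-learn's KernelPCA; the target dimensionality, the gamma value (scikit-learn's gamma corresponds to 1/(2σ²) in the kernel above) and the stand-in data are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
fused = rng.normal(size=(200, 60))                 # stand-in fused action + prosody features
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=0.05)
reduced = kpca.fit_transform(fused)                # nonlinearly reduced feature matrix X
print(reduced.shape)                               # (200, 20)
```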

In an embodiment, as shown in fig. 4, the obtaining of the expression feature of the target object includes:

s41, acquiring a face picture;

s42, extracting single expression features from the face picture through a first neural network;

specifically, the first neural network may select the VGGNetl6 convolutional neural network. The first layer of the VGGNetl6 network adopts an inclusion structure, when an image is input, convolution kernels of multiple scales are used for extracting features of different scales, feature fusion is carried out on the multiple features extracted by the convolution kernels of different sizes, and bottleeck structures are used in 3 x 3 and 5 x 5 convolution kernel branches for reducing dimensions of the two branches. The bottle neck layer principle of bottleeck is that firstly, the convolution layer with convolution kernel scale of 1X1 is used to perform dimensionality reduction operation on the input image, then the number of channels is restored in the output 1X l convolution layer, so that the calculation amount can be greatly reduced, meanwhile, two convolutions of 3X 3 are used to replace the convolution of 5X 5, and the calculation amount is also reduced while the receptive field is ensured.

S43, performing multi-scale extraction on the single expression features through a second neural network to obtain attention features of the single expression features under different scales;

generally speaking, the human face expression is less easily recognized and clearer in smaller scales, so the technical scheme of the invention mainly selects to down-sample the single expression features to obtain more small-scale single expression features, thereby improving the classification recognition capability of the network on the small scale of the object, and in other cases, the technical scheme of the invention can also improve the resolution capability of the object in large scale by up-sampling the single expression features.

The second neural network is a SENet network containing 3 parallel attention branches: one branch processes the single-expression features at the first scale, a second branch processes them at the second scale, and a third branch processes them at the third scale. After the second neural network performs the parallel convolution operations on the single-expression features at the different scales, the scale attention features of the different scales need to be fused. Each attention module outputs a vector of length C that reflects the attention and corresponds one-to-one to the channels of the feature map; this vector multiplies each channel of the input feature map, and the products from the several scales are spliced together to obtain the final output feature.
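The following is a rough sketch of an SE-style attention branch applied at three scales and fused by concatenation; the channel count, reduction ratio and pooling factors are assumptions rather than values from the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """SE-style branch: global pooling gives one value per channel, a small bottleneck MLP
    turns it into a length-C weight vector, and the input is rescaled channel by channel."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: [batch, C, H, W]
        w = self.fc(self.pool(x).flatten(1))       # length-C attention vector
        return x * w.view(x.size(0), -1, 1, 1)     # multiply each channel by its weight

# Three parallel branches process the single-expression features at three scales,
# and their outputs are spliced (concatenated) into the final expression feature.
feat = torch.randn(1, 64, 48, 48)
scales = [feat, F.avg_pool2d(feat, 2), F.avg_pool2d(feat, 4)]
expression_feature = torch.cat([ChannelAttention()(s).flatten(1) for s in scales], dim=1)
```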

And S44, fusing the attention characteristics of the single expression characteristics under different scales to obtain expression characteristics.

In an embodiment, the first emotion recognition result, the second emotion recognition result and the third emotion recognition result are fused by using a pre-constructed fusion model to obtain a fusion value;

the fusion model is as follows:

$$f(x) = w_1 f_1(x) + w_2 f_2(x) + w_3 f_3(x)$$

where $f(x)$ is the fusion value, $x$ denotes the feature vectors drawn from the speech, expression and biological features, $f_1(x)$, $f_2(x)$ and $f_3(x)$ are the vectors of the first, second and third emotion recognition results, and $w_1$, $w_2$ and $w_3$ are the weight parameters.

through the steps, the fusion value can be obtained, the fusion value interval to which the fusion value belongs is determined, and each fusion value interval corresponds to one emotion, so that the emotion of the target object is determined. For example, the fusion value is 3, the obtained fusion value belongs to the interval of 2-4, and the corresponding emotion in the fusion value interval is a negative emotion. In an embodiment, the emotions can be a negative emotion, a positive emotion, and a neutral emotion.

In an embodiment, when determining the emotion category of the target object, the corresponding instruction may be executed to perform emotion grooming on the target object, including: when the emotion category belongs to positive emotions, the target object is encouraged, and when the emotion category belongs to negative emotions, the target object is guided, so that the emotion of the target object is developed from the negative emotions to the positive emotions.

As shown in fig. 5, an embodiment of the present application provides an emotion recognition apparatus, including:

a feature obtaining module 51, configured to obtain a voice feature, an expression feature, and a biological feature of the target object;

the first initial emotion recognition module 52 is configured to perform emotion recognition on the voice features through a first emotion recognition model trained in advance to obtain a first emotion recognition result;

the second initial emotion recognition module 53 is configured to perform emotion recognition on the expression features through a second emotion recognition model which is trained in advance to obtain a second emotion recognition result;

a third initial emotion recognition module 54, configured to perform emotion recognition on the biological feature through a third emotion recognition model trained in advance, so as to obtain a third emotion recognition result;

a fusion module 55, configured to fuse the first emotion recognition result, the second emotion recognition result and the third emotion recognition result by using a pre-constructed fusion model to obtain a fusion value;

and the emotion recognition module 56 is used for recognizing the emotion of the target object according to the fusion value.

The system provided in the above embodiment can execute the method provided in any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method. Technical details that have not been elaborated upon in the above-described embodiments may be referred to a method provided in any embodiment of the invention.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

It should be noted that, through the above description of the embodiments, it is clear to those skilled in the art that part or all of the present application can be implemented by software in combination with a necessary general hardware platform. The functions, if implemented in the form of software functional units and sold or used as a separate product, may also be stored in a computer-readable storage medium with the understanding that embodiments of the present invention provide a computer-readable storage medium including a program which, when run on a computer, causes the computer to perform the method shown in fig. 1.

An embodiment of the present invention provides an emotion recognition device, which includes a processor coupled to a memory, where the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method shown in fig. 1 is implemented.

With this understanding in mind, the technical solutions of the present application and/or portions thereof that contribute to the prior art may be embodied in the form of a software product that may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may cause the one or more machines to perform operations in accordance with embodiments of the present application. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (compact disc-read only memories), magneto-optical disks, ROMs (read only memories), RAMs (random access memories), EPROMs (erasable programmable read only memories), EEPROMs (electrically erasable programmable read only memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. The storage medium may be located in a local server or a third-party server, such as a third-party cloud service platform. The specific cloud service platform is not limited herein, such as the Ali cloud, Tencent cloud, etc. The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: a personal computer, dedicated server computer, mainframe computer, etc. configured as a node in a distributed system.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.
