Face image restoration method, system and terminal based on multi-feature image layer fusion

1. A face image restoration method based on multi-feature image layer fusion is characterized by comprising the following steps:

acquiring at least two original face images, and carrying out spatial registration on all the original face images to obtain target face images under a unified viewing angle;

extracting original contour image layers, original gray image layers and original RGB image layers of different target face images;

fusing all the original contour layers to obtain a restored contour layer;

fusing all the original gray level layers to obtain a fused gray level layer, and cropping the fused gray level layer according to the restored contour layer to obtain a restored gray level layer;

fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer;

and overlaying the restored gray level layer and the restored RGB layer on the restored contour layer to obtain a restored image.

2. The method for restoring the human face image based on the fusion of the multiple feature map layers according to claim 1, wherein the process of obtaining the restored contour map layer by the fusion of the original contour map layers specifically comprises the following steps:

extracting contour curves of different facial features in each original contour map layer, and overlapping all contour curves corresponding to the same facial features at the same center to obtain an integrated line contour;

screening out, from the integrated line contour, all dispersed contour segments that have not been overlapped into a single contour line, the remaining segments being integrated contour segments each consisting of a single contour line;

selecting, in each dispersed contour segment, the dispersed contour line with the minimum discrete value as a reference contour line, and calculating the deviation mean of the unified nodes of all remaining dispersed contour lines relative to the reference nodes of the reference contour line;

determining integrated nodes corresponding to the unified nodes according to the deviation mean value, and fitting all the integrated nodes to form integrated contour lines;

and embedding all the integrated contour lines into corresponding positions in the integrated contour segments to obtain the restored contour curves of the same facial features, and fusing the restored contour curves of different facial features to obtain the restored contour map layer.

3. The method for restoring the human face image based on the fusion of the multiple feature map layers as claimed in claim 2, wherein the selection process of the reference contour line specifically comprises:

selecting one of the dispersed contour lines as a current dispersed contour line;

calculating a discrete value of the current dispersed contour line relative to the remaining dispersed contour lines;

and comparing the discrete values corresponding to all the dispersed contour lines, and selecting the dispersed contour line with the minimum discrete value as a reference contour line.

4. The method for restoring the human face image based on the fusion of the multiple feature map layers as claimed in claim 2, wherein the formation process of the unified node and the reference node is specifically as follows:

drawing a calibration ray from the overlap center of the integrated line contour as its origin;

selecting the intersection points of the calibration ray with the remaining dispersed contour lines as the unified nodes corresponding to those dispersed contour lines;

and selecting the intersection point of the same calibration ray and the reference contour line as a reference node.

5. The method for restoring a human face image based on multi-feature map layer fusion according to claim 2, wherein the calculation process of the deviation mean specifically comprises:

$\bar{D} = \frac{1}{N}\sum_{n=1}^{N}\varepsilon_n d_n$

wherein $\bar{D}$ represents the deviation mean; $d_n$ represents the distance between the n-th unified node and the reference node; $\varepsilon_n$ denotes the deviation coefficient of the n-th unified node, with $\varepsilon_1, \varepsilon_2, \dots, \varepsilon_N$ increasing or decreasing monotonically; and $N$ represents the number of unified nodes.

6. The method for restoring the human face image based on the fusion of the multiple characteristic image layers according to any one of claims 1 to 5, wherein the process of obtaining the fusion gray image layer by the fusion of the original gray image layers specifically comprises the following steps:

extracting a gray gradient image from the original gray layer according to the standard gray level difference;

extracting gradient gray level histograms from the center to the edge of the gray level gradient map in all directions;

performing mean value calculation on gradient gray level histograms in the same direction in all original gray level image layers to obtain a mean value histogram;

fitting according to the gray value of each gradient in the mean histogram to obtain a gray change curve in the corresponding direction;

and carrying out gray level equalization processing on the gray values in the corresponding directions according to the gray level change curve to obtain a fusion gray level layer.

7. The method for restoring the human face image based on the fusion of the multiple feature image layers according to any one of claims 1 to 5, wherein the process of obtaining the restored RGB image layer by the fusion of the original RGB image layers specifically comprises the following steps:

extracting RGB-gray data pairs of the same pixel point from the original RGB layer and the original gray layer of the same target face image, and grouping the RGB-gray data pairs of the same pixel point across all target face images into RGB-gray data groups;

training on the RGB-gray data groups of all the pixel points to obtain a mapping relation between RGB values and gray values;

and performing RGB value restoration according to the mapping relation and the restoration gray layer to obtain a restoration RGB layer.

8. The method for restoring the human face image based on the fusion of the multiple feature map layers as claimed in claim 7, wherein the mapping relationship between the RGB values and the gray values is obtained by deep learning neural network training, and the deep learning neural network comprises:

the input layer is used for extracting RGB values and gray values of all pixel points in the same target face image to form a plurality of data pairs, and integrating the data pairs of the same pixel points in all the target face images to obtain a data set;

the first hidden layer is used for training the RGB values and the gray values in each data group to obtain a mapping sub-function representing the relationship between the RGB values and the gray values in each data group;

the second hidden layer is used for classifying the mapping sub-functions by function type to obtain a plurality of function groups, extracting the function coefficients of the mapping sub-functions within each function group of the same type to obtain a coefficient set, and training on the coordinate values of the pixel points and the corresponding coefficient elements in the coefficient set to obtain a trajectory function representing the trajectory distribution of the mapping sub-functions;

and the output layer is used for integrating all trajectory functions and function types to obtain a mapping network representing the mapping relation between the RGB values and the gray values.

9. A face image restoration system based on multi-feature image layer fusion is characterized by comprising:

the image processing module is used for acquiring at least two original face images and carrying out spatial registration on all the original face images to obtain target face images under a unified viewing angle;

the image layer extraction module is used for extracting original contour image layers, original gray image layers and original RGB image layers of different target face images;

the contour fusion module is used for fusing all the original contour layers to obtain a restored contour layer;

the gray level fusion module is used for fusing all the original gray level layers to obtain a fused gray level layer, and cropping the fused gray level layer according to the restored contour layer to obtain a restored gray level layer;

the RGB fusion module is used for fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer;

and the restoration fusion module is used for overlaying the restored gray level layer and the restored RGB layer on the restored contour layer to obtain a restored image.

10. A computer terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for restoring a human face image based on multi-feature map layer fusion according to any one of claims 1 to 8 when executing the program.

Background

Image fusion refers to processing image data of the same target collected through multi-source channels, extracting the favorable information in each channel to the maximum extent, and finally synthesizing that information into a high-quality image. An efficient image fusion method can comprehensively process multi-source channel information as needed, effectively improving the utilization rate of image information, the reliability of the system in target detection and recognition, and the system's degree of automation. Its aim is to synthesize the multiband information of a single sensor, or the information provided by different sensors, eliminate the redundancy and contradiction that may exist among multi-sensor information, enhance the transparency of the information in the image, and improve the accuracy, reliability and utilization rate of interpretation, so as to form a clear, complete and accurate description of the target. Image fusion is widely applied in medicine, remote sensing, computer vision, weather forecasting and target recognition.

At present, fusion techniques for face images are mostly based on single contour fusion, gradient gray histogram fusion or RGB value fusion, using a single fusion mode, or a combination of two, to improve the utilization rate of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images. However, owing to natural light, supplementary lighting during image acquisition and facial micro-motion, even continuously acquired face images still show large detail differences in face contour, RGB values and gray values. The existing face image fusion process considers neither the influence of these detail differences on the recognized contour nor the mutual influence between RGB values and gray values during fusion, so the fused face image cannot be restored to a real face image with high precision.

Therefore, how to design a face image restoration method, system and terminal based on multi-feature image layer fusion is a problem that urgently needs to be solved.

Disclosure of Invention

In order to overcome the defects of the prior art, the invention aims to provide a face image restoration method, system and terminal based on multi-feature image layer fusion, providing technical support for restoring face images into high-precision real face images.

The technical purpose of the invention is realized by the following technical scheme:

in a first aspect, a face image restoration method based on multi-feature image layer fusion is provided, which includes the following steps:

acquiring at least two original face images, and carrying out spatial registration on all the original face images to obtain target face images under a unified viewing angle;

extracting original contour image layers, original gray image layers and original RGB image layers of different target face images;

fusing all the original contour layers to obtain a restored contour layer;

fusing all the original gray level layers to obtain a fused gray level layer, and cropping the fused gray level layer according to the restored contour layer to obtain a restored gray level layer;

fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer;

and overlaying the restored gray level layer and the restored RGB layer on the restored contour layer to obtain a restored image.

Further, the process of fusing the original contour layer to obtain the restored contour layer specifically comprises the following steps:

extracting contour curves of different facial features in each original contour map layer, and overlapping all contour curves corresponding to the same facial features at the same center to obtain an integrated line contour;

screening out, from the integrated line contour, all dispersed contour segments that have not been overlapped into a single contour line, the remaining segments being integrated contour segments each consisting of a single contour line;

selecting, in each dispersed contour segment, the dispersed contour line with the minimum discrete value as a reference contour line, and calculating the deviation mean of the unified nodes of all remaining dispersed contour lines relative to the reference nodes of the reference contour line;

determining integrated nodes corresponding to the unified nodes according to the deviation mean value, and fitting all the integrated nodes to form integrated contour lines;

and embedding all the integrated contour lines into corresponding positions in the integrated contour segments to obtain the restored contour curves of the same facial features, and fusing the restored contour curves of different facial features to obtain the restored contour map layer.

Further, the selecting process of the reference contour line specifically includes:

selecting one of the dispersed contour lines as a current dispersed contour line;

calculating a discrete value of the current dispersed contour line relative to the remaining dispersed contour lines;

and comparing the discrete values corresponding to all the dispersed contour lines, and selecting the dispersed contour line with the minimum discrete value as a reference contour line.

Further, the forming process of the unified node and the reference node specifically includes:

drawing a calibration ray from the overlap center of the integrated line contour as its origin;

selecting the intersection points of the calibration ray with the remaining dispersed contour lines as the unified nodes corresponding to those dispersed contour lines;

and selecting the intersection point of the same calibration ray and the reference contour line as a reference node.

Further, the calculation process of the deviation mean specifically includes:

$\bar{D} = \frac{1}{N}\sum_{n=1}^{N}\varepsilon_n d_n$

wherein $\bar{D}$ represents the deviation mean; $d_n$ represents the distance between the n-th unified node and the reference node; $\varepsilon_n$ denotes the deviation coefficient of the n-th unified node, with $\varepsilon_1, \varepsilon_2, \dots, \varepsilon_N$ increasing or decreasing monotonically; and $N$ represents the number of unified nodes.

Further, the process of obtaining the fused gray-scale layer by fusing the original gray-scale layer specifically comprises the following steps:

extracting a gray gradient image from the original gray layer according to the standard gray level difference;

extracting gradient gray level histograms from the center to the edge of the gray level gradient map in all directions;

performing mean value calculation on gradient gray level histograms in the same direction in all original gray level image layers to obtain a mean value histogram;

fitting according to the gray value of each gradient in the mean histogram to obtain a gray change curve in the corresponding direction;

and carrying out gray level equalization processing on the gray values in the corresponding directions according to the gray level change curve to obtain a fusion gray level layer.

Further, the process of fusing the original RGB layers to obtain the restored RGB layers specifically includes:

extracting RGB-gray data pairs of the same pixel point from the original RGB layer and the original gray layer of the same target face image, and grouping the RGB-gray data pairs of the same pixel point across all target face images into RGB-gray data groups;

training on the RGB-gray data groups of all the pixel points to obtain a mapping relation between RGB values and gray values;

and performing RGB value restoration according to the mapping relation and the restoration gray layer to obtain a restoration RGB layer.

Further, the mapping relationship between the RGB values and the gray-scale values is obtained by training a deep learning neural network, and the deep learning neural network includes:

the input layer is used for extracting RGB values and gray values of all pixel points in the same target face image to form a plurality of data pairs, and integrating the data pairs of the same pixel points in all the target face images to obtain a data set;

the first hidden layer is used for training the RGB values and the gray values in each data group to obtain a mapping sub-function representing the relationship between the RGB values and the gray values in each data group;

the second hidden layer is used for classifying the mapping sub-functions by function type to obtain a plurality of function groups, extracting the function coefficients of the mapping sub-functions within each function group of the same type to obtain a coefficient set, and training on the coordinate values of the pixel points and the corresponding coefficient elements in the coefficient set to obtain a trajectory function representing the trajectory distribution of the mapping sub-functions;

and the output layer is used for integrating all trajectory functions and function types to obtain a mapping network representing the mapping relation between the RGB values and the gray values.

In a second aspect, a face image restoration system based on multi-feature image layer fusion is provided, which includes:

the image processing module is used for acquiring at least two original face images and carrying out spatial registration on all the original face images to obtain target face images under a unified viewing angle;

the image layer extraction module is used for extracting original contour image layers, original gray image layers and original RGB image layers of different target face images;

the contour fusion module is used for fusing all the original contour layers to obtain a restored contour layer;

the gray level fusion module is used for fusing all the original gray level layers to obtain a fused gray level layer, and cropping the fused gray level layer according to the restored contour layer to obtain a restored gray level layer;

the RGB fusion module is used for fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer;

and the restoration fusion module is used for overlaying the restored gray level layer and the restored RGB layer on the restored contour layer to obtain a restored image.

In a third aspect, a computer terminal is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the method for restoring a human face image based on multi-feature map layer fusion according to any one of the first aspect is implemented.

Compared with the prior art, the invention has the following beneficial effects:

1. the invention extracts an original contour layer, an original gray layer and an original RGB layer from each target face image and fuses the contours, gray values and RGB values of the multiple target face images independently, thereby effectively overcoming both the errors introduced by acquiring images through different channels and the fusion errors caused by lighting and facial micro-motion, so that a real image can be restored more clearly and accurately after the multiple target face images are fused;

2. in the process of fusing the original contour layers, overlapping the contours at the same center increases their degree of coincidence, selecting a reference contour line filters out the most marginal contours, and the integrated nodes determined from the calculated deviation mean bring the fused integrated contour lines closer to the real contour;

3. the invention obtains the gray change curve by averaging and fitting the gradient gray histograms, which effectively eliminates jumps in the gray distribution caused by reflections, backlight and acquisition intervals, keeps the fused gray values natural, and, through trimming or filling according to the restored contour layer, avoids the obvious post-processing traces produced by overlapping gray values at the edges;

4. the invention fills in the RGB values through the mapping relation obtained by deep learning neural network training, so that the rendering of the filled RGB values against the gray values matches the original appearance.

Drawings

The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:

FIG. 1 is an overall flow chart in an embodiment of the present invention;

FIG. 2 is a flowchart illustrating merging of restored contour layers according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a reference contour line according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of the formation of a unified node and a reference node in an embodiment of the present invention;

fig. 5 is a block diagram of a system in an embodiment of the invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.

Example 1: a face image restoration method based on multi-feature image layer fusion, as shown in fig. 1, comprises the following steps:

s1: acquiring at least two original face images, and carrying out spatial registration on all the original face images to obtain a target face image under a uniform visual angle;

s2: extracting original contour image layers, original gray image layers and original RGB image layers of different target face images;

s3: fusing all the original contour layers to obtain a restored contour layer;

s4: fusing all the original gray level layers to obtain a fused gray level layer, and intercepting from the fused gray level layer according to the restored contour layer to obtain a restored gray level layer;

s5: fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer;

s6: and overlapping the restored gray-scale image layer and the restored RGB image layer on the restored contour image layer to obtain a restored image.

It should be noted that the original face images used in the restoration process are not limited to the same acquisition device, and although multiple original face images may be used, generally no more than six are taken. In addition, the unified-viewing-angle processing performs a spatial-domain transformation on the original face images.
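As a concrete illustration of the registration step S1, the following Python sketch (an assumption of this write-up, not part of the patent) aligns each original face image to a reference view with a similarity transform estimated from facial landmarks; the landmark detector itself is assumed to be available separately.

```python
# Minimal sketch of spatial registration: warp every original face image
# into the coordinate frame of a chosen reference image via a similarity
# transform estimated from landmark correspondences.
import cv2
import numpy as np

def register_to_reference(images, landmarks, ref_index=0):
    """images: list of HxWx3 uint8 arrays; landmarks: list of (K, 2) arrays."""
    ref_pts = np.asarray(landmarks[ref_index], dtype=np.float32)
    h, w = images[ref_index].shape[:2]
    registered = []
    for img, pts in zip(images, landmarks):
        pts = np.asarray(pts, dtype=np.float32)
        # Similarity transform (rotation + uniform scale + translation),
        # estimated robustly from the landmark correspondences.
        M, _ = cv2.estimateAffinePartial2D(pts, ref_pts, method=cv2.RANSAC)
        registered.append(cv2.warpAffine(img, M, (w, h)))
    return registered
```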

As shown in fig. 2, the process of fusing the original contour layer to obtain the restored contour layer specifically includes:

s301: extracting contour curves of different facial features in each original contour map layer, and overlapping all contour curves corresponding to the same facial features at the same center to obtain an integrated line contour;

s302: screening out all non-integrated overlapped dispersed contour segments which are single contour lines in the contour of the integrated line, and remaining the dispersed contour segments which are single contour lines;

s303: selecting a dispersed contour line corresponding to the minimum discrete value in the dispersed contour segment as a reference contour line, and calculating the deviation average value of the unified nodes of all the remaining dispersed contour lines relative to the reference nodes of the reference contour line;

s304: determining integrated nodes corresponding to the unified nodes according to the deviation mean value, and fitting all the integrated nodes to form integrated contour lines;

s305: and embedding all the integrated contour lines into corresponding positions in the integrated contour segments to obtain the restored contour curves of the same facial features, and fusing the restored contour curves of different facial features to obtain the restored contour map layer.

It should be noted that a single contour line is not limited to exactly coincident lines with the same trajectory and zero separation; it may also be a set of lines judged to be one contour by computing a proportional weight from the recognition and distance differences and comparing the result against a preset threshold.
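The centering and screening of S301-S302 can be pictured with the following minimal sketch. It assumes every contour curve has already been resampled to the same number of points, and treats a segment as integrated into a single contour line when all curves stay within a small tolerance of their pointwise mean; both the resampling and the tolerance test are illustrative choices, not the patent's.

```python
# Sketch of S301-S302: overlap contour curves at a common center, then
# split the result into integrated (coincident) and dispersed segments.
import numpy as np

def overlap_at_common_center(curves):
    """Translate every (M, 2) contour curve so its centroid sits at the origin."""
    return [c - c.mean(axis=0) for c in curves]

def split_integrated_dispersed(curves, tol=1.5):
    stack = np.stack(curves)                     # (num_curves, M, 2)
    mean_curve = stack.mean(axis=0)              # pointwise mean contour
    # Max deviation of any curve from the mean, per contour point.
    spread = np.linalg.norm(stack - mean_curve, axis=2).max(axis=0)
    integrated_mask = spread <= tol              # True where curves coincide
    return mean_curve, integrated_mask
```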

As shown in fig. 3, the selection process of the reference contour line specifically includes:

s3031: selecting one of the dispersed contour lines as a current dispersed contour line;

s3032: calculating a discrete value according to the residual dispersion contour line to obtain a discrete value corresponding to the current dispersion contour line;

s3033: and comparing the discrete values corresponding to all the dispersed contour lines, and selecting the dispersed contour line with the minimum discrete value as a reference contour line.

Take as an example a local dispersed contour segment containing five dispersed contour lines, labeled A, B, C, D and E, with center point O. By calculation, the discrete values corresponding to contour lines A, B, C, D and E are 0.82, 0.84, 0.88, 0.83 and 0.79 respectively. Since dispersed contour line E has the minimum discrete value, it is selected as the reference contour line.
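The selection loop of S3031-S3033 might look as follows. The patent does not pin down the discrete-value formula; the sketch assumes it is the mean pointwise distance from the current dispersed contour line to the remaining ones, which reproduces the behavior of the example above (the line closest to the others wins).

```python
# Sketch of reference-contour selection, assuming all contour lines are
# (M, 2) arrays resampled to the same length.
import numpy as np

def discrete_value(current, others):
    """Assumed metric: mean pointwise distance to the remaining contours."""
    return float(np.mean([np.linalg.norm(current - o, axis=1).mean()
                          for o in others]))

def select_reference(contours):
    values = []
    for i, c in enumerate(contours):
        others = [o for j, o in enumerate(contours) if j != i]
        values.append(discrete_value(c, others))
    best = int(np.argmin(values))   # e.g. contour E with value 0.79 above
    return best, values
```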

As shown in fig. 4, the forming process of the unified node and the reference node specifically includes:

s3034: taking the overlapping center of the contour of the integrated line as an original point as a calibration ray;

s3035: selecting the intersection point of the calibration ray and the residual dispersed contour line as a unified node corresponding to the residual dispersed contour line, as shown by points a, b, c and d in the figure;

s3036: and selecting the intersection point of the same calibration ray and the reference contour line as a reference node, as shown by the point e in the figure.

The calculation process of the deviation mean value specifically comprises the following steps:

$\bar{D} = \frac{1}{N}\sum_{n=1}^{N}\varepsilon_n d_n$

wherein $\bar{D}$ represents the deviation mean; $d_n$ represents the distance between the n-th unified node and the reference node; $\varepsilon_n$ denotes the deviation coefficient of the n-th unified node, with $\varepsilon_1, \varepsilon_2, \dots, \varepsilon_N$ increasing or decreasing monotonically; and $N$ represents the number of unified nodes.
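Numerically, the deviation mean and one plausible placement of the integrated node (offsetting the reference node along the calibration ray, an interpretation of S304 rather than the patent's exact rule) can be sketched as:

```python
# Numerical rendering of the deviation-mean formula above. The deviation
# coefficients are assumed here to decay linearly with n (monotonically
# decreasing); any monotone sequence would satisfy the stated condition.
import numpy as np

def deviation_mean(unified, ref_node, coeffs=None):
    d = np.array([np.linalg.norm(u - ref_node) for u in unified])
    if coeffs is None:
        coeffs = np.linspace(1.0, 0.5, len(d))   # assumed epsilon sequence
    return float((coeffs * d).sum() / len(d))

def integrated_node(ref_node, center, dev_mean):
    """Assumed placement: offset the reference node by the deviation mean
    along the calibration ray direction."""
    direction = ref_node - center
    direction = direction / np.linalg.norm(direction)
    return ref_node + dev_mean * direction
```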

The process of obtaining the fused gray layer by fusing the original gray layer specifically comprises the following steps:

s401: extracting a gray gradient image from the original gray layer according to the standard gray level difference;

s402: extracting gradient gray level histograms from the center to the edge of the gray level gradient map in all directions;

s403: performing mean value calculation on gradient gray level histograms in the same direction in all original gray level image layers to obtain a mean value histogram;

s404: fitting according to the gray value of each gradient in the mean histogram to obtain a gray change curve in the corresponding direction;

s405: and carrying out gray level equalization processing on the gray values in the corresponding directions according to the gray level change curve to obtain a fusion gray level layer.

The process of obtaining the restored RGB layer by fusing the original RGB layer specifically comprises the following steps:

s501: extracting RGB-gray data pairs of the same pixel point according to an original RGB layer and an original gray layer in the same target face image, and forming RGB-gray data groups by the RGB-gray data pairs of the same pixel point in all target face images;

s502: training according to the RGB-gray level data group of all the pixel points to obtain a mapping relation between RGB values and gray levels;

s503: and performing RGB value restoration according to the mapping relation and the restoration gray layer to obtain a restoration RGB layer.

In this embodiment, the mapping relationship between the RGB values and the gray-scale values is obtained by deep learning neural network training. The deep learning neural network comprises an input layer, a first hidden layer, a second hidden layer and an output layer.

Wherein the input layer: extracts the RGB values and gray values of all pixel points in the same target face image to form a plurality of data pairs, and integrates the data pairs of the same pixel points across all target face images to obtain a data set.

A first hidden layer: trains on the RGB values and gray values in each data group to obtain a mapping sub-function representing the relationship between the RGB values and gray values in that data group.

A second hidden layer: classifies the mapping sub-functions by function type to obtain a plurality of function groups, extracts the function coefficients of the mapping sub-functions within each function group of the same type to obtain a coefficient set, and trains on the coordinate values of the pixel points and the corresponding coefficient elements in the coefficient set to obtain a trajectory function representing the trajectory distribution of the mapping sub-functions.

An output layer: integrates all trajectory functions and function types to obtain a mapping network representing the mapping relation between the RGB values and the gray values.

The RGB values are filled in through the mapping relation obtained by deep learning neural network training, so that the rendering of the filled RGB values against the gray values accurately matches the original appearance.
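The hierarchical network described above (per-group mapping sub-functions plus a trajectory function over their coefficients) has no off-the-shelf equivalent; as an illustrative stand-in, the following PyTorch sketch trains a conventional small MLP that maps a pixel's gray value and normalized coordinates to an RGB value, playing the same role when restoring the RGB layer in S503.

```python
# Stand-in for the mapping network: (gray, x, y) -> (r, g, b), all in [0, 1].
import torch
import torch.nn as nn

class GrayToRGB(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),    # input: (gray, x, y)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid()  # output: (r, g, b)
        )

    def forward(self, x):
        return self.net(x)

def train_mapping(inputs, targets, epochs=200, lr=1e-3):
    """inputs: (P, 3) gray + coords; targets: (P, 3) RGB; both in [0, 1]."""
    model = GrayToRGB()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return model
```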

Example 2: as shown in fig. 5, the face image restoration system based on multi-feature image layer fusion includes an image processing module, an image layer extraction module, a contour fusion module, a gray level fusion module, an RGB fusion module, and a restoration fusion module.

The image processing module is used for acquiring at least two original face images and carrying out spatial registration on all the original face images to obtain target face images under a unified viewing angle. The layer extraction module is used for extracting original contour layers, original gray level layers and original RGB layers of different target face images. The contour fusion module is used for fusing all the original contour layers to obtain a restored contour layer. The gray level fusion module is used for fusing all the original gray level layers to obtain a fused gray level layer, and cropping the fused gray level layer according to the restored contour layer to obtain a restored gray level layer. The RGB fusion module is used for fusing all the original RGB layers to obtain a restored RGB layer corresponding to the restored gray level layer. The restoration fusion module is used for overlaying the restored gray level layer and the restored RGB layer on the restored contour layer to obtain a restored image.
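A minimal skeleton showing how the six modules of fig. 5 could be wired in code, with each module assumed to wrap the corresponding routine sketched earlier:

```python
# Hypothetical wiring of the six modules in the order of the method.
class FaceRestorationSystem:
    def __init__(self, image_processing, layer_extraction, contour_fusion,
                 gray_fusion, rgb_fusion, restoration_fusion):
        self.modules = [image_processing, layer_extraction, contour_fusion,
                        gray_fusion, rgb_fusion, restoration_fusion]

    def restore(self, original_images):
        targets = self.modules[0](original_images)        # registration
        contours, grays, rgbs = self.modules[1](targets)  # layer extraction
        contour_layer = self.modules[2](contours)         # contour fusion
        gray_layer = self.modules[3](grays, contour_layer)
        rgb_layer = self.modules[4](rgbs, gray_layer)
        return self.modules[5](contour_layer, gray_layer, rgb_layer)
```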

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
