Adversarial example defense method based on average compression for acquiring low-frequency information


1. An adversarial example defense method based on average compression for acquiring low-frequency information, characterized by comprising the following steps:

averagely compressing the pixels in an original training image, and then filling the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image;

generating two identical convolutional neural network models, a first neural network model and a second neural network model, wherein the first neural network model is trained with the original training images and the second neural network model is trained with the low-frequency information images;

adding a perturbation to an original test image to generate an adversarial example;

averagely compressing the pixels in the adversarial example, and then filling the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example;

inputting the adversarial example into the first neural network model for recognition to obtain a first recognition result;

inputting the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result;

and combining the first recognition result and the second recognition result to obtain a final recognition result.

2. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the step of averagely compressing the pixels in the original training image and then filling the compressed pixels, according to the compression ratio, into the first low-frequency information image of the same size as the original training image comprises:

calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the original training image, and then filling the averages back to the size of the original training image at a ratio of one to four to obtain the first low-frequency information image.

3. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the step of averagely compressing the pixels in the adversarial example and then filling the compressed pixels, according to the compression ratio, into the second low-frequency information image of the same size as the adversarial example comprises the following step:

calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the adversarial example, and filling the averages back, at a ratio of one to four, to the original image size to obtain the second low-frequency information image.

4. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the adversarial example and the original test image satisfy the relationship:

x' = x + ε · sign(∇_x J(θ, x, y))

where θ denotes the model parameters, x is the original test image, x' is the adversarial example, y is the label corresponding to x, J(·) is the loss function, ∇_x J(θ, x, y) is the gradient of the loss function with respect to x, and ε is the perturbation value.

5. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 4, wherein the step of generating the adversarial example comprises:

converting the original test image into adversarial examples with different perturbation magnitudes by adjusting the value of ε in the relationship between the adversarial example and the original test image.

6. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the step of combining the first recognition result and the second recognition result comprises:

adding the corresponding values of the first recognition result and the second recognition result element by element to obtain the final recognition result.

7. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the step of combining the first recognition result and the second recognition result comprises:

detecting the perturbation value of the adversarial example; outputting the second recognition result as the final recognition result when the perturbation value is larger than a preset value, and outputting the first recognition result as the final recognition result when the perturbation value is smaller than or equal to the preset value.

8. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the first neural network model adopts a LeNet convolutional neural network model.

9. The adversarial example defense method based on average compression for acquiring low-frequency information according to claim 1, wherein the second neural network model adopts a LeNet convolutional neural network model.

10. An adversarial example defense system based on average compression for acquiring low-frequency information, comprising:

a first low-frequency information extraction module, configured to averagely compress the pixels in an original training image and then fill the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image;

a model generation module, configured to generate two identical convolutional neural network models, wherein the first neural network model is trained with the original training images and the second neural network model is trained with the low-frequency information images;

a sample generation module, configured to add a perturbation to an original test image to generate an adversarial example;

a second low-frequency information extraction module, configured to averagely compress the pixels in the adversarial example and then fill the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example;

a first recognition module, configured to input the adversarial example into the first neural network model for recognition to obtain a first recognition result;

a second recognition module, configured to input the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result;

and a combination module, configured to combine the first recognition result and the second recognition result to obtain a final recognition result.

Background

Deep neural networks are widely used in everyday life, and the application of convolutional neural networks to computer vision is considered one of the most successful applications of neural networks. Convolutional neural networks are now used in character recognition, face recognition, object detection, intelligent transportation, autonomous driving and other fields, all of which place high requirements on the security and robustness of a model. An adversarial example is an image to which a perturbation has been added that is either imperceptible to humans or perceptible but does not affect human recognition. To human vision, image information is two-dimensional: a person recognizes an object from its approximate shape without carefully examining the specific numerical values in the image, so humans are far less sensitive to high-frequency information in an image than to low-frequency information. To a model, however, an image is high-dimensional information whose dimensionality is determined by the number of pixels, so the model is very sensitive to high-frequency information. If only the low-frequency information in an image is used for recognition, the recognition accuracy of the model is greatly reduced.

Existing defense methods against adversarial examples fall mainly into three categories, corresponding to three directions:

First, the training process: in this direction the training samples are modified or augmented to adjust the model weights, reducing the influence of the perturbation during prediction and thereby improving the robustness of the model; for example, adversarial examples are added to the training set. Second, the model itself: the model is modified so that the extracted features are changed or their weights are influenced, reducing the effect of the perturbation on the prediction result and thereby improving the robustness of the model. Third, the data: data preprocessing enhances the part of the data that improves model robustness and highlights its importance in prediction, thereby improving the robustness of the model. The first and second approaches both aim to reduce, during prediction, the weight of the pixels that may be perturbed, so that the added perturbation affects the prediction result as little as possible and the robustness of the model is improved. The third approach reduces or filters out the perturbation as far as possible through data preprocessing before prediction, completing a correct prediction without affecting the model weights. However, all the existing methods need to change the parameters of the original model to improve its robustness, for example by adding adversarial examples to the training set, changing the extracted features or influencing their weights, which may reduce the accuracy of the model on clean images.

Disclosure of Invention

The embodiments of the present application provide an adversarial example defense method based on average compression for acquiring low-frequency information, which improves the robustness of a model without affecting its accuracy, so that the prediction accuracy of the model is increased when defending against adversarial example attacks.

The present application provides an adversarial example defense method based on average compression for acquiring low-frequency information, comprising the following steps: averagely compressing the pixels in an original training image, and then filling the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image; generating two identical convolutional neural network models, wherein the first neural network model is trained with the original training images and the second neural network model is trained with the low-frequency information images; adding a perturbation to an original test image to generate an adversarial example; averagely compressing the pixels in the adversarial example, and then filling the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example; inputting the adversarial example into the first neural network model for recognition to obtain a first recognition result; inputting the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result; and combining the first recognition result and the second recognition result to obtain a final recognition result.

Optionally, averagely compressing the pixels in the original training image and then filling the compressed pixels, according to the compression ratio, into the first low-frequency information image of the same size as the original training image comprises: calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the original training image, and then filling the averages back to the size of the original training image at a ratio of one to four to obtain the first low-frequency information image.

Optionally, averagely compressing the pixels in the adversarial example and then filling the compressed pixels, according to the compression ratio, into the second low-frequency information image of the same size as the adversarial example comprises the following step: calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the adversarial example, and filling the averages back at a ratio of one to four to obtain the second low-frequency information image.

Optionally, the relationship between the adversarial example and the original test image is:

x' = x + ε · sign(∇_x J(θ, x, y))

where θ denotes the model parameters, x is the original test image, x' is the adversarial example, y is the label corresponding to x, J(·) is the loss function, ∇_x J(θ, x, y) is the gradient of the loss function with respect to x, and ε is the perturbation value.

Optionally, the step of generating the adversarial example comprises: converting the original test image into adversarial examples with different perturbation magnitudes by adjusting the value of ε in the relationship between the adversarial example and the original test image.

Optionally, the step of combining the first recognition result and the second recognition result comprises: adding the corresponding values of the first recognition result and the second recognition result element by element to obtain the final recognition result.

Optionally, the step of combining the first recognition result and the second recognition result comprises: detecting the perturbation value of the adversarial example, outputting the second recognition result as the final recognition result when the perturbation value is larger than a preset value, and outputting the first recognition result as the final recognition result when the perturbation value is smaller than or equal to the preset value.

Optionally, the first neural network model adopts a LeNet convolutional neural network model.

Optionally, the second neural network model employs a LeNet convolutional neural network model.

The present application also provides an adversarial example defense system based on average compression for acquiring low-frequency information, comprising: a first low-frequency information extraction module, configured to averagely compress the pixels in an original training image and then fill the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image; a model generation module, configured to generate two identical convolutional neural network models, wherein the first neural network model is trained with the original training images and the second neural network model is trained with the low-frequency information images; a sample generation module, configured to add a perturbation to an original test image to generate an adversarial example; a second low-frequency information extraction module, configured to averagely compress the pixels in the adversarial example and then fill the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example; a first recognition module, configured to input the adversarial example into the first neural network model for recognition to obtain a first recognition result; a second recognition module, configured to input the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result; and a combination module, configured to combine the first recognition result and the second recognition result to obtain a final recognition result.

According to the above technical solution, the embodiments of the present application have the following advantages:

In the adversarial example defense method based on average compression for acquiring low-frequency information, the adversarial example and the second low-frequency information image extracted from it by compression are recognized by the first and the second convolutional neural network model respectively, and the recognition results of the two models are then combined. By combining the two results, the accuracy when facing adversarial examples is markedly improved, achieving a good effect.

Drawings

In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of the adversarial example defense method based on average compression for acquiring low-frequency information in an embodiment of the present specification;

FIG. 2 is a block diagram of the adversarial example defense system based on average compression for acquiring low-frequency information in an embodiment of the present specification.

Detailed Description

In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Referring to FIG. 1, an embodiment of the present application provides an adversarial example defense method based on average compression for acquiring low-frequency information, comprising the following steps:

Averagely compress the pixels in the original training image, and then fill the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image. Average compression means that several adjacent pixels in the original image are compressed into one pixel whose value is the average of the compressed pixels.
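As an illustration only, a minimal PyTorch sketch of this average-compression step is shown below; the function name average_compress and the (N, C, H, W) batch layout are assumptions of the sketch, not part of the embodiment.

```python
import torch

def average_compress(x: torch.Tensor, block: int = 2) -> torch.Tensor:
    """Replace each block x block group of adjacent pixels by its average, then
    fill the average back at a ratio of one to block*block so that the output
    (the low-frequency information image) has the same size as the input.
    x has shape (N, C, H, W)."""
    n, c, h, w = x.shape
    assert h % block == 0 and w % block == 0, "image size must be divisible by the block size"
    # Compression: average every block x block neighbourhood (e.g. 2 x 2 -> one value).
    pooled = x.reshape(n, c, h // block, block, w // block, block).mean(dim=(3, 5))
    # Filling: repeat each average block x block times to restore the original size.
    return pooled.repeat_interleave(block, dim=2).repeat_interleave(block, dim=3)
```

For a 28 × 28 image and block = 2, each average of a 2 × 2 group of pixels is filled back at a ratio of one to four, matching the specific embodiment described further below.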

Generate two identical convolutional neural network models: the first neural network model is trained with the original training images, and the second neural network model is trained with the low-frequency information images. The training sets of the first and the second model are, respectively, the unprocessed original training images and the low-frequency information images, so that the two models are used to recognize unprocessed clean images and low-frequency images respectively.
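A minimal training sketch under stated assumptions: LeNet refers to a model class like the one sketched near the end of this description, and clean_train_loader / lowfreq_train_loader are hypothetical DataLoaders over the two training sets.

```python
import copy
import torch
import torch.nn.functional as F

def train(model, loader, epochs=5, lr=1e-3, device="cpu"):
    """One generic training loop, reused for both models."""
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Two identical networks: the first trained on the original (clean) training
# images, the second on their average-compressed low-frequency versions.
model_clean = LeNet()                        # hypothetical LeNet class, sketched further below
model_lowfreq = copy.deepcopy(model_clean)   # identical architecture and initial weights
train(model_clean, clean_train_loader)       # clean_train_loader / lowfreq_train_loader are
train(model_lowfreq, lowfreq_train_loader)   # assumed DataLoaders over the two training sets
```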

Add a perturbation to the original test image to generate an adversarial example, the perturbation being either imperceptible to humans or perceptible to humans but not affecting human recognition.

Averagely compress the pixels in the adversarial example, and then fill the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example. This embodiment therefore has two test sets: one consists of the adversarial examples without low-frequency processing, and the other of the adversarial examples after low-frequency processing, i.e. the second low-frequency information images.

Input the adversarial example into the first neural network model for recognition to obtain a first recognition result; input the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result; and combine the first recognition result and the second recognition result to obtain the final recognition result. By combining the recognition results of the two models, when a clean original test image or an adversarial example with a small perturbation is detected, the first neural network model is sensitive to high-frequency information and the perturbation has little influence on its result, so the recognition result of the first neural network model is more reliable at that point; when the perturbation of the adversarial example is large, the second neural network model is insensitive to the high-frequency perturbation and less affected by it, and part of the perturbation is filtered out during compression, so the recognition result of the second neural network model is more reliable at that point. By combining the two results, the accuracy against adversarial examples with larger perturbations is markedly improved while the accuracy on original test images and on adversarial examples with small perturbations remains very high, achieving a good effect.
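A sketch of this combined recognition step, reusing the average_compress, model_clean and model_lowfreq names introduced in the earlier sketches; the element-wise sum shown here corresponds to the first combination mode described below.

```python
import torch

def predict_combined(x, model_clean, model_lowfreq, block=2):
    """First result: the (possibly adversarial) image fed to the first model.
    Second result: its low-frequency version fed to the second model.
    Final result: element-wise sum of the two recognition results."""
    model_clean.eval()
    model_lowfreq.eval()
    with torch.no_grad():
        p1 = torch.softmax(model_clean(x), dim=1)                             # first recognition result
        p2 = torch.softmax(model_lowfreq(average_compress(x, block)), dim=1)  # second recognition result
    return (p1 + p2).argmax(dim=1)                                            # combined final prediction
```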

Further, in a specific embodiment of the present application, the pixels in the original training image are averagely compressed and the compressed pixels are then filled, according to the compression ratio, into the first low-frequency information image of the same size as the original training image as follows: the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the original training image is calculated, and the averages are then filled back to the size of the original training image at a ratio of one to four to obtain the first low-frequency information image. Likewise, averagely compressing the pixels in the adversarial example and then filling the compressed pixels, according to the compression ratio, into the second low-frequency information image of the same size as the adversarial example comprises the following step: calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the adversarial example, and filling the averages back at a ratio of one to four to obtain the second low-frequency information image. In another embodiment, other groups of adjacent pixels in the original training image may be compressed into one pixel, for example six pixels arranged in a 2 × 3 block or nine pixels arranged in a 3 × 3 block; the filling ratio after compression correspondingly becomes one to six or one to nine, and the adversarial example is compressed and filled in the same form.

Further, in a specific embodiment of the present application, the relationship between the adversarial example and the original test image is:

x' = x + ε · sign(∇_x J(θ, x, y))

where θ denotes the model parameters, x is the original test image, x' is the adversarial example, y is the label corresponding to x, J(·) is the loss function, ∇_x J(θ, x, y) is the gradient of the loss function with respect to x, and ε is the perturbation value. The step of generating the adversarial example comprises: converting the original test image into adversarial examples with different perturbation magnitudes by adjusting the value of ε in this relationship.
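The relationship above corresponds to the fast gradient sign method; a minimal PyTorch sketch of this generation step is given below, where the function name generate_adversarial is illustrative and the clamp to [0, 1] reflects the pixel range mentioned in the experiment.

```python
import torch
import torch.nn.functional as F

def generate_adversarial(model, x, y, epsilon):
    """x' = x + epsilon * sign(grad_x J(theta, x, y)) for a batch of test images."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)           # J(theta, x, y)
    loss.backward()                                   # gradient of the loss with respect to the input
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # add a perturbation of size epsilon
    return x_adv.clamp(0.0, 1.0).detach()             # keep pixel values in [0, 1]
```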

Further, in a specific embodiment, the step of combining the first recognition result and the second recognition result comprises: adding the corresponding values of the first recognition result and the second recognition result element by element to obtain the final recognition result. In another embodiment, the combination step is: detecting the perturbation value of the adversarial example, outputting the second recognition result as the final recognition result when the perturbation value is larger than a preset value, and outputting the first recognition result as the final recognition result when the perturbation value is smaller than or equal to the preset value. Both modes make full use of the two results, so that the accuracy against adversarial examples with larger perturbations is markedly improved while the accuracy on the original test image and on adversarial examples with small perturbations remains high, achieving a good effect.
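A sketch of the second combination mode; how the perturbation value is detected is not specified in this embodiment, so estimate_perturbation below stands in as a purely hypothetical detector, and the preset value threshold is likewise only an assumed example.

```python
import torch

def predict_switch(x, model_clean, model_lowfreq, estimate_perturbation,
                   threshold=0.10, block=2):
    """Use the second result when the detected perturbation exceeds the preset
    value, otherwise use the first result."""
    with torch.no_grad():
        p1 = torch.softmax(model_clean(x), dim=1)                             # first recognition result
        p2 = torch.softmax(model_lowfreq(average_compress(x, block)), dim=1)  # second recognition result
    eps_hat = estimate_perturbation(x)        # hypothetical detector of the perturbation value
    return (p2 if eps_hat > threshold else p1).argmax(dim=1)
```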

Further, both the first neural network model and the second neural network model adopt the LeNet convolutional neural network model. Because image data come in different sizes, the classic network structure is modified, in particular its INPUT layer and the input of its fully connected layer, so that it can be applied to images of the corresponding size.
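Since the exact layer sizes are not given, the following is only one plausible LeNet-style adaptation in which the input and the fully connected layer are sized for 28 × 28 single-channel images.

```python
import torch.nn as nn

class LeNet(nn.Module):
    """LeNet-style CNN with the input and fully connected layers sized for 28 x 28 images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),            # 14 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```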

To verify the effects of the embodiments of the present application, the following verification tests were performed.

The experiment in this embodiment uses the MNIST dataset as the training and attack dataset. The MNIST dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in fixed-size images (28 × 28 pixels) with pixel values from 0 to 1. The classic LeNet convolutional neural network is used to generate the first and the second neural network model. In the test, the first low-frequency information image is obtained by calculating the average of the pixel values of each group of four adjacent pixels arranged in a 2 × 2 block in the original training image and then filling the averages back to the size of the original training image at a ratio of one to four. The adversarial examples are generated according to the relationship x' = x + ε · sign(∇_x J(θ, x, y)) between the adversarial example and the original test image given above. The results of the experiment are shown in Table 1.

TABLE 1

As can be seen from Table 1, using the low-frequency information of the image extracted by average compression to assist prediction improves the robustness of the model when defending against adversarial attacks. With the model of this embodiment, the recognition accuracy is improved by more than 15% when 0.10 < ε < 0.20, with the improvement reaching up to 19%. When the perturbation is small, i.e. undetectable to the human eye, the drop in model accuracy is very small. When the perturbation is large, the human eye can actually tell that a perturbation has been added; some of these perturbations interfere with human recognition, some images that could previously be recognized become difficult to recognize, and such perturbations are also difficult to filter out during compression. The modified model is nevertheless significantly more accurate in the face of adversarial examples. Although the improvement is not especially large when the perturbation is big, part of the images are so damaged by the perturbation that human judgment may also be affected, which to some extent departs from the definition of an adversarial example. Therefore, the method provided by the embodiments of the present application achieves a good effect in defending against adversarial examples.

Referring to FIG. 2, the present application further provides an adversarial example defense system based on average compression for acquiring low-frequency information, comprising: a first low-frequency information extraction module 1, configured to averagely compress the pixels in an original training image and then fill the compressed pixels, according to the compression ratio, into a first low-frequency information image of the same size as the original training image; a model generation module 2, configured to generate two identical convolutional neural network models, wherein the first neural network model is trained with the original training images and the second neural network model is trained with the low-frequency information images; a sample generation module 3, configured to add a perturbation to an original test image to generate an adversarial example; a second low-frequency information extraction module 4, configured to averagely compress the pixels in the adversarial example and then fill the compressed pixels, according to the compression ratio, into a second low-frequency information image of the same size as the adversarial example; a first recognition module 5, configured to input the adversarial example into the first neural network model for recognition to obtain a first recognition result; a second recognition module 6, configured to input the second low-frequency information image into the second neural network model for recognition to obtain a second recognition result; and a combination module 7, configured to combine the first recognition result and the second recognition result to obtain a final recognition result.

Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the system described above; its functions and effects are the same and are not repeated here.

The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
