RBF neural network-based intelligent prediction method for ammonia concentration in household garbage collection process
1. An intelligent prediction method for ammonia concentration in a household garbage collection process based on an RBF neural network is characterized by comprising the following steps:
step 1, collecting field data;
ammonia (NH3) concentration data are collected and stored in real time by detection equipment installed in a household garbage sorting station, with a sampling interval of 30 seconds;
step 2, determining input and output variables of a prediction model;
the input variables of the prediction model are expressed as x = (x_1, x_2, x_3, x_4)^T, representing the ammonia (NH3) concentrations at times t-3, t-2, t-1 and t, respectively; the output variable y of the model is the NH3 concentration at time t+2;
step 3, designing an RBF neural network, and establishing a prediction model;
step 4, using the test data as the input of the prediction model, the output of the model being the predicted ammonia (NH3) concentration 60 seconds into the future;
in step 3, the output calculation method of the prediction model based on the RBF neural network is as follows:
① input layer: this layer consists of 4 input neurons, the output of each neuron being:
u_i = x_i   (1)
where u_i is the output of the i-th input neuron and x_i is the i-th component of the input vector;
② hidden layer: the hidden layer consists of J neurons, the output of each neuron being:
where Φ_j(x) denotes the output of the j-th hidden-layer neuron for input vector x, c_j is the center vector of the j-th hidden node, and σ_j is the width of the j-th hidden node;
③ output layer: the output of the output layer is:
y = Σ_{j=1}^{J} w_j Φ_j(x)   (3)
where y is the output of the RBF neural network, i.e. the prediction model, w_j is the connection weight from the j-th hidden-layer neuron to the output neuron, and Φ_j is the output of the j-th hidden-layer neuron;
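A minimal sketch of this forward computation, assuming the common Gaussian basis Φ_j(x) = exp(-||x - c_j||^2/σ_j^2) since expression (2) is not reproduced above; the function and variable names are illustrative:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of the RBF prediction model.

    x       : input vector of the 4 past NH3 concentrations
    centers : (J, 4) array of hidden-node center vectors c_j
    widths  : (J,)   array of hidden-node widths sigma_j
    weights : (J,)   array of output weights w_j
    Returns the scalar prediction y = sum_j w_j * Phi_j(x).
    """
    x = np.asarray(x, dtype=float)
    d2 = np.sum((np.asarray(centers, dtype=float) - x) ** 2, axis=1)  # ||x - c_j||^2
    phi = np.exp(-d2 / np.asarray(widths, dtype=float) ** 2)          # assumed Gaussian basis
    return float(np.asarray(weights, dtype=float) @ phi)
```

For instance, with x a NumPy array and a single neuron centered at x itself with unit width, rbf_forward(x, x[None, :], np.ones(1), np.ones(1)) returns 1.0, i.e. the neuron responds maximally at its own center.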
the step 3 specifically comprises the following steps:
first, at the initial moment, the number of neurons in the hidden layer of the network is 0;
second, search for the sample with the maximum absolute expected output value, and add the first neuron based on the information of that sample;
at the initial moment the hidden layer contains 0 neurons, so the data sample corresponding to the maximum absolute residual of the current network is the sample k_1 with the maximum absolute expected output:
k_1 = arg max[ ||y_d1||, ||y_d2||, ..., ||y_dp||, ..., ||y_dP|| ]   (4)
where P is the number of training samples and y_dp is the expected output of the p-th sample; the parameters of the first RBF neuron are then set as follows:
c_1 = x_k1   (5)
w_1 = y_dk1   (6)
σ_1 = 1   (7)
where c_1, w_1 and σ_1 are the center vector, weight and width of the first RBF neuron, respectively, and x_k1 and y_dk1 are the input vector and desired output of sample k_1, respectively;
third, adjust the network parameters using a second-order learning algorithm, as follows:
Ψ(η+1) = Ψ(η) - (H(η) + λI(η))^{-1} Ω(η)   (8)
where η is the iteration step of the parameter adjustment, H is the quasi-Hessian matrix, λ is the learning rate, I is the identity matrix, Ω is the gradient vector, and Ψ denotes all network parameters to be adjusted:
Ψ(η) = [c_1(η), σ_1(η), w_1(η)]   (9)
to reduce the computational complexity, the quasi-Hessian matrix H is computed as the sum of P quasi-Hessian sub-matrices h_p, and the gradient vector Ω as the sum of P gradient sub-vectors g_p:
H(η) = Σ_{p=1}^{P} h_p(η)   (10)
Ω(η) = Σ_{p=1}^{P} g_p(η)   (11)
at the η-th parameter adjustment, the p-th quasi-Hessian sub-matrix h_p(η) and gradient sub-vector g_p(η) are calculated as follows:
where e_p(η) is the difference between the predicted output y_p(η) of the p-th sample at the η-th adjustment and its desired output y_dp, and j_p(η) is the Jacobian vector, calculated as follows:
e_p(η) = y_p(η) - y_dp   (14)
using the chain rule, the elements of the Jacobian vector j_p(η) are calculated as follows:
where x_p1, x_p2, x_p3 and x_p4 are the 4 components of the input vector of the p-th sample, c_1(η) = (c_11(η), c_12(η), c_13(η), c_14(η))^T is the center vector at the η-th adjustment, w_1(η) and σ_1(η) are the weight and width at the η-th adjustment, and y_p(η) is the network output at the η-th adjustment;
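A minimal sketch of one such second-order adjustment, written for a general number of hidden neurons and under two stated assumptions: the Gaussian basis Φ_j(x) = exp(-||x - c_j||^2/σ_j^2) for the derivatives, and the standard Levenberg-Marquardt-style per-sample terms h_p = j_p j_p^T and g_p = j_p e_p (the corresponding equations are not reproduced above); names are illustrative:

```python
import numpy as np

def lm_step(X, yd, centers, widths, weights, lam=0.01):
    """One second-order update Psi <- Psi - (H + lam*I)^(-1) * Omega over all training samples.

    Assumes the Gaussian basis Phi_j(x) = exp(-||x - c_j||^2 / sigma_j^2) and the standard
    LM-style per-sample terms h_p = j_p j_p^T and g_p = j_p e_p.
    Parameter layout per neuron: [c_j1..c_j4, sigma_j, w_j].
    """
    X = np.asarray(X, dtype=float)
    yd = np.asarray(yd, dtype=float)
    centers = np.array(centers, dtype=float)
    widths = np.array(widths, dtype=float)
    weights = np.array(weights, dtype=float)
    J, D = centers.shape
    n_par = J * (D + 2)
    H = np.zeros((n_par, n_par))          # quasi-Hessian, accumulated as the sum of the h_p
    Omega = np.zeros(n_par)               # gradient vector, accumulated as the sum of the g_p
    for x, ydp in zip(X, yd):
        diff = x - centers                                 # (J, D)
        d2 = np.sum(diff ** 2, axis=1)
        phi = np.exp(-d2 / widths ** 2)                    # assumed Gaussian hidden outputs
        e = float(weights @ phi) - ydp                     # e_p = y_p - y_dp
        jp = np.zeros(n_par)                               # Jacobian of e_p w.r.t. all parameters
        for j in range(J):
            b = j * (D + 2)
            jp[b:b + D] = weights[j] * phi[j] * 2.0 * diff[j] / widths[j] ** 2   # d e_p / d c_j
            jp[b + D] = weights[j] * phi[j] * 2.0 * d2[j] / widths[j] ** 3       # d e_p / d sigma_j
            jp[b + D + 1] = phi[j]                                               # d e_p / d w_j
        H += np.outer(jp, jp)
        Omega += jp * e
    step = np.linalg.solve(H + lam * np.eye(n_par), Omega)
    for j in range(J):                                     # write the update back into the arrays
        b = j * (D + 2)
        centers[j] -= step[b:b + D]
        widths[j] -= step[b + D]
        weights[j] -= step[b + D + 1]
    return centers, widths, weights
```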
fourth, calculate the error vector, find the position of the error peak point, add a new (the l-th) RBF neuron at that position, and adjust the network parameters using the second-order learning algorithm of the third step;
for all training-set samples, the error vector is calculated as follows:
e = [e_1, e_2, ..., e_p, ..., e_P]^T   (22)
the error value for the p-th sample is calculated as follows:
e_p = y_p - y_dp   (23)
where y_dp and y_p are the expected output and the network output of the p-th sample, respectively; the position of the current error peak point is then found as:
k = arg max_p ||e_p||   (24)
a new (the l-th) RBF neuron is added based on the information of the k-th sample, with the center vector c_l and output weight w_l of the new RBF neuron set as follows:
c_l = x_k   (25)
w_l = y_dk - y_k   (26)
where x_k is the input vector of the k-th sample, and y_dk and y_k are the expected output and the network output of that sample, respectively;
the influence of the existing neurons on the newly added neuron is considered small when the following relation is satisfied:
c_min = arg min( dist(c_l, c_{j≠l}) )   (28)
where c_min is the center vector of the neuron closest to the l-th RBF neuron; the following relationship can then be obtained:
σ_l ≤ 0.7 ||c_l - c_min||   (29)
in the experiments, the width of the l-th RBF neuron is set as σ_l = 0.7 ||c_l - c_min||;
Then, the formula (8) in the third step is adopted to adjust the network parameters;
if the number of RBF neurons reaches J_max or the network learning precision reaches E_0, the design of the RBF neural network and the establishment of the prediction model are complete; in the experiments, J_max = 10 and E_0 = 0.0001; the learning precision of the network is measured by the mean square error (MSE), calculated as follows:
MSE = (1/P) Σ_{p=1}^{P} (y_dp - y_p)^2   (30)
where y_dp and y_p are the expected output and the network output of the p-th sample, respectively, and P is the number of training samples.
Background
With the rapid economic development and the accelerating urbanization in China, the output of municipal household garbage is increasing day by day. To improve garbage treatment, the state has issued a series of policies to promote garbage classification and prevent environmental risks. Perishable garbage accounts for a large proportion of household garbage; it is highly degradable and decays easily, making it the main source of the malodorous gas generated during household garbage collection, which seriously endangers residents' physical and mental health and living environment. Ammonia is one of the main components of this malodorous gas, and if the future trend of the ammonia concentration can be predicted, corresponding measures can be taken to control ammonia emission. Therefore, accurate prediction of the ammonia concentration trend has important theoretical significance and application value.
Disclosure of Invention
The invention aims to provide an intelligent ammonia gas prediction method in a household garbage collection process based on an RBF neural network.
The invention adopts the following technical scheme and implementation steps:
(1) data acquisition: ammonia concentration data are collected and stored by detection equipment installed in a household garbage sorting station, with a sampling interval of 30 seconds;
(2) determining the input and output variables of the prediction model: the input variables of the prediction model are expressed as x = (x_1, x_2, x_3, x_4)^T, representing the ammonia (NH3) concentrations at times t-3, t-2, t-1 and t, respectively; the output variable y of the model is the NH3 concentration at time t+2;
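As an illustration, the following minimal Python/NumPy sketch shows how such input/output pairs could be assembled from the 30-second NH3 concentration series; the function and array names are assumptions of this sketch:

```python
import numpy as np

def build_samples(series):
    """Build (x, y) training pairs from an NH3 concentration series sampled every 30 s.

    Each input is x = (c[t-3], c[t-2], c[t-1], c[t]) and the target is y = c[t+2].
    """
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(3, len(series) - 2):
        X.append(series[t - 3:t + 1])   # NH3 concentrations at t-3, t-2, t-1, t
        y.append(series[t + 2])         # NH3 concentration at t+2
    return np.array(X), np.array(y)
```

Because one sampling step is 30 seconds, the target at t+2 corresponds to a prediction horizon of 60 seconds.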
(3) design of an RBF neural network model for intelligent prediction of the ammonia (NH3) concentration: based on the training data, an RBF neural network is established to realize intelligent ammonia prediction during household garbage collection; the predicted NH3 concentration value at time t+2 is then calculated as follows:
y = Σ_{j=1}^{J} w_j Φ_j(x)   (1)
where y is the output of the RBF neural network prediction model, x is the input vector composed of the ammonia (NH3) concentrations at times t-3, t-2, t-1 and t, w_j is the connection weight from the j-th hidden-layer neuron to the output neuron, Φ_j is the output of the j-th hidden-layer neuron, and J is the number of hidden-layer neurons; the RBF network design process is as follows:
first, at the initial moment, the number of neurons in the hidden layer of the network is 0;
second, search for the sample with the maximum absolute expected output value, and add the first neuron based on the information of that sample;
at the initial moment the hidden layer contains 0 neurons, so the data sample corresponding to the maximum absolute residual of the current network is the sample k_1 with the maximum absolute expected output:
k_1 = arg max[ ||y_d1||, ||y_d2||, ..., ||y_dp||, ..., ||y_dP|| ]   (2)
where P is the number of training samples and y_dp is the expected output of the p-th sample; the parameters of the first added RBF neuron are thus set as follows:
c_1 = x_k1   (3)
w_1 = y_dk1   (4)
σ_1 = 1   (5)
where c_1, w_1 and σ_1 are the center vector, weight and width of the first RBF neuron, respectively, and x_k1 and y_dk1 are the input vector and desired output of sample k_1, respectively;
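A minimal sketch of this initialization step (selecting sample k_1 and setting c_1, w_1 and σ_1 as above); the helper name and array layout are illustrative assumptions:

```python
import numpy as np

def init_first_neuron(X, yd):
    """Create the first RBF neuron from the sample with the largest absolute expected output."""
    yd = np.asarray(yd, dtype=float)
    k1 = int(np.argmax(np.abs(yd)))                      # sample k_1, eq. (2)
    centers = np.asarray(X, dtype=float)[k1][None, :]    # c_1 = x_k1
    weights = np.array([yd[k1]])                         # w_1 = y_dk1 (network output is 0 with no neurons)
    widths = np.array([1.0])                             # sigma_1 = 1
    return centers, widths, weights
```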
third, adjust the network parameters using a second-order learning algorithm, calculated as follows:
Ψ(η+1) = Ψ(η) - (H(η) + λI(η))^{-1} Ω(η)   (6)
where η is the iteration step of the parameter adjustment (the number of iterations is set to 50 in the experiments), H is the quasi-Hessian matrix, λ is the learning rate (0.01 in the experiments), I is the identity matrix, Ω is the gradient vector, and Ψ denotes all network parameters to be adjusted:
Ψ(η) = [c_1(η), σ_1(η), w_1(η)]   (7)
to reduce the computational complexity, the quasi-Hessian matrix H is computed as the sum of P quasi-Hessian sub-matrices h_p, and the gradient vector Ω as the sum of P gradient sub-vectors g_p:
H(η) = Σ_{p=1}^{P} h_p(η)   (8)
Ω(η) = Σ_{p=1}^{P} g_p(η)   (9)
at the η-th parameter adjustment, the p-th quasi-Hessian sub-matrix h_p(η) and gradient sub-vector g_p(η) are calculated as follows:
where e_p(η) is the difference between the predicted output y_p(η) of the p-th sample at the η-th adjustment and its desired output y_dp, and j_p(η) is the Jacobian vector, calculated as follows:
e_p(η) = y_p(η) - y_dp   (12)
using the chain rule, the elements of the Jacobian vector j_p(η) are calculated as follows:
where x_p1, x_p2, x_p3 and x_p4 are the 4 components of the input vector of the p-th sample, c_1(η) = (c_11(η), c_12(η), c_13(η), c_14(η))^T is the center vector at the η-th adjustment, w_1(η) and σ_1(η) are the weight and width at the η-th adjustment, and y_p(η) is the network output at the η-th adjustment;
fourth, calculate the error vector, find the position of the error peak point, add a new (the l-th) RBF neuron at that position, and adjust the network parameters using the second-order learning algorithm of the third step;
for all training-set samples, the error vector is calculated as follows:
e = [e_1, e_2, ..., e_p, ..., e_P]^T   (20)
the error value for the p-th sample is calculated as follows:
e_p = y_p - y_dp   (21)
where y_dp and y_p are the expected output and the network output of the p-th sample, respectively; the position of the current error peak point is then found as:
k = arg max_p ||e_p||   (22)
a new (the l-th) RBF neuron is added based on the information of the k-th sample, with the center vector c_l and output weight w_l of the new RBF neuron set as follows:
c_l = x_k   (23)
w_l = y_dk - y_k   (24)
where x_k is the input vector of the k-th sample, and y_dk and y_k are the expected output and the network output of that sample, respectively;
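A minimal sketch of this error-peak step, again assuming the Gaussian basis Φ_j(x) = exp(-||x - c_j||^2/σ_j^2) for computing the network outputs (the basis expression is not reproduced above); names are illustrative:

```python
import numpy as np

def add_neuron_at_error_peak(X, yd, centers, widths, weights):
    """Add a new (l-th) RBF neuron at the position of the current error peak."""
    X = np.asarray(X, dtype=float)
    centers = np.asarray(centers, dtype=float)
    widths = np.asarray(widths, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # network outputs under the assumed Gaussian basis
    y = np.array([float(weights @ np.exp(-np.sum((centers - x) ** 2, axis=1) / widths ** 2))
                  for x in X])
    e = y - np.asarray(yd, dtype=float)          # e_p = y_p - y_dp
    k = int(np.argmax(np.abs(e)))                # position of the current error peak
    centers = np.vstack([centers, X[k]])         # c_l = x_k
    weights = np.append(weights, yd[k] - y[k])   # w_l = y_dk - y_k
    widths = np.append(widths, 1.0)              # placeholder; sigma_l is set from the nearest center below
    return centers, widths, weights, k
```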
in order to avoid redundancy of the network structure, the existing RBF neurons should have little influence on the newly added neuron; this influence is considered small when the following relationship is satisfied:
c_min = arg min( dist(c_l, c_{j≠l}) )   (26)
where c_min is the center vector of the neuron closest to the l-th RBF neuron; the following relationship can then be obtained:
σ_l ≤ 0.7 ||c_l - c_min||   (27)
in the experiments, σ_l = 0.7 ||c_l - c_min|| is used;
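A minimal sketch of this width-setting rule, with illustrative names; the Euclidean distance is assumed for dist(·,·):

```python
import numpy as np

def set_new_width(centers, new_index):
    """Width of the newly added neuron: sigma_l = 0.7 * ||c_l - c_min||."""
    centers = np.asarray(centers, dtype=float)
    c_l = centers[new_index]
    others = np.delete(centers, new_index, axis=0)
    if len(others) == 0:
        return 1.0                                               # only one neuron so far; keep the initial width
    d_min = float(np.min(np.linalg.norm(others - c_l, axis=1)))  # distance to the nearest center c_min
    return 0.7 * d_min                                           # sigma_l = 0.7 * ||c_l - c_min||
```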
then, the update formula (6) of the third step is used to adjust the network parameters;
if the number of RBF neurons reaches J_max or the network learning precision reaches E_0, the design of the RBF neural network and the establishment of the prediction model are complete; in the experiments, J_max = 10 and E_0 = 0.0001; the learning precision of the network is measured by the mean square error (MSE), calculated as follows:
MSE = (1/P) Σ_{p=1}^{P} (y_dp - y_p)^2   (28)
where y_dp and y_p are the expected output and the network output of the p-th sample, respectively, and P is the number of training samples.
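A minimal sketch of the stopping check with the experimental values J_max = 10 and E_0 = 0.0001; function names are illustrative:

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean square error used as the learning-precision measure."""
    y_pred, y_true = np.asarray(y_pred, dtype=float), np.asarray(y_true, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

def design_finished(n_neurons, y_pred, y_true, j_max=10, e0=1e-4):
    """Stop growing the network once J_max neurons are reached or the MSE drops to E_0."""
    return n_neurons >= j_max or mse(y_pred, y_true) <= e0
```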
(4) ammonia (NH3) concentration prediction;
the test sample data are taken as the input of the RBF neural network prediction model, and the output of the model is the predicted ammonia (NH3) concentration. The prediction accuracy is quantitatively evaluated using the root mean square error (RMSE) and the mean absolute percentage error (MAPE), calculated as follows:
where y_dm and y_m are the expected output and the network output of the m-th sample, respectively, and M is the number of test samples.
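The RMSE and MAPE expressions themselves are not reproduced above; the sketch below uses their standard definitions, which is an assumption of this example:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root mean square error over the M test samples (standard definition assumed)."""
    y_pred, y_true = np.asarray(y_pred, dtype=float), np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_pred, y_true):
    """Mean absolute percentage error in percent (standard definition assumed)."""
    y_pred, y_true = np.asarray(y_pred, dtype=float), np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```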
The invention has the following obvious advantages and beneficial effects:
1. Based on the good nonlinear mapping capability of the RBF neural network, the invention establishes a stable and effective ammonia (NH3) concentration prediction model that can accurately predict the ammonia concentration 60 seconds into the future, which is of great significance for controlling ammonia emission during household garbage collection.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a RBF neural network architecture;
FIG. 3 is a test result diagram of a prediction method;
FIG. 4 is a test error graph of the prediction method.
Detailed Description
The method comprises the steps of establishing an RBF neural network model for ammonia concentration prediction by utilizing a training data set; and verifying the accuracy of the future ammonia concentration predicted value output by the RBF neural network prediction model by using the test data set.
As an example, the effectiveness of the proposed method is verified using data from a garbage sorting station in Beijing: 3,000 groups of data collected from June 15 to June 16, 2021 are selected for the experiments, of which the first 2,100 groups are used as training data and the remaining 900 groups as test data.
(1) Based on the 2,100 groups of training data, the NH3 concentrations at times t-3, t-2, t-1 and t are selected as the model input and the NH3 concentration at time t+2 as the model output, and the RBF neural network prediction model is established;
(2) For the 900 groups of test data, NH3 concentration prediction is carried out with the RBF neural network prediction model. The prediction results are shown in FIG. 3 (X-axis: test sample number, one unit per sample; Y-axis: NH3 concentration in ppm; the black line is the actual NH3 concentration and the black points are the predicted values of the RBF neural network), and the prediction error is shown in FIG. 4 (X-axis: test sample number, one unit per sample; Y-axis: NH3 concentration prediction error in ppm);
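A minimal plotting sketch that reproduces the layout described for FIG. 3 and FIG. 4, assuming matplotlib is available; the array names are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_test_results(y_true, y_pred):
    """Actual vs. predicted NH3 concentration (top) and prediction error (bottom)."""
    idx = np.arange(len(y_true))
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
    ax1.plot(idx, y_true, "k-", label="actual NH3 concentration (ppm)")
    ax1.plot(idx, y_pred, "k.", label="RBF prediction (ppm)")
    ax1.set_xlabel("test sample number")
    ax1.set_ylabel("NH3 concentration (ppm)")
    ax1.legend()
    ax2.plot(idx, np.asarray(y_pred) - np.asarray(y_true), "k-")
    ax2.set_xlabel("test sample number")
    ax2.set_ylabel("prediction error (ppm)")
    fig.tight_layout()
    plt.show()
```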
(3) The prediction accuracy is quantitatively evaluated using the root mean square error (RMSE) and the mean absolute percentage error (MAPE), giving RMSE = 0.0269 and MAPE = 2.8661%.
It should be noted that the above-mentioned embodiment illustrates rather than limits the invention; it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the spirit and scope of the invention as defined by the appended claims.