Vehicle base short-term composite prediction method based on prediction model


1. A vehicle base short-term composite prediction method based on a prediction model, characterized by comprising the following steps: step 1, constructing a prediction model; step 2, predicting the heat load and the electric load of a vehicle base by using the prediction model; and step 3, carrying out a simulation experiment to verify the accuracy of the prediction model.

2. The predictive model-based vehicle base short-term composite prediction method of claim 1, wherein: the prediction model is a seasonal autoregressive moving average model SARIMA, and the construction and prediction method comprises the following steps:

step 1, judging whether the sequence is stationary by performing an ADF (Augmented Dickey-Fuller) test and an autocorrelation test;

step 2, if the sequence is not stationary, performing detrending and differencing to make it a stationary sequence; otherwise, performing no differencing;

step 3, determining the order of the model: preliminarily estimating the model parameters from the autocorrelation and partial autocorrelation plots, and accurately determining the order according to the model evaluation indices; carrying out a significance test on the order-determined model to verify whether the residual is a white noise sequence; if so, predicting with the model constructed in steps 4 to 6; if not, accurately re-determining the order of the model to generate a new order-determined model until the significance test is passed;

step 4, constructing an autoregressive moving average model ARMA, which is a mixture of the autoregressive model AR and the moving average model MA; the autoregressive moving average model is denoted ARMA(p, q), with the expression:

x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + … + φ_p x_{t-p} + u_t + θ_1 u_{t-1} + θ_2 u_{t-2} + … + θ_q u_{t-q}

where p is the autoregressive order; q is the moving average order; x_t, x_{t-1}, x_{t-2}, …, x_{t-p} are the values at different time points; φ_1, φ_2, …, φ_p are the autoregressive coefficients; θ_1, θ_2, …, θ_q are the moving average coefficients; and u_t denotes white noise, i.e. the random fluctuation of the values in the time series;

step 5, the ARMA model can only be applied to a stationary time series and cannot handle non-stationary data, whose mean and variance fluctuate over time; therefore, the non-stationary data must be differenced, that is, the time-series data are subtracted by their own lagged values to generate a differenced time series; the first-order difference operator is defined as:

x_t − x_{t-1} = (1 − B) x_t

where B is the lag (backshift) operator, B y_t = y_{t-1}; for seasonal data, a seasonal difference operator is used:

x_t − x_{t-S} = (1 − B^S) x_t

wherein S represents a period of seasonal data;

step 6, the SARIMA model is also written SARIMA(p, d, q)(P, D, Q)_S; the model applies a d-order trend difference and a D-order seasonal difference with period S to eliminate the trend and the periodicity in the time series, so as to obtain a stationary time series z_t, expressed as: z_t = (1 − B)^d (1 − B^S)^D x_t;

The stationary sequence z_t can then be modeled as a stationary ARMA sequence, with the expression:

φ_p(B) Φ_P(B^S) z_t = θ_q(B) Θ_Q(B^S) u_t

where φ_p(B) = (1 − φ_1 B − φ_2 B^2 − … − φ_p B^p) and θ_q(B) = (1 − θ_1 B − θ_2 B^2 − … − θ_q B^q) denote the non-seasonal autoregressive and non-seasonal moving average polynomials respectively, p and q are the orders of these two polynomials, and d is the order of the trend difference; Φ_P(B^S) = (1 − Φ_1 B^S − Φ_2 B^{2S} − … − Φ_P B^{PS}) and Θ_Q(B^S) = (1 − Θ_1 B^S − Θ_2 B^{2S} − … − Θ_Q B^{QS}) are the seasonal autoregressive and seasonal moving average polynomials of orders P and Q respectively, D denotes the order of the seasonal difference, and S is the period of the time series.

3. The predictive model-based vehicle base short-term composite prediction method of claim 1, wherein: the prediction model is an LSTM neural network model, and the construction and prediction method comprises the following steps:

step 1, the LSTM neural network model consists of four main elements: an input gate, a forget gate, an output gate and a cell state; let the input sequence be (x_1, x_2, …, x_t) and the hidden layer states be (h_1, h_2, …, h_t); then at time t:

f_t = σ(W_{hf} · [h_{t-1}, x_t] + b_f)

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)

C_t = f_t · C_{t-1} + i_t · tanh(W_C · [h_{t-1}, x_t] + b_C)

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)

h_t = o_t · tanh(C_t)

where f_t, i_t and o_t are the forget gate, input gate and output gate respectively, C_t denotes the cell state, σ and tanh are the sigmoid and tanh activation functions respectively, and W and b denote the weight and bias matrices respectively;

step 2, the prediction process of the LSTM neural network model is as follows:

step 1a, determining input and output data of a model, and carrying out normalization processing on the input and output data;

step 1b, dividing the data set into a training set and a test set, determining the input and output layer nodes of the model according to the dimensions of the input and output data, initializing the hidden layer nodes, the dropout coefficient, the required error precision and the input gate, forget gate and output gate parameters of the model, and initializing the network weights with a uniform distribution;

step 1c, fitting the LSTM model: propagating forward through each gate, back-propagating the error with the root mean square error RMSE as the objective function and BPTT as the core algorithm, updating the weights of each gate, and iterating until the maximum number of iterations is reached or the error meets the requirement, at which point training of the model is terminated;

and step 1d, calculating the error indices: performing inverse normalization on the test fitting values, comparing them with the true values, and calculating the mean absolute error MAE and the mean absolute percentage error MAPE.

4. The predictive model-based vehicle base short-term composite prediction method of claim 1, wherein: the prediction model is a hybrid model comprising a SARIMA-SADE-SVR model and a SARIMA-LSTM model.

5. The predictive model-based vehicle base short-term composite prediction method of claim 4, wherein: the method for constructing and predicting the SARIMA-SADE-SVR model comprises the following steps:

step 1, establishing SARIMA and SADE-SVR prediction models respectively, fitting the vehicle base load data x_i, and obtaining the fitted values x_i^(1) and x_i^(2) respectively;

step 2, constructing the optimization objective (fitness) function and solving for the weights ω_1 and ω_2 with the SADE algorithm, where n is the number of data points to be fitted and the constraint is ω_1 + ω_2 = 1;

step 3, predicting with the SARIMA and SADE-SVR models to obtain y_m^(1) and y_m^(2) respectively, and carrying out a weighted summation of the prediction results with the obtained ω_1 and ω_2 to obtain the final prediction result.

6. The predictive model-based vehicle base short-term composite prediction method of claim 4, wherein: the method for constructing and predicting the SARIMA-LSTM model comprises the following steps:

step 1, modeling the original load data of the vehicle base with the SARIMA model to obtain the linear component L_t and the residual e_t;

step 2, fitting and predicting the nonlinear residual e_t obtained in step 1 with an LSTM model to obtain the nonlinear component prediction N_t;

and step 3, superimposing the two prediction results to obtain the final prediction result.

Background

In a given area, the cooling, heating and power loads generally differ at different times under the influence of various factors. At the same time, because the regional systems have strong inertia, the loads at different times are not independent of each other: the load at the current time is always related to the past load. Within a short time horizon, the user's demand for load usually appears as a random fluctuation around the past load. It can be said that the load at the next prediction time is basically determined by the past load, with its random variation also taken into account. Based on this analysis, predicting the load with a time series analysis method is an effective approach. Considering that the regional load has a certain daily periodicity, the invention predicts the vehicle base load based on a seasonal autoregressive moving average (SARIMA) model.

Disclosure of Invention

The technical problem the invention aims to solve is to provide, in view of the defects of the background art, a vehicle base short-term composite prediction method based on a prediction model for predicting the load of a vehicle base.

The invention adopts the following technical scheme for solving the technical problems:

the vehicle base short-term composite prediction method based on the prediction model comprises the following steps: step 1, constructing a prediction model, and step 2, predicting the heat load and the electric load of a vehicle base by using the prediction model; and 3, carrying out a simulation experiment to verify the accuracy of the prediction model.

Further, the prediction model is a seasonal autoregressive moving average model SARIMA, and the construction and prediction method comprises the following steps:

step 1, judging whether the sequence is stationary by performing an ADF (Augmented Dickey-Fuller) test and an autocorrelation test;

step 2, if the sequence is not stationary, performing detrending and differencing to make it a stationary sequence; otherwise, performing no differencing;

step 3, determining the order of the model: preliminarily estimating the model parameters from the autocorrelation and partial autocorrelation plots, and accurately determining the order according to the model evaluation indices; carrying out a significance test on the order-determined model to verify whether the residual is a white noise sequence; if so, predicting with the model constructed in steps 4 to 6; if not, accurately re-determining the order of the model to generate a new order-determined model until the significance test is passed;

step 4, constructing an autoregressive moving average model ARMA, which is a mixture of the autoregressive model AR and the moving average model MA; the autoregressive moving average model is denoted ARMA(p, q), with the expression:

x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + … + φ_p x_{t-p} + u_t + θ_1 u_{t-1} + θ_2 u_{t-2} + … + θ_q u_{t-q}

where p is the autoregressive order; q is the moving average order; x_t, x_{t-1}, x_{t-2}, …, x_{t-p} are the values at different time points; φ_1, φ_2, …, φ_p are the autoregressive coefficients; θ_1, θ_2, …, θ_q are the moving average coefficients; and u_t denotes white noise, i.e. the random fluctuation of the values in the time series;

step 5, the ARMA model can only be applied to a stationary time series and cannot handle non-stationary data, whose mean and variance fluctuate over time; therefore, the non-stationary data must be differenced, that is, the time-series data are subtracted by their own lagged values to generate a differenced time series; the first-order difference operator is defined as:

x_t − x_{t-1} = (1 − B) x_t

where B is the lag (backshift) operator, B y_t = y_{t-1}; for seasonal data, a seasonal difference operator is used:

x_t − x_{t-S} = (1 − B^S) x_t

wherein S represents a period of seasonal data;

step 6, the SARIMA model is also written SARIMA(p, d, q)(P, D, Q)_S; the model applies a d-order trend difference and a D-order seasonal difference with period S to eliminate the trend and the periodicity in the time series, so as to obtain a stationary time series z_t, expressed as: z_t = (1 − B)^d (1 − B^S)^D x_t;

The stationary sequence z_t can then be modeled as a stationary ARMA sequence, with the expression:

φ_p(B) Φ_P(B^S) z_t = θ_q(B) Θ_Q(B^S) u_t

where φ_p(B) = (1 − φ_1 B − φ_2 B^2 − … − φ_p B^p) and θ_q(B) = (1 − θ_1 B − θ_2 B^2 − … − θ_q B^q) denote the non-seasonal autoregressive and non-seasonal moving average polynomials respectively, p and q are the orders of these two polynomials, and d is the order of the trend difference; Φ_P(B^S) = (1 − Φ_1 B^S − Φ_2 B^{2S} − … − Φ_P B^{PS}) and Θ_Q(B^S) = (1 − Θ_1 B^S − Θ_2 B^{2S} − … − Θ_Q B^{QS}) are the seasonal autoregressive and seasonal moving average polynomials of orders P and Q respectively, D denotes the order of the seasonal difference, and S is the period of the time series.

Further, the prediction model is an LSTM neural network model, and the construction and prediction method comprises the following steps:

step 1, the LSTM neural network model consists of four main elements: an input gate, a forget gate, an output gate and a cell state; let the input sequence be (x_1, x_2, …, x_t) and the hidden layer states be (h_1, h_2, …, h_t); then at time t:

f_t = σ(W_{hf} · [h_{t-1}, x_t] + b_f)

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)

C_t = f_t · C_{t-1} + i_t · tanh(W_C · [h_{t-1}, x_t] + b_C)

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)

h_t = o_t · tanh(C_t)

where f_t, i_t and o_t are the forget gate, input gate and output gate respectively, C_t denotes the cell state, σ and tanh are the sigmoid and tanh activation functions respectively, and W and b denote the weight and bias matrices respectively;

step 2, the prediction process of the LSTM neural network model is as follows:

step 1a, determining input and output data of a model, and carrying out normalization processing on the input and output data;

step 1b, dividing the data set into a training set and a test set, determining the input and output layer nodes of the model according to the dimensions of the input and output data, initializing the hidden layer nodes, the dropout coefficient, the required error precision and the input gate, forget gate and output gate parameters of the model, and initializing the network weights with a uniform distribution;

step 1c, fitting the LSTM model: propagating forward through each gate, back-propagating the error with the root mean square error RMSE as the objective function and BPTT as the core algorithm, updating the weights of each gate, and iterating until the maximum number of iterations is reached or the error meets the requirement, at which point training of the model is terminated;

and step 1d, calculating the error indices: performing inverse normalization on the test fitting values, comparing them with the true values, and calculating the mean absolute error MAE and the mean absolute percentage error MAPE.

Further, the prediction model is a hybrid model, including a SARIMA-SADE-SVR model and a SARIMA-LSTM model.

Further, the method for constructing and predicting the SARIMA-SADE-SVR model comprises the following steps:

step 1, establishing SARIMA and SADE-SVR prediction models respectively, fitting the vehicle base load data x_i, and obtaining the fitted values x_i^(1) and x_i^(2) respectively;

step 2, constructing the optimization objective (fitness) function and solving for the weights ω_1 and ω_2 with the SADE algorithm, where n is the number of data points to be fitted and the constraint is ω_1 + ω_2 = 1.

Step 3, predicting with the SARIMA and SADE-SVR models to obtain y_m^(1) and y_m^(2) respectively, and carrying out a weighted summation of the prediction results with the obtained ω_1 and ω_2 to obtain the final prediction result.

Further, the method for constructing and predicting the SARIMA-LSTM model comprises the following steps:

step 1, modeling the original load data of the vehicle base with the SARIMA model to obtain the linear component L_t and the residual e_t;

step 2, fitting and predicting the nonlinear residual e_t obtained in step 1 with an LSTM model to obtain the nonlinear component prediction N_t;

and step 3, superimposing the two prediction results to obtain the final prediction result.

Compared with the prior art, the invention adopting the technical scheme has the following technical effects:

Aiming at the problem that the SADE-SVR model only considers the relationship between the influencing factors and the vehicle base load data without considering the continuity of the time series, the invention adopts the SARIMA model and the LSTM model to predict the cooling, heating and power loads of the vehicle base. The loads are first predicted with the classical SARIMA and LSTM models; the examples show that the accuracy of the SARIMA and LSTM models is higher than that of the SADE-SVR model in 1 h-ahead load prediction, but lower than that of the SADE-SVR model in 24 h-ahead load prediction. Since the load data contain both linear and nonlinear components, a combination of a linear model and a nonlinear model is considered for prediction: (1) a SARIMA-SADE-SVR hybrid model is established in a parallel manner; (2) a SARIMA-LSTM hybrid model is established in a series manner. The prediction results show that in 1 h-ahead load prediction the SARIMA-LSTM model has the highest prediction accuracy, while in 24 h-ahead load prediction the SADE-SVR model has higher prediction accuracy than the SARIMA-SADE-SVR hybrid model.

Drawings

FIG. 1 is a flow chart of load prediction for SARIMA;

FIG. 2 is a graph of raw data autocorrelation and partial autocorrelation;

FIG. 3 is a graph of autocorrelation and partial autocorrelation after differentiation of raw data;

FIG. 4 is a diagram of model residual inspection;

FIG. 5a is a diagram of the results of different scale cooling load predictions for SARIMA;

FIG. 5b is a diagram of SARIMA different scale cold load prediction error;

FIG. 5c is a graph of the results of SARIMA different scale thermal load predictions;

FIG. 5d is a graph of SARIMA thermal load prediction error at different scales;

FIG. 5e is a diagram of the electrical load prediction results of SARIMA at different scales;

FIG. 5f is a graph of SARIMA different scale electrical load prediction error;

FIG. 6 is a schematic diagram of a network architecture of the RNN prediction method;

FIG. 7 is a diagram of the structure of LSTMs;

FIG. 8 is a flow chart of LSTM load prediction;

FIG. 9a is a diagram of the LSTM cooling load prediction results at different scales;

FIG. 9b is a diagram of the LSTM cooling load prediction error at different scales;

FIG. 9c is a diagram of the LSTM thermal load prediction results at different scales;

FIG. 9d is a diagram of the LSTM thermal load prediction error at different scales;

FIG. 9e is a diagram of the LSTM electrical load prediction results at different scales;

FIG. 9f is a diagram of the LSTM electrical load prediction error at different scales;

FIG. 10 is a diagram of a SARIMA-SADE-SVR mixture model structure;

FIG. 11 is a flow chart of modeling a conventional hybrid model;

FIG. 12a is a comprehensive comparison diagram of the cooling load prediction results;

FIG. 12b is a comprehensive comparison diagram of the cooling load prediction errors;

FIG. 12c is a comprehensive comparison diagram of the thermal load prediction results;

FIG. 12d is a comprehensive comparison diagram of the thermal load prediction errors;

FIG. 12e is a comprehensive comparison diagram of the electrical load prediction results;

FIG. 12f is a comprehensive comparison diagram of the electrical load prediction errors.

Detailed Description

The technical scheme of the invention is further explained in detail by combining the attached drawings:

in the description of the present invention, it is to be understood that the terms "left side", "right side", "upper part", "lower part", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and that "first", "second", etc., do not represent an important degree of the component parts, and thus are not to be construed as limiting the present invention. The specific dimensions used in the present example are only for illustrating the technical solution and do not limit the scope of protection of the present invention.

The vehicle base short-term composite prediction method based on the prediction model comprises the following steps: step 1, constructing a prediction model; step 2, predicting the heat load and the electric load of a vehicle base by using the prediction model; and step 3, carrying out a simulation experiment to verify the accuracy of the prediction model.

This example provides three prediction models, namely a seasonal autoregressive moving average model SARIMA, an LSTM neural network model, and a hybrid model (divided into a SARIMA-SADE-SVR model and a SARIMA-LSTM model).

The seasonal autoregressive moving average model SARIMA is as follows. Before describing SARIMA, it is first necessary to describe the autoregressive moving average model (ARMA), which is a mixture of the autoregressive model (AR) and the moving average model (MA). The AR(p) model aims to model the current observation from the previous behavior of a given process, and uses the autocorrelation to build a regression equation containing the previous and current behavior for prediction. The MA(q) model is a linear regression of the current value of the time series on the current and previous white-noise error terms; the fluctuation with respect to past time is a weighted sum of a white noise series.
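The relationship between the AR, MA and ARMA forms can be illustrated with a short sketch. This is a minimal illustration assuming the statsmodels library is available; the simulated series and the ARMA(1, 1) order are stand-ins, not values taken from the invention.

```python
# Minimal sketch: simulate and fit an ARMA(p, q) model as a special case of ARIMA.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate a stationary ARMA(1, 1) series as a stand-in for load data.
ar = np.array([1, -0.5])   # (1 - 0.5 B) on the AR side
ma = np.array([1, 0.3])    # (1 + 0.3 B) on the MA side
y = 10 + ArmaProcess(ar, ma).generate_sample(nsample=200)

# ARMA(p, q) is ARIMA(p, 0, q): no differencing.
fit = ARIMA(y, order=(1, 0, 1)).fit()
print(fit.params)              # estimated AR/MA coefficients and constant
print(fit.forecast(steps=4))   # short-horizon forecast
```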

The prediction model is a seasonal autoregressive moving average model SARIMA, as shown in FIGS. 1 to 5, and the construction and prediction method comprises the following steps:

step 1, performing an ADF test (Augmented Dickey-Fuller unit root test) and an autocorrelation test to judge whether the sequence is stationary;

step 2, if the sequence is not stationary, performing detrending and differencing to make it a stationary sequence; otherwise, performing no differencing;

step 3, determining the order of the model: preliminarily estimating the model parameters from the autocorrelation and partial autocorrelation plots, and accurately determining the order according to the model evaluation indices; carrying out a significance test on the order-determined model to verify whether the residual is a white noise sequence; if so, predicting with the model constructed in steps 4 to 6; if not, accurately re-determining the order of the model to generate a new order-determined model until the significance test is passed;

step 4, constructing an autoregressive moving average model ARMA, which is a mixture of the autoregressive model AR and the moving average model MA; the autoregressive moving average model is denoted ARMA(p, q), with the expression:

x_t = φ_1 x_{t-1} + φ_2 x_{t-2} + … + φ_p x_{t-p} + u_t + θ_1 u_{t-1} + θ_2 u_{t-2} + … + θ_q u_{t-q}

where p is the autoregressive order; q is the moving average order; x_t, x_{t-1}, x_{t-2}, …, x_{t-p} are the values at different time points; φ_1, φ_2, …, φ_p are the autoregressive coefficients; θ_1, θ_2, …, θ_q are the moving average coefficients; and u_t denotes white noise, i.e. the random fluctuation of the values in the time series;

step 5, the ARMA model can only be applied to a stationary time series and cannot handle non-stationary data, whose mean and variance fluctuate over time; therefore, the non-stationary data must be differenced, that is, the time-series data are subtracted by their own lagged values to generate a differenced time series; the first-order difference operator is defined as:

x_t − x_{t-1} = (1 − B) x_t

where B is the lag (backshift) operator, B y_t = y_{t-1}; for seasonal data, a seasonal difference operator is used:

x_t − x_{t-S} = (1 − B^S) x_t

wherein S represents a period of seasonal data;

step 6, the SARIMA model is also written SARIMA(p, d, q)(P, D, Q)_S; the model applies a d-order trend difference and a D-order seasonal difference with period S to eliminate the trend and the periodicity in the time series, so as to obtain a stationary time series z_t, expressed as: z_t = (1 − B)^d (1 − B^S)^D x_t;

The stationary sequence z_t can then be modeled as a stationary ARMA sequence, with the expression:

φ_p(B) Φ_P(B^S) z_t = θ_q(B) Θ_Q(B^S) u_t

where φ_p(B) = (1 − φ_1 B − φ_2 B^2 − … − φ_p B^p) and θ_q(B) = (1 − θ_1 B − θ_2 B^2 − … − θ_q B^q) denote the non-seasonal autoregressive and non-seasonal moving average polynomials respectively, p and q are the orders of these two polynomials, and d is the order of the trend difference; Φ_P(B^S) = (1 − Φ_1 B^S − Φ_2 B^{2S} − … − Φ_P B^{PS}) and Θ_Q(B^S) = (1 − Θ_1 B^S − Θ_2 B^{2S} − … − Θ_Q B^{QS}) are the seasonal autoregressive and seasonal moving average polynomials of orders P and Q respectively, D denotes the order of the seasonal difference, and S is the period of the time series.

Taking the cooling load of the vehicle base from July 1 to July 31 as an example, a time series with a period of 24 is established and its stationarity is judged first. In the MATLAB 2018b environment, autocorrelation and partial autocorrelation tests were performed on the cooling load using the autocorr and parcorr functions respectively. The test results are shown in FIG. 2.

It can be seen that most of the autocorrelation coefficients and partial autocorrelation coefficients lie outside the confidence interval, so the raw base cooling load data form a non-stationary time series. The most basic requirement of ARMA modeling is that the series be stationary, so the base load needs to be differenced. Considering that the base load has a certain daily periodicity, the original data are seasonally differenced with a period of 24 h. After one seasonal difference and one non-seasonal difference the original data have become stationary; the autocorrelation and partial autocorrelation of the differenced data are shown in FIG. 3.
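The same stationarity check can be sketched in Python (the embodiment uses MATLAB's autocorr and parcorr). The sketch below assumes statsmodels and pandas; the file name base_cooling_load_july.csv and the column name load are hypothetical placeholders.

```python
# Equivalent stationarity check in Python for an hourly cooling-load series.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, acf, pacf

load = pd.read_csv("base_cooling_load_july.csv")["load"]  # hypothetical file/column

print("ADF p-value (raw):", adfuller(load.dropna())[1])

# One seasonal difference (period 24 h) followed by one non-seasonal difference.
diffed = load.diff(24).diff(1).dropna()
print("ADF p-value (differenced):", adfuller(diffed)[1])

# ACF / PACF of the differenced series, corresponding to FIG. 3.
print(acf(diffed, nlags=48))
print(pacf(diffed, nlags=48))
```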

Analyzing the autocorrelation and partial autocorrelation plots of the differenced data, the autocorrelation and partial autocorrelation coefficients basically lie within the confidence interval after the differencing, but they are still clearly non-zero at k = 24, k = 48 and k = 96, so the values of p, q, P and Q are preliminarily considered to lie in the range 1 to 5. The order is then determined by the AIC and SC criteria: the model with the smallest AIC and SC values is selected, and the prediction model for the vehicle base cooling load is finally determined as SARIMA(1, 1, 1)(1, 1, 3)_24.
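A minimal sketch of this AIC-based order search, assuming statsmodels' SARIMAX and the placeholder load series from the previous sketch; the candidate range is trimmed from 1-5 to keep the sketch short.

```python
# Grid search over candidate SARIMA orders, keeping the fit with the smallest AIC.
import itertools
from statsmodels.tsa.statespace.sarimax import SARIMAX

best = (None, float("inf"))
for p, q, P, Q in itertools.product(range(1, 3), repeat=4):  # trimmed candidate range
    try:
        fit = SARIMAX(load, order=(p, 1, q),
                      seasonal_order=(P, 1, Q, 24),
                      enforce_stationarity=False,
                      enforce_invertibility=False).fit(disp=False)
        if fit.aic < best[1]:
            best = ((p, 1, q), (P, 1, Q, 24), fit.aic)
    except Exception:
        continue  # skip orders that fail to converge

print("selected order:", best[0], best[1], "AIC:", best[2])
```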

The residuals of the selected model are then checked, as shown in FIG. 4. It can be seen that the residual sequence has no correlation and is a white noise sequence: the useful information in the vehicle base cooling load data has been fully extracted, and what remains is a random disturbance sequence. The model can therefore be judged feasible.
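A sketch of the residual whiteness check, assuming statsmodels and the placeholder load series from the earlier sketches. The Ljung-Box test is used here as one common way to verify that the residuals behave like white noise; the embodiment itself reads this off the residual plots.

```python
# Fit the selected model and test whether its residuals are white noise.
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

sarima_fit = SARIMAX(load, order=(1, 1, 1),
                     seasonal_order=(1, 1, 3, 24)).fit(disp=False)

# Large p-values mean the residual autocorrelations are jointly insignificant.
lb = acorr_ljungbox(sarima_fit.resid, lags=[24, 48], return_df=True)
print(lb)
```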

According to the way the cooling load model was established, the heat load and electric load prediction models of the vehicle base for January are SARIMA(1, 0, 1)(1, 1, 1)_24 and SARIMA(2, 0, 0)(1, 1, 1)_24 respectively.

Simulation experiment

The cooling, heating and power load data to be predicted in this experiment are the same as in the previous chapter. The SARIMA(1, 1, 1)(1, 1, 3)_24, SARIMA(1, 0, 1)(1, 1, 1)_24 and SARIMA(2, 0, 0)(1, 1, 1)_24 models are used to predict the 24 h cooling, heating and power loads of the base on July 31, January 31 and January 31 respectively, and the prediction accuracy of SARIMA is tested at both the 1 h-ahead and the 24 h-ahead scales. For the SARIMA model, the 24 h-ahead load prediction is a rolling prediction based on single-step prediction, that is, the prediction result of each step is added to the prediction model for the next step. The 1 h-ahead and 24 h-ahead load prediction results of the vehicle base SARIMA cooling, heating and power prediction models are shown in FIG. 5 and Table 1.
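A minimal sketch of the rolling 24 h-ahead prediction just described, reusing the sarima_fit object from the previous sketch; feeding each one-step forecast back into the model state via append is an assumption about how the rolling scheme is implemented.

```python
# 24 h-ahead forecast built by rolling single-step forecasts forward.
import numpy as np

history_fit = sarima_fit
rolling_preds = []
for hour in range(24):
    step = history_fit.forecast(steps=1)          # one-step-ahead forecast
    rolling_preds.append(float(np.asarray(step)[0]))
    # Feed the forecast back in as if it were an observation for the next step.
    history_fit = history_fit.append([rolling_preds[-1]], refit=False)

print("24 h-ahead rolling forecast:", rolling_preds)
```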

TABLE 1 SARIMA model prediction results comparison Table

As can be seen from FIG. 5 and Table 1, SARIMA has a certain prediction capability, but when the data rise and fall steeply SARIMA does not predict well, and the accuracy of the 24 h-ahead prediction decreases as the step size increases. This is mainly because the 24 h-ahead prediction uses a recursive method in which each step is predicted on the basis of the previous predictions, so the prediction error accumulates.

Compared with the SADE-SVR model, the prediction accuracy of the SARIMA model is better in 1 h-ahead load prediction but lower in 24 h-ahead load prediction. Because the input data of the SVR algorithm used in this embodiment are the meteorological and time factors that affect the vehicle base load, the algorithm mainly mines the relationship between the load and the influencing factors, and its 24 h-ahead load prediction results are therefore the same as its 1 h-ahead results.

Although the SVR model can handle nonlinear problems, it mainly mines the connections between data items and does not consider the continuity of the load data, while the SARIMA model considers the time-series nature of the load but has no strong nonlinear processing capability. The long short-term memory neural network (LSTM) is an effective nonlinear recurrent neural network that can take both the temporal order and the nonlinear relations of the data into account, so this section uses the LSTM to predict the cooling and heating loads of the rail vehicle base.

A basic neural network can only pass weights forward and does not adequately express the association between different time steps, hence the recurrent neural network (RNN). The earliest recurrent network can be traced back to the Hopfield network proposed by John Hopfield in 1982. RNNs are mainly used for processing and predicting sequence data. In fully connected feed-forward and convolutional neural network models the network structure goes from the input layer to the hidden layer to the output layer, with full or partial connections between layers but no connections between the nodes within each layer. FIG. 6 is a schematic diagram of the network structure of the RNN prediction method. As shown in FIG. 6, x_t is the input of the RNN at time t, h_t is the RNN hidden state at time t, y_t is the RNN output at time t, and U, V and W are the weight parameter matrices of the RNN.

For any time t, the hidden state h_t at that time is computed from the input x_t at the current time and the hidden state h_{t-1} at the previous time.

The calculation formula is as follows:

h_t = f(U x_t + W h_{t-1} + b)

where f is the activation function of the RNN and b is the offset of the linear relationship.

Given the hidden state h_t at the current time, the output prediction y_t of the RNN at the current time is calculated as:

y_t = V h_t + C
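A minimal numpy sketch of the two recursions above; the dimensions, the tanh hidden activation and the linear output are illustrative assumptions.

```python
# Forward recursion of a simple RNN: h_t = f(U x_t + W h_{t-1} + b), y_t = V h_t + C.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 8, 1
U = rng.normal(scale=0.1, size=(n_hidden, n_in))
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
V = rng.normal(scale=0.1, size=(n_out, n_hidden))
b = np.zeros(n_hidden)
C = np.zeros(n_out)

def rnn_forward(xs):
    """Run the recursion over a sequence xs of shape (T, n_in); return outputs (T, n_out)."""
    h = np.zeros(n_hidden)
    ys = []
    for x in xs:
        h = np.tanh(U @ x + W @ h + b)   # hidden-state update
        ys.append(V @ h + C)             # output at time t
    return np.array(ys)

print(rnn_forward(rng.normal(size=(5, n_in))).shape)  # (5, 1)
```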

The RNN performs the same operation in a cyclic, repeated manner. This working mode greatly reduces the number of parameters to be learned, shortens the network training time and maintains accuracy. However, as the time interval increases, a simple recurrent neural network loses the ability to learn information that lies far in the past, i.e. the gradient vanishes.

In the mid-1990s, the German researchers Sepp Hochreiter and Juergen Schmidhuber proposed a recurrent network with long short-term memory units (LSTMs), i.e. the long short-term memory neural network, as a solution to the vanishing-gradient problem. In addition to the outer recursion, LSTMs have an inner recursion and self-loop that accumulate information. LSTMs help preserve the error that is propagated backwards; by keeping the error more constant they allow the recurrent network to continue learning over many more steps (over 1000) and thus to link information over long ranges. LSTMs hold information outside the normal flow of the recurrent network in gated cells. Such information can be stored in, written to, or read from a cell, much like data in computer memory. The cell decides what to store and when to allow reading, writing and deleting by means of gates that open and close (the forget gate, input gate and output gate). Unlike digital storage in a computer, these gates are analog: each consists of a neural network layer with a sigmoid activation applied through pointwise multiplication, with outputs in the range 0 to 1. Analog signals have the advantage of being differentiable, which suits back-propagation. These gates act on the signals they receive and, like the nodes of a neural network, filter information using their own sets of weights. These weights, like the weights that process the input and hidden states, are adjusted through the recurrent network's learning process; that is, the cells learn when to allow data to enter, leave or be deleted through an iterative process of guessing, back-propagating the error and adjusting the weights by gradient descent. The memory cell structure of the LSTM network is shown in FIG. 7. σ is the activation function of the three gates, usually a Sigmoid function, so the data passed through the gate activations have an output range of [0, 1].

The Sigmoid function is one of the most common activation functions in neural networks. Its range is [0, 1], so it maps variables to values between 0 and 1: a very large positive input yields an output approaching 1, indicating that the neuron is activated, while a very large negative input yields an output approaching 0, indicating that the neuron is not activated. In other words, whatever the neuron input, the output lies in [0, 1] under the Sigmoid activation, so when data are transmitted through the network no intermediate value becomes too large for the computer to process.

The LSTM neural network model is constructed and predicted by the following steps:

step 1, the LSTM neural network model consists of four main elements: an input gate, a forget gate, an output gate and a cell state; let the input sequence be (x_1, x_2, …, x_t) and the hidden layer states be (h_1, h_2, …, h_t); then at time t:

f_t = σ(W_{hf} · [h_{t-1}, x_t] + b_f)

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)

C_t = f_t · C_{t-1} + i_t · tanh(W_C · [h_{t-1}, x_t] + b_C)

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)

h_t = o_t · tanh(C_t)

where f_t, i_t and o_t are the forget gate, input gate and output gate respectively, C_t denotes the cell state, σ and tanh are the sigmoid and tanh activation functions respectively, and W and b denote the weight and bias matrices respectively;

step 2, the prediction process of the LSTM neural network model is as follows:

step 1a, determining input and output data of a model, and carrying out normalization processing on the input and output data;

step 1b, dividing the data set into a training set and a test set, determining the input and output layer nodes of the model according to the dimensions of the input and output data, initializing the hidden layer nodes, the dropout coefficient, the required error precision and the input gate, forget gate and output gate parameters of the model, and initializing the network weights with a uniform distribution;

step 1c, fitting the LSTM model: propagating forward through each gate, back-propagating the error with the root mean square error RMSE as the objective function and BPTT as the core algorithm, updating the weights of each gate, and iterating until the maximum number of iterations is reached or the error meets the requirement, at which point training of the model is terminated;

and step 1d, calculating the error indices: performing inverse normalization on the test fitting values, comparing them with the true values, and calculating the mean absolute error MAE and the mean absolute percentage error MAPE.

FIG. 8 shows the main modeling flow of LSTM load prediction, in which iterator refers to the number of loop iterations within one round of training and epoch denotes the total number of training rounds set for the model.
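A minimal sketch of steps 1a to 1d, using tensorflow.keras and scikit-learn as stand-in tooling. The file name base_load.csv, the 24-step input window, the train/test split ratio and the dropout rate are illustrative assumptions; the 120 epochs and 4 hidden nodes follow the settings quoted in the simulation experiment below.

```python
# Sketch of the LSTM prediction flow: normalize, split, fit, inverse-normalize, score.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error
from tensorflow import keras

series = pd.read_csv("base_load.csv")["load"].to_numpy().reshape(-1, 1)  # hypothetical file

# Step 1a: normalize input/output data.
scaler = MinMaxScaler()
scaled = scaler.fit_transform(series)

# Build supervised samples: previous 24 values -> next value.
window = 24
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])
y = scaled[window:]

# Step 1b: train/test split and network definition (4 hidden nodes, dropout as in the text).
split = int(0.9 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]
model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(4),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1),
])

# Step 1c: fit with a squared-error objective and gradient-based (BPTT) updates.
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=120, batch_size=32, verbose=0)

# Step 1d: inverse-normalize the test predictions and compute MAE / MAPE.
pred = scaler.inverse_transform(model.predict(X_test, verbose=0))
true = scaler.inverse_transform(y_test)
print("MAE:", mean_absolute_error(true, pred))
print("MAPE:", mean_absolute_percentage_error(true, pred))
```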

Simulation experiment

The experimental data are the same as those in Chapter IV; the cooling, heating and power loads of the vehicle base in July, January and January respectively are predicted. The historical cooling, heating and power load data of the base and the relevant influencing factors are taken as inputs, and the load data of the last day of each month are used for testing. The number of iterations of the LSTM model is set to 120 and the number of hidden-layer nodes of the LSTM is set to 4; the prediction results are shown in FIG. 9.

TABLE 2 comparison of LSTM model prediction results

From FIG. 9 and Table 2 it can be seen that the LSTM model also has good prediction capability; compared with the SARIMA model, its predicted data are closer to the original data. However, the model has a larger error than the SARIMA model in places: for example, in the cooling load prediction the original load data are very flat from the 5th to the 8th point, but the LSTM model does not show good prediction capability there and its predictions deviate considerably. Furthermore, the prediction error of the LSTM model also increases in 24 h-ahead load prediction, which is again related to the accumulation of error in the rolling prediction used by the LSTM at the longer time scale.

Compared with the SADE-SVR model, the LSTM model also has good prediction accuracy in 1 h-ahead load prediction and outperforms the SARIMA model. In 24 h-ahead load prediction, however, the accuracy of the LSTM is higher than that of the SARIMA model but still lower than that of the SADE-SVR model.

The hybrid models include the SARIMA-SADE-SVR model and the SARIMA-LSTM model.

A large number of experiments and studies show that a single model performs well on time-series problems with a single component. In reality, however, load data are rarely of a single component; they contain both linear and nonlinear components, and in that case using only a single model for prediction is often insufficient. Therefore the linear model SARIMA is mixed with the SADE-SVR model and with the LSTM model, both of which have strong nonlinear processing ability, and prediction is then performed.

Hybrid models are generally classified as series or parallel. The series type covers two cases: in one, the output of one model is used as the input of another model; in the other, the residual obtained by fitting one model is used as the input of another model, and the prediction result of the first model is combined with the residual prediction of the second model to obtain the final prediction. Parallel combined prediction assigns weights to the two models and adds the weighted prediction results of the two models to obtain the final prediction.

Since this chapter uses a combination of a linear model and a nonlinear model, it makes no sense to reprocess with a nonlinear model the output of a linear model that can handle only linear components, while the output of the nonlinear model, which contains nonlinear components, is difficult for the linear model to process further. Therefore the first series-type hybrid model is not applicable to the experiments here.

In addition, for the SARIMA-SADE-SVR hybrid model, the SADE-SVR model used here takes the vehicle base load influencing factors as input and predicts from the relationship between the influencing factors and the load; the second series-type hybrid model cannot guarantee that the residual of the SARIMA model still retains a correlation with the influencing factors, so the SARIMA-SADE-SVR hybrid model is predicted in the parallel hybrid mode.

For the SARIMA-LSTM model, the experiments need to study 24 h-ahead load prediction; both of these time-series prediction models are generally trained with a single-step prediction method, and it is difficult to train a long 24 h-ahead series, so the second series-type hybrid mode is used for prediction.

Referring to fig. 10, the method steps for constructing and predicting the SARIMA-SADE-SVR model include:

step 1, establishing SARIMA and SADE-SVR prediction models respectively, fitting the vehicle base load data x_i, and obtaining the fitted values x_i^(1) and x_i^(2) respectively;

Step 2, constructing an optimized objective function fit, and solving omega by using an SADE algorithm1And omega2

Where n is the number of data to be fitted and the constraint is ω12=1。

Step 3, predicting with the SARIMA and SADE-SVR models to obtain y_m^(1) and y_m^(2) respectively, and carrying out a weighted summation of the prediction results with the obtained ω_1 and ω_2 to obtain the final prediction result.
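A sketch of the parallel weighting step. The text does not reproduce the fitness formula, so a sum-of-squared-errors fitness is assumed here, and scipy's generic differential evolution stands in for the SADE algorithm; all arrays are illustrative.

```python
# Parallel hybrid: learn weights w1, w2 (w1 + w2 = 1) that best combine the two fitted series.
import numpy as np
from scipy.optimize import differential_evolution

x = np.array([100.0, 102.0, 98.0, 101.0])    # observed loads x_i
x1 = np.array([99.0, 103.0, 97.5, 100.0])    # SARIMA fitted values x_i^(1)
x2 = np.array([101.0, 101.0, 99.0, 102.0])   # SADE-SVR fitted values x_i^(2)

# With the constraint w1 + w2 = 1, optimize over w1 only (assumed SSE fitness).
def fitness(w):
    w1 = w[0]
    return np.sum((x - w1 * x1 - (1.0 - w1) * x2) ** 2)

w1 = differential_evolution(fitness, bounds=[(0.0, 1.0)], seed=0).x[0]
w2 = 1.0 - w1

y1, y2 = 99.5, 100.5                          # model predictions y_m^(1), y_m^(2)
print("weights:", w1, w2, "combined prediction:", w1 * y1 + w2 * y2)
```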

Referring to fig. 11, the method for constructing and predicting the SARIMA-LSTM model includes the steps of:

step 1, modeling the original load data of the vehicle base with the SARIMA model to obtain the linear component L_t and the residual e_t;

step 2, fitting and predicting the nonlinear residual e_t obtained in step 1 with an LSTM model to obtain the nonlinear component prediction N_t;

and step 3, superimposing the two prediction results to obtain the final prediction result.
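A compact sketch of the series hybrid under the assumptions of the earlier sketches: statsmodels' SARIMAX supplies the linear component, a trivial residual predictor stands in for the LSTM of step 2, and load is the placeholder series introduced earlier.

```python
# Series hybrid: linear forecast from SARIMA plus a forecast of its residuals.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

sarima = SARIMAX(load, order=(1, 1, 1), seasonal_order=(1, 1, 3, 24)).fit(disp=False)
residuals = np.asarray(sarima.resid)          # e_t, the remainder left to the nonlinear model

def residual_forecast(res, horizon):
    """Stand-in for the LSTM residual predictor: repeat the mean of recent residuals."""
    return np.full(horizon, res[-24:].mean())

horizon = 24
linear_part = np.asarray(sarima.forecast(steps=horizon))   # L_t from SARIMA
nonlinear_part = residual_forecast(residuals, horizon)     # N_t from the residual model
final_forecast = linear_part + nonlinear_part              # superimposed prediction
print(final_forecast)
```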

Simulation experiment and comparative analysis

The SARIMA-SADE-SVR and SARIMA-LSTM models are trained on and used to predict the cooling, heating and power load data of the vehicle base for July, January and January respectively; the prediction results are shown in FIG. 12 and Table 3, where they are also compared with the SADE-SVR, SARIMA and LSTM prediction results obtained earlier.

TABLE 3 comparison of the comprehensive indices

From the above figures and table it can be seen that in 1 h-ahead load prediction the hybrid models improve the prediction accuracy over the original models, and the SARIMA-LSTM model has the highest prediction accuracy for the 1 h-ahead cooling, heating and power loads. For 24 h-ahead load prediction the SADE-SVR model still has the highest accuracy; the hybrid models only improve the accuracy of the time-series prediction models and partly compensate for the accumulated error of rolling prediction. In practice, therefore, SARIMA-LSTM can be used for 1 h-ahead load prediction and the SADE-SVR model for 24 h-ahead load prediction.

Advantageous effects:

Aiming at the problem that the SADE-SVR model only considers the relationship between the influencing factors and the vehicle base load data without considering the continuity of the time series, the invention adopts the SARIMA model and the LSTM model to predict the cooling, heating and power loads of the vehicle base. The loads are first predicted with the classical SARIMA and LSTM models; the examples show that the accuracy of the SARIMA and LSTM models is higher than that of the SADE-SVR model in 1 h-ahead load prediction, but lower than that of the SADE-SVR model in 24 h-ahead load prediction. Since the load data contain both linear and nonlinear components, a combination of a linear model and a nonlinear model is considered for prediction: (1) a SARIMA-SADE-SVR hybrid model is established in a parallel manner; (2) a SARIMA-LSTM hybrid model is established in a series manner. The prediction results show that in 1 h-ahead load prediction the SARIMA-LSTM model has the highest prediction accuracy, while in 24 h-ahead load prediction the SADE-SVR model has higher prediction accuracy than the SARIMA-SADE-SVR hybrid model.

It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention. While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
