Unit commitment calculation method combined with deep learning
1. A unit commitment calculation method combined with deep learning, characterized by comprising the following steps:
step 1: preprocessing a large amount of historical data, specifically, using the MinMaxScaler method to normalize the historical data and using the K-means clustering algorithm to partition the historical data;
the MinMaxScaler normalization method is shown in formula (1):
x* = (x − X_min)/(X_max − X_min)  (1),
in the formula, x* is the normalized value; x is the data to be processed, namely the sample data; X_max and X_min are the maximum and minimum values of the sample data, respectively.
The K-means clustering algorithm is an iterative algorithm implemented by the following 4 steps:
(1) selecting K points as the initial clustering centers a_1, a_2, …, a_K;
(2) calculating the distance from each sample x_i to each of the K clustering centers, and assigning the sample to the class whose center is nearest;
(3) calculating the mean of all sample points in each of the K classes and using it as the clustering center for the next iteration;
(4) repeating steps (2) and (3) until the clustering centers no longer change or the maximum number of iterations is reached;
step 2: building a deep learning model whose input is the load forecast data and whose output is the unit start-stop states, and inputting the load forecast data to obtain the unit start-stop states;
step 3: substituting the unit start-stop states and the load forecast data into a unit commitment optimization model to obtain the unit output values; the unit commitment optimization model constructs an objective function with the goal of minimizing the power generation cost of the thermal power units, and takes the system node power balance constraint, the thermal power unit output upper and lower limit constraints, the thermal power unit ramping constraint, the thermal power unit start-up and shut-down time constraint and the transmission line power flow constraint as the constraint conditions.
2. The unit commitment calculation method combined with deep learning according to claim 1, wherein the deep learning model comprises three parts, namely an LSTM neural network layer, a Dropout layer and a fully connected layer; the input of the deep learning model is the load forecast data and the output is the unit start-stop states;
the LSTM neural network layer is a structural model improved from the RNN, into which a memory cell and a gate mechanism are introduced so that information from earlier time steps in the sequence can be utilized at the current time step; the LSTM neural network consists of 4 parts: an input gate i_t, an output gate o_t, a forget gate f_t and a memory cell C_t;
In the Dropout layer, Dropout means that during the training of the deep learning model, a certain proportion of neurons are randomly dropped each time the parameters are updated, which weakens the dependence of the network on particular local features and thereby improves the generalization capability of the model;
the fully connected layer uses a softmax activation function and serves as the multi-layer perceptron of the output layer; full connection means that every neuron of the current layer is connected to every neuron of the previous layer, and the extracted high-dimensional features are reduced in dimensionality; this layer is located at the end of the network, the number of units in the last layer is the same as the number of classes, and the layer is used together with the softmax activation function to classify the output features.
3. The unit commitment calculation method combined with deep learning according to claim 2, wherein the input gate i_t, the output gate o_t, the forget gate f_t and the memory cell C_t are defined as follows:
the forget gate f_t determines whether the information previously stored in the memory cell C_t is retained, and its output is expressed as shown in formula (1-1):
f_t = σ(W_fh·h_(t-1) + W_fx·x_t + b_f)  (1-1),
in the formula, h_(t-1) is the output of the hidden layer at time t-1; x_t is the input at the current time; σ is the sigmoid activation function; W_fh is the weight of the forget gate for h_(t-1); W_fx is the weight of the forget gate for x_t; b_f is the bias parameter of the forget gate;
the state updates of the input gate i_t and the memory cell C_t are shown in formulas (1-2), (1-3) and (1-4):
i_t = σ(W_ih·h_(t-1) + W_ix·x_t + b_i)  (1-2),
C̃_t = tanh(W_ch·h_(t-1) + W_cx·x_t + b_c)  (1-3),
C_t = f_t·C_(t-1) + i_t·C̃_t  (1-4),
in the formula, W_ih is the weight of the input gate for h_(t-1); W_ix is the weight of the input gate for x_t; b_i is the bias parameter of the input gate; C̃_t is the candidate state of the memory cell to be updated; tanh is the activation function used to generate C̃_t; W_ch is the weight of C̃_t for h_(t-1); W_cx is the weight of C̃_t for x_t; b_c is the bias parameter of C̃_t; C_t and C_(t-1) are the memory cell states at times t and t-1, respectively;
after the LSTM neural network layer updates the memory cell C_t, the output state is expressed as shown in formulas (1-5) and (1-6):
o_t = σ(W_oh·h_(t-1) + W_ox·x_t + b_o)  (1-5),
h_t = o_t·tanh(C_t)  (1-6),
in the formula, W_oh is the weight of the output gate for h_(t-1); W_ox is the weight of the output gate for x_t; b_o is the bias parameter of the output gate; h_t is the output of the hidden layer at the current time.
4. The unit commitment calculation method combined with deep learning according to claim 3, wherein the training process of the deep learning model comprises the following steps:
training the deep learning model with the Adam algorithm, wherein the training process comprises a forward propagation stage and a backward propagation stage; in forward propagation, the products of the input signals and their corresponding weights are first calculated, the activation function is then applied to the sum of the products, and an error is formed between the output result and the true value; the resulting error is then propagated backwards through the network, the gradient of the loss function with respect to each parameter is calculated, and the weight W and the bias b are updated according to the gradient descent method, as shown in formulas (2) and (3):
W = W − η·∂L(W, b)/∂W  (2),
b = b − η·∂L(W, b)/∂b  (3),
in the formula, η is the learning rate and L(W, b) is the loss function;
selecting the cross entropy error as the loss function, wherein the calculation formula is shown in formula (4):
L = −Σ_i y_i·log(ŷ_i)  (4),
in the formula, y_i is the actual label of the sample, and ŷ_i is the predicted value.
5. The unit commitment calculation method combined with deep learning according to claim 4, wherein in the training process of the deep learning model, the parameters are adjusted by the Adam algorithm, the mean square error is selected as the loss function, and the calculation formula is shown in formula (5):
L = (1/n)·Σ_i (y_i − ŷ_i)²  (5),
in the formula, y_i is the actual value of the data sample, and ŷ_i is the predicted value of the data sample.
6. The unit commitment calculation method combined with deep learning according to claim 1, wherein the objective function constructed in step 3 with the goal of minimizing the power generation cost of the thermal power units is shown in formula (6);
the system node power balance constraint, the thermal power unit output upper and lower limit constraints, the thermal power unit ramping constraint, the thermal power unit start-up and shut-down time constraint and the transmission line power flow constraint are represented as formulas (7) to (11), respectively:
u_g,t·P_g,min ≤ P_g,t ≤ u_g,t·P_g,max  (8),
wherein P_g,t is the output of thermal power unit g; C(P_g,t) represents the coal consumption cost of the unit and is a quadratic function that is processed by piecewise linearization; SU_g,t and SD_g,t are the start-up and shut-down costs of the unit; u_g,t represents the operating state of the unit; P_d,t is the system load in time period t; P_g,max and P_g,min are the upper and lower output limits of the conventional unit, respectively; x_mn is the reactance of line mn; X_on,g,t and X_off,g,t represent the time for which the unit has been continuously on and off; T_on,g and T_off,g represent the unit start-up and shut-down time limits; UR_g and DR_g are the up and down ramping limits; PL_l,t is the transmission power of the transmission line; PL_l,max is the maximum active transmission capacity of the transmission line; θ_m,t is the phase angle of node m.
Background
The unit commitment problem is a high-dimensional, non-convex, discrete and nonlinear mixed-integer optimization problem; it is NP-hard, and a theoretically optimal solution is difficult to find. The methods for solving the unit commitment problem fall mainly into three categories: (1) heuristic algorithms; (2) mathematical optimization methods such as dynamic programming, interior point methods and branch and bound; (3) intelligent optimization algorithms such as the genetic algorithm and the particle swarm algorithm. These methods have a solid theoretical basis and rigorous logical derivation, but their solving processes are complex and time-consuming. In contrast, a data-driven method can learn the mapping relation from a large amount of historical data through a deep learning model, which greatly simplifies the solving process.
Disclosure of Invention
The invention provides a unit commitment calculation method combined with deep learning, which comprises the following steps:
step 1: preprocessing a large amount of historical data, specifically, using the MinMaxScaler method to normalize the historical data and using the K-means clustering algorithm to partition the historical data;
the MinMaxScaler normalization method is shown in formula (1):
x* = (x − X_min)/(X_max − X_min)  (1),
in the formula, x* is the normalized value; x is the data to be processed, namely the sample data; X_max and X_min are the maximum and minimum values of the sample data, respectively.
The K-means clustering algorithm is an iterative algorithm implemented by the following 4 steps:
(1) selecting K points as the initial clustering centers a_1, a_2, …, a_K;
(2) calculating the distance from each sample x_i to each of the K clustering centers, and assigning the sample to the class whose center is nearest;
(3) calculating the mean of all sample points in each of the K classes and using it as the clustering center for the next iteration;
(4) repeating steps (2) and (3) until the clustering centers no longer change or the maximum number of iterations is reached;
step 2: building a deep learning model whose input is the load forecast data and whose output is the unit start-stop states, and inputting the load forecast data to obtain the unit start-stop states;
step 3: substituting the unit start-stop states and the load forecast data into a unit commitment optimization model to obtain the unit output values; the unit commitment optimization model constructs an objective function with the goal of minimizing the power generation cost of the thermal power units, and takes the system node power balance constraint, the thermal power unit output upper and lower limit constraints, the thermal power unit ramping constraint, the thermal power unit start-up and shut-down time constraint and the transmission line power flow constraint as the constraint conditions.
Preferably, the deep learning model comprises three parts, namely an LSTM neural network layer, a Dropout layer and a fully connected layer; the input of the deep learning model is the load forecast data and the output is the unit start-stop states;
the LSTM neural network layer is a structural model improved from the RNN, into which a memory cell and a gate mechanism are introduced so that information from earlier time steps in the sequence can be utilized at the current time step; the LSTM neural network consists of 4 parts: an input gate i_t, an output gate o_t, a forget gate f_t and a memory cell C_t;
In the Dropout layer, Dropout means that during the training of the deep learning model, a certain proportion of neurons are randomly dropped each time the parameters are updated, which weakens the dependence of the network on particular local features and thereby improves the generalization capability of the model;
the fully connected layer uses a softmax activation function and serves as the multi-layer perceptron of the output layer; full connection means that every neuron of the current layer is connected to every neuron of the previous layer, and the extracted high-dimensional features are reduced in dimensionality; this layer is located at the end of the network, the number of units in the last layer is the same as the number of classes, and the layer is used together with the softmax activation function to classify the output features.
Preferably, the input gate i_t, the output gate o_t, the forget gate f_t and the memory cell C_t are defined as follows:
the forget gate f_t determines whether the information previously stored in the memory cell C_t is retained, and its output is expressed as shown in formula (1-1):
f_t = σ(W_fh·h_(t-1) + W_fx·x_t + b_f)  (1-1),
in the formula, h_(t-1) is the output of the hidden layer at time t-1; x_t is the input at the current time; σ is the sigmoid activation function; W_fh is the weight of the forget gate for h_(t-1); W_fx is the weight of the forget gate for x_t; b_f is the bias parameter of the forget gate;
the state updates of the input gate i_t and the memory cell C_t are shown in formulas (1-2), (1-3) and (1-4):
i_t = σ(W_ih·h_(t-1) + W_ix·x_t + b_i)  (1-2),
C̃_t = tanh(W_ch·h_(t-1) + W_cx·x_t + b_c)  (1-3),
C_t = f_t·C_(t-1) + i_t·C̃_t  (1-4),
in the formula, W_ih is the weight of the input gate for h_(t-1); W_ix is the weight of the input gate for x_t; b_i is the bias parameter of the input gate; C̃_t is the candidate state of the memory cell to be updated; tanh is the activation function used to generate C̃_t; W_ch is the weight of C̃_t for h_(t-1); W_cx is the weight of C̃_t for x_t; b_c is the bias parameter of C̃_t; C_t and C_(t-1) are the memory cell states at times t and t-1, respectively;
after the LSTM neural network layer updates the memory cell C_t, the output state is expressed as shown in formulas (1-5) and (1-6):
o_t = σ(W_oh·h_(t-1) + W_ox·x_t + b_o)  (1-5),
h_t = o_t·tanh(C_t)  (1-6),
in the formula, W_oh is the weight of the output gate for h_(t-1); W_ox is the weight of the output gate for x_t; b_o is the bias parameter of the output gate; h_t is the output of the hidden layer at the current time.
More preferably, the training process of the deep learning model comprises:
training the deep learning model with the Adam algorithm, wherein the training process comprises a forward propagation stage and a backward propagation stage; in forward propagation, the products of the input signals and their corresponding weights are first calculated, the activation function is then applied to the sum of the products, and an error is formed between the output result and the true value; the resulting error is then propagated backwards through the network, the gradient of the loss function with respect to each parameter is calculated, and the weight W and the bias b are updated according to the gradient descent method, as shown in formulas (2) and (3):
W = W − η·∂L(W, b)/∂W  (2),
b = b − η·∂L(W, b)/∂b  (3),
in the formula, η is the learning rate and L(W, b) is the loss function;
selecting the cross entropy error as the loss function, wherein the calculation formula is shown in formula (4):
L = −Σ_i y_i·log(ŷ_i)  (4),
in the formula, y_i is the actual label of the sample, and ŷ_i is the predicted value.
More preferably, in the training process of the deep learning model, the parameters are adjusted by the Adam algorithm, the mean square error is selected as the loss function, and the calculation formula is shown in formula (5):
L = (1/n)·Σ_i (y_i − ŷ_i)²  (5),
in the formula, y_i is the actual value of the data sample, and ŷ_i is the predicted value of the data sample.
Preferably, the objective function constructed in step 3 with the goal of minimizing the power generation cost of the thermal power units is shown in formula (6);
the system node power balance constraint, the thermal power unit output upper and lower limit constraints, the thermal power unit ramping constraint, the thermal power unit start-up and shut-down time constraint and the transmission line power flow constraint are represented as formulas (7) to (11), respectively:
u_g,t·P_g,min ≤ P_g,t ≤ u_g,t·P_g,max  (8),
wherein P_g,t is the output of thermal power unit g; C(P_g,t) represents the coal consumption cost of the unit and is a quadratic function that is processed by piecewise linearization; SU_g,t and SD_g,t are the start-up and shut-down costs of the unit; u_g,t represents the operating state of the unit; P_d,t is the system load in time period t; P_g,max and P_g,min are the upper and lower output limits of the conventional unit, respectively; x_mn is the reactance of line mn; X_on,g,t and X_off,g,t represent the time for which the unit has been continuously on and off; T_on,g and T_off,g represent the unit start-up and shut-down time limits; UR_g and DR_g are the up and down ramping limits; PL_l,t is the transmission power of the transmission line; PL_l,max is the maximum active transmission capacity of the transmission line; θ_m,t is the phase angle of node m.
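In standard form, the objective and constraints that correspond to the above variable definitions can be written out as follows; a system-wide power balance and a DC power flow model are assumed here, so the exact wording of formulas (6) to (11) may differ from this sketch:

```latex
\begin{aligned}
&\min\; F=\sum_{t=1}^{T}\sum_{g=1}^{G}\Big[C(P_{g,t})+SU_{g,t}+SD_{g,t}\Big] &&(6)\\
&\sum_{g=1}^{G}P_{g,t}=P_{d,t} &&(7)\\
&u_{g,t}P_{g,\min}\le P_{g,t}\le u_{g,t}P_{g,\max} &&(8)\\
&-DR_{g}\le P_{g,t}-P_{g,t-1}\le UR_{g} &&(9)\\
&\big(X_{on,g,t-1}-T_{on,g}\big)\big(u_{g,t-1}-u_{g,t}\big)\ge 0,\quad
 \big(X_{off,g,t-1}-T_{off,g}\big)\big(u_{g,t}-u_{g,t-1}\big)\ge 0 &&(10)\\
&PL_{l,t}=\frac{\theta_{m,t}-\theta_{n,t}}{x_{mn}},\qquad
 -PL_{l}^{\max}\le PL_{l,t}\le PL_{l}^{\max} &&(11)
\end{aligned}
```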
Drawings
FIG. 1 is a flow chart of a method provided by the present invention;
FIG. 2 is a diagram of an LSTM network architecture;
FIG. 3 is a diagram of the prediction accuracy of the start-stop state of the thermal power generating unit;
Detailed Description
The invention provides a unit commitment calculation method combined with deep learning: a large amount of historical data is preprocessed, a deep learning model is constructed to obtain the unit start-stop states, and the unit start-stop states are input into an optimization program to obtain the unit commitment plan for the next day. The proposed deep learning model adopts an LSTM neural network, and the mapping relation is obtained after the model is trained on a large amount of historical data. In real-time decision making, the trained deep learning model is called to directly obtain the unit start-stop states; the result is then substituted into the unit commitment optimization program to obtain the unit output values. The method is introduced with reference to the flow chart of FIG. 1 and specifically comprises:
step 1: preprocessing a large amount of historical data;
step 2: constructing a deep learning model, inputting the load forecast data, and obtaining the unit start-stop states;
step 3: substituting the unit start-stop states and the load forecast data into the unit commitment optimization program to obtain the unit output values.
Step 1 specifically comprises the following substeps:
Substep S11: the MinMaxScaler method is selected to normalize the historical data, as shown in formula (1):
x* = (x − X_min)/(X_max − X_min)  (1),
in the formula, x* is the normalized value; x is the data to be processed; X_max and X_min are the maximum and minimum values of the sample data, respectively;
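In practice this step can be carried out with the MinMaxScaler of scikit-learn; the following minimal sketch is only an illustration, and the array values and feature layout are assumptions rather than data from the embodiment:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative historical load data: each row is one day, each column one period.
history = np.array([[120.0, 135.0, 150.0],
                    [110.0, 140.0, 160.0],
                    [125.0, 130.0, 155.0]])

scaler = MinMaxScaler()                           # implements formula (1) column-wise
history_scaled = scaler.fit_transform(history)    # values mapped into [0, 1]
```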
Substep S12: the historical data are divided with the K-means clustering algorithm; the K-means clustering algorithm is an iterative algorithm implemented by the following 4 steps:
(1) selecting K points as the initial clustering centers a_1, a_2, …, a_K;
(2) calculating the distance from each sample x_i to each of the K clustering centers, and assigning the sample to the class whose center is nearest;
(3) calculating the mean of all sample points in each of the K classes and using it as the clustering center for the next iteration;
(4) repeating steps (2) and (3) until the clustering centers no longer change or the maximum number of iterations is reached.
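A minimal sketch of this clustering step with scikit-learn is shown below; it reuses the normalized array from the previous sketch, and the number of clusters is an assumption, not a value from the embodiment:

```python
from sklearn.cluster import KMeans

# Group the normalized historical days into K typical classes (K = 2 here
# purely for illustration); the fitting loop follows steps (1)-(4) above.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(history_scaled)   # class index of each historical day
centers = kmeans.cluster_centers_             # final clustering centers
```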
In step 2, the input of the constructed deep learning model is the load forecast data and the output is the unit start-stop states;
the deep learning model comprises three parts, namely an LSTM layer, a Dropout layer and a full connection layer, and specifically comprises the following steps:
① The LSTM layer. The RNN is a network model that can use information from previous time steps for learning. In practice, however, the RNN can suffer from vanishing or exploding gradient problems as the sequence length grows. To address these problems, special recurrent network architectures such as the LSTM network have been proposed.
The LSTM network is a structural model improved from the RNN; a memory cell and a gate mechanism are introduced so that information from earlier time steps in the sequence can be utilized at the current time step. The LSTM network structure is shown in FIG. 2.
The basic structure of the LSTM network consists of 4 parts: an input gate i_t, an output gate o_t, a forget gate f_t and a memory cell C_t. The forget gate determines whether the information previously stored in the memory cell is retained, and its output is:
f_t = σ(W_fh·h_(t-1) + W_fx·x_t + b_f)  (1-1),
in the formula, h_(t-1) is the output of the hidden layer at time t-1; x_t is the input at the current time; σ is the sigmoid activation function; W_fh is the weight of the forget gate for h_(t-1); W_fx is the weight of the forget gate for x_t; b_f is the bias parameter of the forget gate.
The state updates of the input gate and the memory cell are shown in formulas (1-2), (1-3) and (1-4):
i_t = σ(W_ih·h_(t-1) + W_ix·x_t + b_i)  (1-2),
C̃_t = tanh(W_ch·h_(t-1) + W_cx·x_t + b_c)  (1-3),
C_t = f_t·C_(t-1) + i_t·C̃_t  (1-4),
in the formula, W_ih is the weight of the input gate for h_(t-1); W_ix is the weight of the input gate for x_t; b_i is the bias parameter of the input gate; C̃_t is the candidate state of the memory cell to be updated; tanh is the activation function used to generate C̃_t; W_ch is the weight of C̃_t for h_(t-1); W_cx is the weight of C̃_t for x_t; b_c is the bias parameter of C̃_t; C_t and C_(t-1) are the memory cell states at times t and t-1. Formula (1-4) shows that the state of the memory cell is jointly determined by the historical information controlled by the forget gate and the new information controlled by the input gate. After the LSTM network updates the memory cell, the output state is expressed as shown in formulas (1-5) and (1-6):
o_t = σ(W_oh·h_(t-1) + W_ox·x_t + b_o)  (1-5),
h_t = o_t·tanh(C_t)  (1-6),
in the formula, W_oh is the weight of the output gate for h_(t-1); W_ox is the weight of the output gate for x_t; b_o is the bias parameter of the output gate; h_t is the output of the hidden layer at the current time. Formulas (1-5) and (1-6) show that the output of the LSTM network is determined by the memory cell, and the output gate controls the degree to which the memory cell influences the result.
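For reference, the gate equations above can be executed directly. The following minimal NumPy sketch computes one time step according to formulas (1-1) to (1-6); it is only an illustration, since the embodiment relies on the Keras LSTM layer, and the weight and bias names below are placeholders rather than identifiers from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step following formulas (1-1) to (1-6)."""
    f_t = sigmoid(W["fh"] @ h_prev + W["fx"] @ x_t + b["f"])       # (1-1) forget gate
    i_t = sigmoid(W["ih"] @ h_prev + W["ix"] @ x_t + b["i"])       # (1-2) input gate
    C_tilde = np.tanh(W["ch"] @ h_prev + W["cx"] @ x_t + b["c"])   # (1-3) candidate state
    C_t = f_t * C_prev + i_t * C_tilde                             # (1-4) memory cell update
    o_t = sigmoid(W["oh"] @ h_prev + W["ox"] @ x_t + b["o"])       # (1-5) output gate
    h_t = o_t * np.tanh(C_t)                                       # (1-6) hidden-layer output
    return h_t, C_t

# Example call with random weights for a hidden size of 4 and an input size of 1:
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((4, 4 if k.endswith("h") else 1))
     for k in ["fh", "fx", "ih", "ix", "ch", "cx", "oh", "ox"]}
b = {k: np.zeros(4) for k in ["f", "i", "c", "o"]}
h, C = lstm_step(np.array([0.5]), np.zeros(4), np.zeros(4), W, b)
```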
② The Dropout layer. In deep learning, overfitting is a common problem: the model fits only the training data and cannot fit data outside the training set well, so its generalization capability is poor. For complex network models, the Dropout method is used to prevent overfitting. Dropout means that during training, a certain proportion of neurons are randomly dropped each time the parameters are updated, which weakens the dependence of the network on particular local features and thereby improves the generalization capability of the model.
③ The fully connected layer. The fully connected layer uses the softmax activation function and serves as the multi-layer perceptron of the output layer; many other classifiers, such as support vector machines, also use softmax. Full connection means that every neuron of the current layer is connected to every neuron of the previous layer, and the extracted high-dimensional features are reduced in dimensionality. This layer is generally located at the end of the network; the number of units in the last layer is the same as the number of classes, and the layer is used together with the softmax activation function to classify the output features.
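A minimal Keras sketch of this three-part structure is given below. The sequence length, hidden size, Dropout rate and number of output classes are illustrative assumptions (the text does not specify them); the output layer follows the softmax classification described above, with each class corresponding to one unit start-stop pattern:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

TIME_STEPS = 24    # load forecast points per day (assumption)
N_FEATURES = 1     # load value per time step (assumption)
N_CLASSES = 8      # number of unit start-stop patterns / classes (assumption)

model = Sequential([
    LSTM(64, input_shape=(TIME_STEPS, N_FEATURES)),   # LSTM layer
    Dropout(0.2),                                      # Dropout layer
    Dense(N_CLASSES, activation="softmax"),            # fully connected output layer
])
```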
(2) Training algorithm
The Adam algorithm is adopted to train the network model. The training process of the network comprises two stages, forward propagation and backward propagation. In forward propagation, the products of the input signals and their corresponding weights are first calculated, the activation function is then applied to the sum of the products, and an error is formed between the output result and the true value. The error is then propagated back through the network; the gradient of the loss function with respect to each parameter is calculated, and the weight W and the bias b are updated according to the gradient descent method:
W = W − η·∂L(W, b)/∂W  (2),
b = b − η·∂L(W, b)/∂b  (3),
in the formula, η is the learning rate and L(W, b) is the loss function.
The cross entropy error is selected as the loss function, and the calculation formula is:
L = −Σ_i y_i·log(ŷ_i)  (4),
in the formula, y_i is the actual label of the sample, and ŷ_i is the predicted value;
alternatively, the mean square error can be selected as the loss function, and the calculation formula is shown in formula (5):
L = (1/n)·Σ_i (y_i − ŷ_i)²  (5),
in the formula, y_i is the actual value of the data sample, and ŷ_i is the predicted value of the data sample.
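Continuing the Keras sketch above, training with the Adam algorithm and the cross entropy loss might be set up as follows; the training arrays are random stand-ins for the preprocessed historical samples, and the batch size, number of epochs and learning rate are placeholders, not values given in the text:

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Dummy data matching the model sketch above (60 training and 20 validation
# samples, roughly the 6:2 portion of 100 data sets; real data comes from step 1).
x_train = np.random.rand(60, 24, 1)
y_train = np.eye(8)[np.random.randint(0, 8, 60)]   # one-hot start-stop classes
x_val = np.random.rand(20, 24, 1)
y_val = np.eye(8)[np.random.randint(0, 8, 20)]

model.compile(optimizer=Adam(learning_rate=1e-3),   # Adam adjusts W and b
              loss="categorical_crossentropy",      # cross entropy, formula (4)
              metrics=["accuracy"])
train_history = model.fit(x_train, y_train,
                          validation_data=(x_val, y_val),
                          epochs=100, batch_size=16)
```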
Step 3: substituting the unit start-stop states and the load forecast data into the unit commitment optimization model to obtain the unit output values; the unit commitment optimization model constructs an objective function with the goal of minimizing the power generation cost of the thermal power units, as shown in formula (6):
The system node power balance constraint, the thermal power unit output upper and lower limit constraints, the thermal power unit ramping constraint, the thermal power unit start-up and shut-down time constraint and the transmission line power flow constraint are taken as the constraint conditions and are represented as formulas (7) to (11), respectively:
u_g,t·P_g,min ≤ P_g,t ≤ u_g,t·P_g,max  (8),
wherein P_g,t is the output of thermal power unit g; C(P_g,t) represents the coal consumption cost of the unit and is a quadratic function that is processed by piecewise linearization; SU_g,t and SD_g,t are the start-up and shut-down costs of the unit; u_g,t represents the operating state of the unit; P_d,t is the system load in time period t; P_g,max and P_g,min are the upper and lower output limits of the conventional unit, respectively; x_mn is the reactance of line mn; X_on,g,t and X_off,g,t represent the time for which the unit has been continuously on and off; T_on,g and T_off,g represent the unit start-up and shut-down time limits; UR_g and DR_g are the up and down ramping limits; PL_l,t is the transmission power of the transmission line; PL_l,max is the maximum active transmission capacity of the transmission line; θ_m,t is the phase angle of node m.
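With the start-stop states u_g,t fixed by the deep learning model, the remaining dispatch problem can be sketched as a small linear program. The embodiment solves it by calling YALMIP and Gurobi on MATLAB; the Python/PuLP sketch below is only an illustration: all numerical data are placeholders, the quadratic coal cost is replaced by a single linear coefficient instead of a full piecewise linearization, and the up/down time constraint (10) and line flow constraint (11) are omitted because they need the predicted schedule and the network data:

```python
import pulp

T = range(24)                       # dispatch periods
G = range(6)                        # thermal units of the IEEE 30-bus example
load = [150.0] * 24                 # P_d,t in MW (placeholder)
p_min, p_max = [20.0] * 6, [60.0] * 6
fuel_cost = [30.0] * 6              # linearized coal cost coefficients (placeholder)
start_cost = [200.0] * 6            # start-up costs (placeholder)
ramp = [25.0] * 6                   # UR_g = DR_g (placeholder)
u_fixed = [[1] * 24 for _ in G]     # start-stop states predicted by the LSTM model

m = pulp.LpProblem("unit_commitment_dispatch", pulp.LpMinimize)
p = pulp.LpVariable.dicts("p", (G, T), lowBound=0)

# Objective (6): fuel cost plus the start-up cost implied by the fixed schedule
m += pulp.lpSum(fuel_cost[g] * p[g][t] for g in G for t in T) + \
     sum(start_cost[g] * max(u_fixed[g][t] - u_fixed[g][t - 1], 0)
         for g in G for t in T if t > 0)

for t in T:
    m += pulp.lpSum(p[g][t] for g in G) == load[t]           # (7) power balance
    for g in G:
        m += p[g][t] >= u_fixed[g][t] * p_min[g]             # (8) lower output limit
        m += p[g][t] <= u_fixed[g][t] * p_max[g]             # (8) upper output limit
        if t > 0:
            m += p[g][t] - p[g][t - 1] <= ramp[g]            # (9) ramp-up limit
            m += p[g][t - 1] - p[g][t] <= ramp[g]            # (9) ramp-down limit

m.solve()
output = [[p[g][t].value() for t in T] for g in G]           # unit output values
```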
Example:
In order to verify the rationality of the proposed model, the IEEE 30-bus system, which has 6 conventional units, is taken as an example. YALMIP and Gurobi-8.0.1 were called on MATLAB 2016a to generate 100 sets of data, which were divided into a training set, a validation set and a test set in the ratio of 6:2:2, and the preprocessing of the data was completed with python. The model was built, trained and tested under the Keras deep learning framework with a TensorFlow back end. The hardware simulation environment is an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz with 4 GB of running memory. The unit commitment optimization model is also solved programmatically by calling YALMIP and Gurobi-8.0.1 on MATLAB 2016a.
Fig. 3 shows the accuracy of the output result of the deep learning model, and it can be seen that the deep learning model established by the method can accurately predict the start-stop state of the thermal power generating unit.
Table 1 compares the performance of the two methods:
Method one: the deep learning model is used to obtain the unit start-stop states, and the unit start-stop states and the load forecast value are substituted into the unit commitment optimization model to obtain the unit output values;
Method two: the unit commitment optimization program is run directly on the load forecast value to obtain the unit start-stop states and the unit output values.
TABLE 1 Comparison of the performance of the two methods
As can be seen from the table, method one has more steps than method two, but its decision time is shorter. This is because, in the data-driven approach, the time for online decision making after model training is completed is very short, usually less than 0.1 s. With the unit start-stop states already obtained, the integer variables of the traditional unit commitment model are eliminated, so the unit commitment optimization model becomes easier to solve and the overall decision time is shorter.
Compared with the prior art, the invention has the following beneficial effects: in real-time decision making, the trained deep learning model is called to directly obtain the unit start-stop states, and the result is substituted into the unit commitment optimization program, which reduces the amount of computation compared with the traditional unit commitment model. Experimental results show that the unit commitment plan obtained with the proposed model achieves high calculation accuracy while improving the solving speed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
The present invention is not limited to the above embodiments; any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention are included within the scope of the claims of the present invention.