
Three-objective optimization of boiler combustion process based on multi



Abstract

The combustion optimization problem of the Circulating Fluidized Bed Boiler (CFBB) can be regarded as a constrained dynamic multi-objective optimization problem, so solving it to save energy and reduce polluting gases has become a hot research topic. However, it is difficult to optimize the combustion process with traditional optimization methods due to the boiler's complex characteristics, such as non-linearity, strong coupling, and large lag. To address the boiler combustion optimization problem, a multi-objective modified teaching–learning-based optimization (MMTLBO) is proposed, into which a constraint-handling mechanism is also introduced. The MMTLBO and an ameliorated extreme learning machine (AELM) are then utilized to optimize the CFBB's combustion process to increase thermal efficiency and reduce the NOx/SO2 emission concentrations. The AELM is used to establish the comprehensive model of the thermal efficiency and the NOx/SO2 emissions. The model accuracy and standard deviation reach the orders of 10^-2 and 10^-4, respectively, so the model shows high generalization ability and good stability. Based on the model, the MMTLBO is applied to optimize the boiler's combustion process parameters. Experimental results show that the MMTLBO can find several groups of reasonable combustion parameters that increase the thermal efficiency and reduce the NOx/SO2 emission concentrations. Therefore, the AELM and MMTLBO are effective artificial intelligence algorithms.

Keywords

Multi-objective optimization
Model
Teaching–learning-based optimization
Extreme learning machine
Boiler combustion optimization

1. Introduction

Recently, industrial and civilian electricity has mainly been derived from thermal power generation. Power plants therefore consume large amounts of coal and emit large quantities of polluting gases into the air. Energy saving and emission reduction, which remain the theme of the times, deserve high attention. It is therefore necessary to optimize the boiler combustion operation process to improve the thermal efficiency and reduce polluting gas emissions. In recent years, many scholars and experts have proposed methods to address the boiler combustion optimization problem (Coimbra et al., 1998, Krzywanski et al., 2015, Li et al., 2018, Li et al., 2014, Niu, Li et al., 2019, Rahat et al., 2018, Song et al., 2016, Xu et al., 2000).

In the 1970s, power plants began to pay attention to the boiler combustion optimization problem, and combustion adjustment testing was the main method. With the development of artificial intelligence technology, domestic and foreign researchers have tried to use machine learning and data mining to optimize the boiler combustion process. For instance, artificial neural networks were used to model the relations among the operational parameters. Based on such models, evolutionary or swarm intelligence algorithms were applied to optimize the boiler combustion process, which could save energy and reduce polluting gas emissions. Suresh et al. (2011) used an artificial neural network and a genetic algorithm to solve the thermal efficiency optimization problem of a coal-fired boiler, and proposed a neural-genetic online optimization algorithm that adjusts the corresponding operating parameters to improve the thermal efficiency of boilers. Liu and Bansal (2012) used computational fluid dynamics theory to establish a simple model of the boiler heat transfer process; based on the model, the simulation of the fireball and slagging layer was optimized by an online genetic algorithm so that the heat transfer efficiency was enhanced. Pattanayak et al. (2015) developed a boiler soot blowing optimization system using thermodynamic theory and an artificial neural network, in which the optimization algorithm searches for the optimal sequence of fan frequencies to improve thermal efficiency and reduce pollutant emissions. Madejski (2018) used a CFD model to simulate the combustion process of a pulverized coal boiler, which improved understanding of the internal behavior of the boiler and helped find the best combustion state.
Rahat et al. (2018) used a new multi-objective evolutionary algorithm and a data-driven model to find the trade-off between nitrogen oxide emission concentration and fly ash carbon content, which effectively resolved the contradiction between boiler thermal efficiency and NOx emissions. Smrekar et al. (2013) used feature selection and an autoregressive model to achieve multi-step-ahead prediction of the NOx emissions of a coal-fired boiler, which could balance the fuel and air flows to the various burners in order to minimize NOx formation. An artificial neural network and a genetic algorithm were used to establish a NOx emission concentration prediction model and to optimize the adjustable operation parameters to reduce the NOx emission concentration (Ilamathi et al., 2013). In Refs. Si, Romero, Yao, Schuster et al. (2009) and Si, Romero, Yao, Xu et al. (2009), an improved adaptive support vector machine was used to establish a dynamic adaptive model of the boiler combustion process, which captured the relationship between the operating parameters and the optimization objectives.

A boiler has several complex properties, such as non-linearity, strong coupling, and large lag, so it is difficult to control and optimize the combustion process with traditional optimization methods. Moreover, the boiler combustion optimization problem can be regarded as a constrained dynamic multi-objective optimization problem. In order to save energy and reduce polluting gases, it is essential to optimize the boiler combustion process. Therefore, this paper proposes a novel multi-objective version of the modified teaching–learning based optimization (MTLBO) (Niu, Ma et al., 2019) algorithm, called MMTLBO. The proposed MMTLBO is combined with the ameliorated extreme learning machine (AELM) (Niu et al., 2018) to improve the thermal efficiency and reduce the NOx/SO2 emission concentrations.

The main contributions of this research can be summarized as follows:

Firstly, based on the MTLBO, a multi-objective version of MTLBO is proposed. For the MMTLBO, three operators are introduced into the MTLBO: an archive operator, a grid mechanism, and a leader selection mechanism. The archive operator maintains the non-dominated solutions; the grid mechanism improves the non-dominated solutions in the archive; and the leader selection mechanism updates and replaces the solutions in the archive.

Secondly, the MMTLBO and AELM are utilized to enhance the thermal efficiency of boiler and reduce the NOx/SO2 emissions concentration. The AELM is used to build the model of the thermal efficiency and NOx/SO2 emissions. Based on the model, the MMTLBO is applied to optimize the boiler combustion process.

Thirdly, a novel constraint-handling mechanism is introduced in the MMTLBO for the first time. During the optimization process, we propose several novel constraint mechanisms for the optimized parameters and their offsets, which ensure safe operation of the boiler.

The rest of the paper is organized as follows. Section 2 presents preliminaries of the ameliorated extreme learning machine and the modified teaching–learning based optimization algorithm. Section 3 proposes the MMTLBO algorithm. The optimization of the boiler combustion process, together with the relevant analysis, is presented in Section 4. Finally, Section 5 concludes the work and outlines several suggestions for future work.

2. Preliminaries

2.1. Symbol explanation

In order to understand the following algorithms, the explanations of the symbols used in the AELM and MTLBO algorithms are given in Table 1.

 

Table 1. The explanations of symbols used in the AELM and MTLBO algorithm.

Symbols       Explanation
H             The output matrix of the hidden layer
W             The input weight matrix
b_i           The bias of the ith hidden neuron
W~            The compound weight matrix
β             The output weight matrix
P             A random matrix
g(·)          Hidden layer activation function
T             The output value
X_new,i       The ith new individual
X_old,i       The ith old individual
w             Inertia weight
w_1           Inertia weight
w_2           Inertia weight
iter          Current iteration number
MaxIter       Maximum iteration number
X_best        The best individual, namely the teacher
X_neighbour   Randomly chosen individual

2.2. Ameliorated extreme learning machine

Since the extreme learning machine (ELM) was proposed, it has attracted wide attention and been successfully applied in various fields. In order to further improve the performance of the ELM, we have proposed an ameliorated extreme learning machine (AELM), published in Soft Computing (Niu et al., 2018). This paper uses the AELM to build the comprehensive model of the thermal efficiency and the NOx/SO2 emission concentrations. The AELM algorithm is briefly introduced as follows.

The AELM has three characteristics. Firstly, the input weights and the biases of the hidden nodes are assigned based on the training input data. Secondly, the number of hidden layer nodes is set based on an empirical function. Thirdly, the core of the AELM is a novel hidden layer activation function. Moreover, the AELM is data self-adaptive. The detailed computation of the AELM is described as follows.

2.2.1. The generation of input weights and hidden layer bias

Based on the conventional ELM, the output matrix of the hidden layer can be rewritten as follows:

$$H=\begin{bmatrix} g(w_1x_1+b_1) & \cdots & g(w_1x_N+b_1)\\ \vdots & \ddots & \vdots\\ g(w_Lx_1+b_L) & \cdots & g(w_Lx_N+b_L) \end{bmatrix}_{L\times N}=g(WX+B)=g(\tilde{W}\tilde{X})\tag{1}$$

where $\tilde{X}$ and $\tilde{W}$ can be denoted as follows, separately:

$$\tilde{X}=\begin{bmatrix} x_1 & x_2 & \cdots & x_N\\ 1 & 1 & \cdots & 1 \end{bmatrix}\tag{2}$$

$$\tilde{W}=\begin{bmatrix} w_1 & w_2 & \cdots & w_L\\ b_1 & b_2 & \cdots & b_L \end{bmatrix}^{T}\tag{3}$$

When the hidden layer activation function $g(x)$ is an invertible function, Eq. (1) can be rewritten as follows:

$$\tilde{W}\tilde{X}=g^{-1}(H)\tag{4}$$

Based on Eq. (4), we assume that there exists a random matrix $P_{L\times m}$ which satisfies Eq. (5) as follows:

$$PT=g^{-1}(H)\tag{5}$$

Therefore, a transforming equation (6) can be given based on Eqs. (4) and (5) as follows:

$$\tilde{W}\tilde{X}=PT\tag{6}$$

Based on the least square method, the compound matrix $\tilde{W}$ can be calculated as follows:

$$\min_{\tilde{W}}\left\|\tilde{W}\tilde{X}-PT\right\|\tag{7}$$

$$\tilde{W}=PT\tilde{X}^{+}\tag{8}$$

where $\tilde{X}^{+}$ is the Moore–Penrose pseudoinverse of $\tilde{X}$.

Finally, the input weights and the biases of the hidden layer can be calculated as follows:

$$W=\tilde{W}(1\!:\!L,\,1\!:\!n)\tag{9}$$

$$B=\tilde{W}(1\!:\!L,\,n+1)\tag{10}$$

Based on the above, once the training data and the matrix P are determined, the input weights and the biases of the hidden layer are calculated according to Eqs. (9), (10) rather than generated randomly. Therefore, the model parameters of the AELM are data self-adaptive.
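The data-adaptive weight generation of Eqs. (5)–(10) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; the matrix shapes (samples stored as columns) and the use of `numpy.linalg.pinv` for the pseudoinverse $\tilde{X}^{+}$ are assumptions.

```python
import numpy as np

def aelm_input_weights(X, T, L, seed=None):
    """Sketch of the AELM data-adaptive weight generation (Eqs. (5)-(10)).

    X : (n, N) training inputs, one sample per column (assumed layout)
    T : (m, N) training targets, one sample per column (assumed layout)
    L : number of hidden neurons
    Returns the input weight matrix W (L, n) and the bias vector B (L,).
    """
    rng = np.random.default_rng(seed)
    n, N = X.shape
    m = T.shape[0]
    P = rng.standard_normal((L, m))            # random matrix P of Eq. (5)
    X_tilde = np.vstack([X, np.ones((1, N))])  # Eq. (2): inputs padded with ones
    # Eq. (8): least-squares solution W~ = P T X~+ via the pseudoinverse
    W_tilde = P @ T @ np.linalg.pinv(X_tilde)
    W = W_tilde[:, :n]   # Eq. (9): input weights
    B = W_tilde[:, n]    # Eq. (10): hidden-layer biases
    return W, B
```

For fixed training data and a fixed P, the result is deterministic, which is exactly the "data self-adaptive" property described above.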

2.2.2. The determination of hidden layer activation function

In order to enhance the stability of the ELM, a novel hidden layer activation function inspired by the proportion–integration–differentiation mechanism has been proposed (Niu et al., 2018), in which the integral item helps to eliminate static error and the derivative item helps to enhance stability. The proposed activation function is denoted as follows:

$$f(x)=x+s(x)+s'(x)\tag{11}$$

$$s(x)=\frac{1}{1+e^{-x}}\tag{12}$$

$$s'(x)=s(x)\left(1-s(x)\right)\tag{13}$$

As seen from Eq. (11), $s(x)$ is the sigmoid function and $s'(x)$ is its derivative. By mathematical analysis, $f(x)$ is continuous and differentiable, so it can be used as a hidden layer activation function. In addition, the proposed activation function possesses both linear and nonlinear properties, which can fully exploit the input training data samples. As shown in Eq. (11), the derivative of the sigmoid function is introduced to enhance the stability of the proposed algorithm, while the linear term strengthens the connection between the input layer and the output layer and thus the influence of the input data samples.
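Eqs. (11)–(13) translate directly into code; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def sigmoid(x):
    """Eq. (12): s(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

def aelm_activation(x):
    """Eq. (11): linear term + sigmoid + sigmoid derivative (Eq. (13))."""
    s = sigmoid(x)
    return x + s + s * (1.0 - s)
```

At x = 0 this gives 0 + 0.5 + 0.25 = 0.75, and for large |x| the linear term dominates, which is the mixed linear/nonlinear behavior described above.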

In addition, the AELM has been verified to show better stability and regression accuracy than the conventional ELM with common activation functions on most testing data sets (Niu et al., 2018).

The implementation step of AELM method can be summarized as follows.

Given a training set $\aleph=\{(x_i,t_i)\,|\,x_i\in\mathbb{R}^{n},\,t_i\in\mathbb{R}^{m},\,i=1,2,\ldots,N\}$, the hidden layer activation function is determined based on the training data.

(1) Randomly generate a matrix $P_{L\times m}$ and calculate the compound matrix $\tilde{W}$ through Eq. (8).

(2) Determine the input weights $W$ and the bias vector $B$ via Eqs. (9) and (10).

(3) Calculate the hidden layer output matrix $H$.

(4) Calculate the output weights $\beta$.
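The four steps above can be sketched end-to-end in Python. This is a hedged reconstruction, not the published code: the column-major data layout and the least-squares computation of the output weights via a pseudoinverse (as in the conventional ELM) are assumptions.

```python
import numpy as np

def g(x):
    """Assumed AELM activation of Eq. (11): x + s(x) + s'(x)."""
    s = 1.0 / (1.0 + np.exp(-x))
    return x + s + s * (1.0 - s)

def train_aelm(X, T, L, seed=None):
    """End-to-end sketch of the four AELM training steps.

    X : (n, N) inputs; T : (m, N) targets; L : number of hidden neurons.
    Returns (W, B, beta) so that predictions are beta @ g(W @ x + B).
    """
    rng = np.random.default_rng(seed)
    n, N = X.shape
    # Step 1: random P, then compound matrix W~ = P T X~+ (Eq. (8))
    P = rng.standard_normal((L, T.shape[0]))
    X_tilde = np.vstack([X, np.ones((1, N))])
    W_tilde = P @ T @ np.linalg.pinv(X_tilde)
    # Step 2: input weights and biases (Eqs. (9)-(10))
    W, B = W_tilde[:, :n], W_tilde[:, n]
    # Step 3: hidden-layer output matrix H
    H = g(W @ X + B[:, None])
    # Step 4: output weights by least squares (Moore-Penrose pseudoinverse)
    beta = T @ np.linalg.pinv(H)
    return W, B, beta

def predict_aelm(W, B, beta, X):
    """Apply a trained AELM to new inputs X of shape (n, N)."""
    return beta @ g(W @ X + B[:, None])
```

In the paper this model is the basis on which the MMTLBO later searches for combustion parameters.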

2.3. Synopsis of the MTLBO algorithm

In order to improve the convergence rate and solution quality of the conventional TLBO, a modified TLBO algorithm has been proposed, published in the International Journal of Machine Learning and Cybernetics (Niu, Ma et al., 2019). The MTLBO has been verified on several benchmark testing functions, which revealed that it has good solution quality and fast convergence. The MTLBO has two phases: the teaching phase and the learning phase. Moreover, each phase has a novel population updating mechanism that differs from the TLBO. The MTLBO algorithm is described in detail as follows.

2.3.1. Teaching phase

In this phase, a novel group mechanism is introduced, inspired by the actual teaching–learning phenomenon in a real class. In a real class, there exist superior students and underachievers, and they adopt different ways of obtaining knowledge. Therefore, the group mechanism can be regarded as a new method to update the population individuals. The group mechanism is described in detail as follows.

According to the mean mark, all students are divided into two groups: one group includes the superior students, the other contains the underachievers. If a student's comprehensive mark is better than the mean mark, the student is considered a superior student; otherwise, the student is regarded as an underachiever. Note that, for a minimization problem, if the fitness function value of the ith student is less than that of the mean mark, the ith student is regarded as a superior student; otherwise, he is considered an underachiever. A superior student can get knowledge from the best individual and from self-study, separately. An underachiever obtains knowledge from his teacher and tries to reach the class average mark. According to the above description, the population updating mechanisms are shown in Eqs. (14), (15), and the expressions of the inertia weights are presented in Eqs. (16)–(18). The detailed explanations of the following equations are presented in (Niu, Ma et al., 2019).

$$X_{new,i}=X_{old,i}\times w+(X_{best}-X_{old,i})\times rand,\quad \text{if } f(X_{old,i})<f(X_{mean})\tag{14}$$

$$X_{new,i}=\left(X_{old,i}+(rand-0.5)\times 2\times(X_{best}-X_{old,i})\right)\times w_1+rand\times w_2,\quad \text{if } f(X_{old,i})>f(X_{mean})\tag{15}$$

$$w=w_{start}-(w_{start}-w_{end})\times\frac{iter}{MaxIter}\tag{16}$$

$$w_1=\sin\left(\frac{\pi}{2}\times\frac{iter}{MaxIter}\right)\tag{17}$$

$$w_2=\cos\left(\frac{\pi}{2}\times\frac{iter}{MaxIter}\right)\tag{18}$$
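A minimal Python sketch of this teaching-phase update follows (for minimization). The inertia-weight bounds `w_start = 0.9` and `w_end = 0.4` are illustrative assumptions, not values from the paper, and the exact form of the self-study term in Eq. (15) is a hedged reading of the published formula.

```python
import numpy as np

def teaching_phase(pop, fitness, it, max_iter, w_start=0.9, w_end=0.4, seed=None):
    """Sketch of the MTLBO teaching phase (Eqs. (14)-(18)).

    pop     : (n_pop, dim) population, one learner per row
    fitness : (n_pop,) fitness values (smaller is better)
    """
    rng = np.random.default_rng(seed)
    x_best = pop[np.argmin(fitness)]                  # the teacher
    mean_fit = fitness.mean()                         # the mean mark
    w = w_start - (w_start - w_end) * it / max_iter   # Eq. (16)
    w1 = np.sin(np.pi / 2 * it / max_iter)            # Eq. (17)
    w2 = np.cos(np.pi / 2 * it / max_iter)            # Eq. (18)
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        r = rng.random(x.shape)
        if fitness[i] < mean_fit:                     # superior student, Eq. (14)
            new_pop[i] = x * w + (x_best - x) * r
        else:                                         # underachiever, Eq. (15)
            new_pop[i] = (x + (r - 0.5) * 2 * (x_best - x)) * w1 + rng.random() * w2
    return new_pop
```

In a full run, the new individuals would only replace the old ones when they improve the fitness, as in the standard TLBO greedy selection.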


Fig. 1. The flow chart of boiler combustion process.

2.3.2. Learning phase

In this phase, all students are again divided into two groups based on their fitness function values. The first group includes the top half of the students and the second group contains the remaining students. The first group members are regarded as superior students, so they are able not only to obtain knowledge from a more superior student, but also to study independently. The second group members get knowledge principally from their teacher. Therefore, the first group students update their results based on Eq. (19), while the second group learners update their results according to Eq. (20).

$$X_{new,i}=\begin{cases}X_{old,i}+(X_{neighbour}-X_{old,i})\times\cos\left(\dfrac{\pi}{2}\times\dfrac{iter}{MaxIter}\right), & \text{if } f(X_{old,i})>f(X_{neighbour})\\ X_{old,i}+(rand-0.5)\times 2\times(X_{upperlimit}-X_{lowerlimit}), & \text{otherwise}\end{cases}\tag{19}$$

$$X_{new,i}=X_{old,i}+(X_{best}-X_{old,i})\times\cos\left(\frac{\pi}{2}\times\frac{iter}{MaxIter}\right)\tag{20}$$
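The learning-phase update can be sketched as follows (for minimization). The neighbour-sampling strategy and the handling of the search bounds `lower`/`upper` are assumptions made for illustration.

```python
import numpy as np

def learning_phase(pop, fitness, it, max_iter, lower, upper, seed=None):
    """Sketch of the MTLBO learning phase (Eqs. (19)-(20)).

    The better half of the class learns from a random neighbour when the
    neighbour is better, or self-studies within the search bounds; the
    remaining students learn from the teacher (the best individual).
    """
    rng = np.random.default_rng(seed)
    n_pop = len(pop)
    order = np.argsort(fitness)                # ascending: best first
    x_best = pop[order[0]]                     # the teacher
    c = np.cos(np.pi / 2 * it / max_iter)
    top_half = set(int(k) for k in order[: n_pop // 2])
    new_pop = pop.copy()
    for i, x in enumerate(pop):
        if i in top_half:                      # superior student, Eq. (19)
            j = int(rng.integers(n_pop))       # random neighbour
            if fitness[i] > fitness[j]:
                new_pop[i] = x + (pop[j] - x) * c
            else:                              # self-study within the bounds
                new_pop[i] = x + (rng.random(x.shape) - 0.5) * 2 * (upper - lower)
        else:                                  # learn from the teacher, Eq. (20)
            new_pop[i] = x + (x_best - x) * c
    return new_pop
```

The cosine factor shrinks the step size as `it` approaches `MaxIter`, so exploration early in the run gradually gives way to exploitation.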

3. Multi-objective Modified Teaching–Learning Based Optimization Algorithm

The modified teaching–learning based optimization algorithm has been verified to outperform some state-of-the-art heuristic optimization algorithms on many single-objective benchmark testing functions (Niu, Ma et al., 2019). In order to perform multi-objective optimization with the MTLBO, three operation mechanisms are integrated, which were introduced in the literature (Mirjalili et al., 2016). The first operator is an archive, which stores the non-dominated Pareto optimal solutions obtained so far. The second is a grid mechanism, which improves the non-dominated solutions in the archive. The third is a leader selection strategy, which chooses the leader individual from the archive during the optimization process. The implementation of the MMTLBO is similar to that of the MOGWO (Mirjalili et al., 2016).

The archive is a simple storage unit to save and retrieve the non-dominated solutions obtained so far. There are four possible cases, as follows.

Firstly, the new member is dominated by at least one of the archive residents. In this case the solution is not allowed to enter the archive.

Secondly, the new solution dominates one or more solutions in the archive. In this case the dominated solution(s) in the archive should be omitted and the new solution will be able to enter the archive.

Thirdly, if neither the new solution nor any archive member dominates the other, the new solution is added to the archive.

Fourthly, if the archive is full, the grid mechanism should be first run to re-arrange the segmentation of the objective space and find the most crowded segment to omit one of its solutions. Then, the new solution should be inserted to the least crowded segment in order to improve the diversity of the final approximated Pareto optimal front.
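The four archive cases above can be sketched in Python on objective vectors alone. This is an illustration: case 4 is simplified here to dropping an arbitrary member instead of applying the grid-based crowding rule described above.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new_obj, max_size=100):
    """Sketch of the four archive-update cases (objective vectors only).

    archive : list of objective vectors of non-dominated solutions
    new_obj : objective vector of the candidate solution
    """
    # Case 1: the candidate is dominated by an archive resident -> reject
    if any(dominates(a, new_obj) for a in archive):
        return archive
    # Case 2: remove any archive members dominated by the candidate
    archive = [a for a in archive if not dominates(new_obj, a)]
    # Case 3: mutually non-dominated -> insert
    if len(archive) < max_size:
        archive.append(new_obj)
    else:
        # Case 4: archive full -- the paper omits a solution from the most
        # crowded grid segment; this sketch simply drops the first member.
        archive.pop(0)
        archive.append(new_obj)
    return archive
```

After every update, the archive contains only mutually non-dominated objective vectors, which is the invariant the grid and leader-selection mechanisms rely on.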

In the MTLBO, the best individual guides the other individuals toward promising regions of the search space in the hope of finding a solution close to the global optimum. For a multi-objective problem, however, solutions cannot easily be compared because of the Pareto optimality concept. The leader selection mechanism is integrated to handle this issue: it chooses the leader individual from the archive during the optimization process via a roulette-wheel method.
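A sketch of this roulette-wheel leader selection follows, using the MOGWO-style rule in which a grid segment holding N_i archived solutions is chosen with probability proportional to c / N_i (c > 1), so less crowded regions are favoured. The constant `c` and the segment bookkeeping are assumptions for illustration.

```python
import random

def select_leader(segments, c=2.0, rng=None):
    """Roulette-wheel leader selection over grid segments (sketch).

    segments : dict mapping a grid-segment id to the list of archived
               solutions it contains (every list must be non-empty)
    Returns one archived solution to act as the leader (teacher).
    """
    rng = rng or random.Random()
    ids = list(segments)
    # Segment weight c / N_i: fewer residents -> higher selection chance
    weights = [c / len(segments[i]) for i in ids]
    seg = rng.choices(ids, weights=weights, k=1)[0]
    return rng.choice(segments[seg])
```

Biasing the leader toward sparse segments pushes the search to fill gaps in the approximated Pareto front rather than piling solutions into already crowded regions.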

Finally, the pseudo code of the MMTLBO algorithm is presented as follows.

