Application of artificial neural networks and genetic algorithms for crude fractional distillation process modeling
📝 Abstract
This work presents the application of artificial neural networks, trained and structurally optimized by genetic algorithms, to modeling of the crude distillation process at the PKN ORLEN S.A. refinery. Models of the main fractionator distillation column products were developed using historical data. The quality of the fractions was predicted based on several chosen process variables. The performance of the model was validated using test data. Neural networks used in combination with genetic algorithms proved that they can accurately predict shifts in fraction quality, reproducing the results of standard laboratory analysis. A simple method of knowledge extraction from the built neural network model was also demonstrated. Genetic algorithms can be successfully utilized for efficient training of large neural networks and for finding their optimal structures.
📄 Content
I. INTRODUCTION
Artificial neural networks (ANN) as well as genetic algorithms (GA) are popular machine learning technologies. Both were designed in analogy to structures and processes occurring in nature. Due to their desirable properties, they are widely applied to solve various problems of modeling and optimization [1-5]. Although these techniques are mostly utilized separately, together they can extend the range of their possible applications. They can be applied to problems where it is difficult to derive a clear analytical solution [6,7]. An example is crude fractional distillation, which is a highly nonlinear process [8]. It also involves many random disturbances caused by unpredictable factors such as mechanical or measurement problems [9]. Because of these obstacles, data from such a process are perfectly suited to verify the behaviour and properties of artificial neural networks trained and optimized by genetic algorithms.
The aim of this work is to apply artificial neural networks, trained and structurally optimized by a genetic algorithm, to model laboratory quality measurements of the main crude oil fractional distillation products.
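The general idea of training network weights with a genetic algorithm can be sketched in Python. This is a minimal illustration, not the paper's actual procedure: the fitness function, population size, mutation scale, and number of generations below are illustrative assumptions.

```python
import random

def evolve(fitness, n_weights, pop_size=20, generations=50,
           mutation_scale=0.1, seed=0):
    """Minimal genetic algorithm: evolve a weight vector maximizing `fitness`.

    Selection keeps the better half of the population unchanged (elitism);
    offspring are produced by uniform crossover plus Gaussian mutation.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Uniform crossover of two parents, then Gaussian mutation.
            child = [(x if rng.random() < 0.5 else y)
                     + rng.gauss(0, mutation_scale)
                     for x, y in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: the evolved weights should approach the target vector.
target = [0.5, -0.25]
best = evolve(lambda w: -sum((wi - ti) ** 2
                             for wi, ti in zip(w, target)), 2)
```

In a real neural network application, the fitness would instead be the negative prediction error of a network whose weights are given by the candidate vector.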
A. Artificial neural networks
An artificial neural network is a structure built from many interconnected basic elements called artificial neurons. It resembles the natural tissue of the brain, which consists of many nerve cells. Observation of the behaviour of natural neurons uncovered their basic operating principles and interesting properties. The first mathematical description of the artificial neuron was given by McCulloch and Pitts in 1943. Rosenblatt refined their idea in 1957, creating the modern artificial neuron called the simple perceptron [10,11]. The perceptron artificial neuron can be perceived as a transducer of many input signals giving one output. This led to the mathematical description of the neuron, which is presented in Fig. 1 and characterized by equation (1).
$$ y = F\left(\sum_{i=1}^{n} u_i w_i + w_0\right) \qquad (1) $$

where:
y – value of the output signal,
F – activation function,
n – number of input signals,
u_i – value of input signal number i,
w_i – weight of connection number i,
w_0 – bias weight.
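Equation (1) translates directly into a few lines of Python. The sketch below uses the unipolar sigmoid of equation (2) as the activation function F; the input and weight values are illustrative.

```python
import math

def perceptron(u, w, w0, beta=1.0):
    """Single perceptron output per Eq. (1): y = F(sum(u_i * w_i) + w0).

    F is the unipolar sigmoid of Eq. (2); beta sets its steepness.
    """
    s = sum(ui * wi for ui, wi in zip(u, w)) + w0
    return 1.0 / (1.0 + math.exp(-beta * s))

# Weighted sum: 0.5*0.4 - 1.0*0.3 + 2.0*0.1 + 0.2 = 0.3
y = perceptron(u=[0.5, -1.0, 2.0], w=[0.4, 0.3, 0.1], w0=0.2)
print(round(y, 4))  # → 0.5744
```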
Weights located on the input connections are coefficients which are set during the learning process; they can be regarded as storage of the gained knowledge. Every input value is multiplied by its weight coefficient, and the products are summed. This sum is then the argument of the chosen activation function. The activation function determines the properties of the artificial neuron. Typically, the unipolar sigmoid activation function, defined by equation (2), is used [10,11]. Its curve is presented in Fig. 2. The values of this function lie in the range from 0 to 1. The β parameter in this equation is responsible for the steepness of the sigmoid function.
The learning process is basically equivalent to modifying the shape of the activation function. The result of this function is the output value, which can be final or become the input for another artificial neuron [10,11].

Fig. 1. Artificial neuron – simple perceptron [11].

Łukasz Pater, PKN ORLEN S.A., Płock, Poland; Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Toruń, Poland. Email: pater.lukasz@gmail.com
$$ F(x) = \frac{1}{1 + e^{-\beta x}} \qquad (2) $$

where:
F(x) – value of the activation function,
x – sum of the weighted input values,
β – steepness parameter of the sigmoid function.
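The role of β in equation (2) can be checked numerically: larger β makes the transition around x = 0 steeper, while the output always stays in (0, 1). The sample β values below are illustrative.

```python
import math

def sigmoid(x, beta=1.0):
    """Unipolar sigmoid of Eq. (2); output lies in the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

# At x = 1, a larger beta pushes the output closer to 1.
for beta in (0.5, 1.0, 5.0):
    print(beta, round(sigmoid(1.0, beta), 4))  # 0.6225, 0.7311, 0.9933
```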
In order to extend the single perceptron's classification capabilities to nonlinear relationships, many artificial neurons are grouped into layers, creating multilayer perceptrons (MLP). Layers between the input and output layers are called hidden layers. This class of artificial neural network is widely used for the great majority of problems, because it can model highly nonlinear data relationships. The structure of an artificial neural network also has to be chosen carefully.
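A forward pass through such a multilayer perceptron amounts to applying equation (1) layer by layer. The sketch below shows this for an illustrative 2-3-1 network; the layer sizes and weight values are assumptions for demonstration, not taken from the paper.

```python
import math

def sigmoid(x, beta=1.0):
    """Unipolar sigmoid activation of Eq. (2)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def layer_forward(inputs, weights, biases):
    """One MLP layer: each neuron applies Eq. (1) to the shared inputs."""
    return [sigmoid(sum(u * w for u, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def mlp_forward(inputs, layers):
    """Propagate inputs through a list of (weights, biases) layers."""
    signal = inputs
    for weights, biases in layers:
        signal = layer_forward(signal, weights, biases)
    return signal

# Illustrative 2-3-1 network: one hidden layer of three neurons, one output.
hidden = ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.1, -0.1, 0.0])
output = ([[0.7, -0.5, 0.2]], [0.05])
y = mlp_forward([1.0, 0.5], [hidden, output])
```

The outputs of each hidden neuron become the inputs of the next layer, which is what gives the MLP its ability to model nonlinear relationships.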