Contents lists available at ScienceDirect
Expert Systems With Applications
journal homepage: www.elsevier.com/locate/eswa
A novel modified bat algorithm hybridizing by differential evolution algorithm

Gülnur Yildizdan a, Ömer Kaan Baykan b,∗

a Kulu Vocational School, Selcuk University, Kulu, Konya, Turkey
b Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Konya Technical University, Konya, Turkey
Article info

Article history:
Received 27 November 2018
Revised 5 August 2019
Accepted 12 September 2019
Available online 13 September 2019

Keywords:
Heuristic algorithms
Bat algorithm
Differential evolution algorithm
Continuous optimization
Large-scale optimization
Abstract

The bat algorithm (BA) is one of the metaheuristic algorithms that are used to solve optimization problems. The differential evolution (DE) algorithm is also applied to optimization problems and has successful exploitation ability. In this study, an advanced modified BA (MBA) algorithm was initially proposed by making some modifications to improve the exploration and exploitation abilities of the BA. A hybrid system (MBADE), involving the use of the MBA in conjunction with the DE, was then suggested in order to further improve the exploitation potential and provide superior performance in various test problem clusters. The proposed hybrid system uses a common population, and the algorithm to be applied to the individual is selected on the basis of a probability value, which is calculated in accordance with the performance of the algorithms; thus, the probability of applying a successful algorithm is increased. The performance of the proposed method was tested on functions that have frequently been studied, such as classical benchmark functions, small-scale CEC 2005 benchmark functions, large-scale CEC 2010 benchmark functions, and CEC 2011 real-world problems. The obtained results were compared with the results obtained from the standard BA and other findings in the literature and interpreted by means of statistical tests. The developed hybrid system showed superior performance to the standard BA in all test problem sets and produced more acceptable results when compared to the published data for the existing algorithms. In addition, the contribution of the MBA and DE algorithms to the hybrid system was examined.
© 2019 Elsevier Ltd. All rights reserved.
1. Introduction
Metaheuristic algorithms, which have often been used to solve optimization problems in recent years, emulate natural phenomena in order to attain their desired purpose. Metaheuristic algorithms exhibit convergence characteristics and can produce results that are close to exact solutions. These algorithms are widely used for solving nondeterministic polynomial-time hard (NP-hard) optimization problems because their structures are simple, comprehensible, and easily implemented (Karaboğa, 2011). Although metaheuristic algorithms show excellent search capabilities in relation to small-scale problems, they often exhibit drastic reductions in search performance on large-scale problems (Tang, Li, Suganthan, Yang & Weise, 2010).
The bat algorithm (BA) is a metaheuristic algorithm proposed by Yang (2010). Due to the nature of this algorithm, its exploitation ability stands out during the first iterations, while the exploration capacity is more pronounced in subsequent iterations. In addition, in the later steps of the algorithm, the possibility of incorporating newly found solutions into the population decreases, which causes a loss of better solutions found in the solution space.

∗ Corresponding author.
E-mail addresses: gavsar@selcuk.edu.tr (G. Yildizdan), okbaykan@ktun.edu.tr (Ö.K. Baykan).
The DE algorithm is often used to solve real-world optimization problems and large-scale global optimization problems due to its simple structure, easy implementation, strong mutation strategy, and robustness. DE has a powerful global exploration ability, although its convergence speed is low. In this study, the algorithm was initially enhanced, and a hybrid system was proposed in which the improved algorithm and the differential evolution (DE) algorithm (Storn & Price, 1997) were combined in order to balance the exploration and exploitation capabilities of the BA and overcome its structural problems. The performance of the proposed system was tested on problem sets with different characteristics for various dimensions.
This paper is organized as follows: Section 2 summarizes existing studies of the BA and DE algorithm; Section 3 provides information about the standard BA, while the DE algorithm is discussed in Section 4; the proposed method is explained in Section 5; and the experimental results are examined in Section 6.
https://doi.org/10.1016/j.eswa.2019.112949 0957-4174/© 2019 Elsevier Ltd. All rights reserved.
2. Related work
In recent years, numerous studies have been performed with the aim of improving the performance of the BA and DE algorithms. Meng et al. combined the BA with concepts such as habitat selection and the Doppler effect, and these modifications were seen to enhance the imitation of bat behaviors (Meng, Gao, Liu & Zhang, 2015). Paiva et al. used a Cauchy mutation operator and Elite Opposition-Based Learning structures to increase the diversity and convergence speed of the BA (Paiva, Silva, Leite, Marcone & Costa, 2017). Cai et al. developed the algorithm by using a triangle-flipping strategy in the speed update formula of the BA, thereby improving its global search ability (Cai et al., 2018). Zhu et al. suggested the quantum-based bat algorithm, which determined the position of each bat on the basis of the current optimal solution during initial iterations and the mean best position in successive iterations (Zhu, Zhu, Liu, Duan & Cao, 2016). Ghanem and Jantan proposed a solution to overcome the problem of the BA getting trapped in local minima, using a special mutation operator to increase the diversity of the standard BA (Ghanem & Jantan, 2017). Shan and Cheng put forward an advanced BA, based on the covariance adaptive evolution process, to improve the search capability, leading to diversification of the search direction and the distribution of the population (Shan & Cheng, 2017). Nawi et al. determined the random large step length in the BA using the logic of the Gaussian random walk (Nawi, Rehman, Khan, Chiroma & Herawan, 2016); difficulties concerning suboptimal solutions in the standard algorithm and the inability to solve large-scale problems were thereby overcome. Pan et al. proposed a hybrid structure by developing a communication strategy between the BA and artificial bee colony (ABC) algorithm (Pan, Dao, Kuo & Horng, 2014). In this method, bad individuals in the BA were replaced by good ones from the ABC algorithm at the end of each iteration; conversely, bad individuals in the ABC algorithm were also replaced with good ones from the BA. To enhance the bat algorithm's exploration and exploitation capabilities and overcome premature convergence, directional echolocation was added to the standard bat algorithm by Chakri, Khelif, Benouaret and Yang (2017).
Qin and Suganthan proposed a new self-adaptive differential evolution algorithm (Qin & Suganthan, 2005). In this algorithm, the choice of learning strategy and the two control parameters, F and CR, do not need to be pre-specified; through the iterations, the learning strategy and parameter settings are adapted according to learning experience. Tian and Gao proposed a technique that combined a stochastic mixed mutation strategy and an information intercrossing and sharing mechanism with the DE algorithm (Tian & Gao, 2017). Sallam et al. advocated a new DE algorithm that dynamically determined the DE mutation strategy providing the best performance from a given set, depending on the performance history of each operator and the structure of the problem (Sallam, Elsayed, Sarker & Essam, 2017). Liao et al. suggested a new DE algorithm based on cellular direction information; in this technique, a neighborhood is defined for each individual in the population and included in the neighborhood mutation operator, based on direction information (Liao, Cai, Wang, Tian & Chen, 2016). Thus, convergence accelerates towards the region with the best individuals. Cui et al. divided the population into three subpopulations and adapted the algorithm to realize three new DE strategies in which the parameters were adaptively set (Cui, Li, Lin, Chen & Lu, 2016). Moreover, to increase performance, a replacement strategy was added to the algorithm to fully utilize useful information obtained from the trial and target vectors. Yi et al. proposed a new DE algorithm including a pbest roulette wheel selection and a pbest retention mechanism to prevent individual accumulation around the pbest vector (Yi, Zhou, Gao, Li & Mou, 2016). Jadon et al. suggested a hybrid method that used the ABC and DE algorithms to develop a more efficient metaheuristic algorithm than would otherwise be realized (Jadon, Tiwari, Sharma & Bansal, 2017); in this structure, the DE algorithm was utilized in the onlooker bee phase of the ABC algorithm, and the employed bee and scout bee phases were modified. Fan et al. proposed a new DE algorithm that incorporated a learning mechanism to ensure the adaptation of mutation and crossover strategies, using prior knowledge and opposition learning to control and guide the development of control parameters throughout the evolutionary process (Fan, Wang & Yan, 2017). Awad et al. adapted a new crossover technique based on covariance learning with a Euclidean neighborhood to a basic L-SHADE algorithm (Awad, Ali, Suganthan, Reynolds & Shatnawi, 2017); through this modification, L-SHADE performance was enhanced to solve real-world problems with difficult characteristics and nonlinear constraints. In addition, many studies have worked to improve the performance of the DE algorithm for large-scale optimization problems (Yang, Tang & Yao, 2008a; Brest, Zamuda, Fister & Maučec, 2010; Wang, Wu, Rahnamayan & Jiang, 2010).
There are many studies in the literature that attempt to improve the performance of the bat algorithm, and variants of this algorithm produce very successful results in their area. However, as the complexity and the dimensions of the problem increase, the performance of these algorithms continues to decline. Consequently, in this study, the BA was developed and hybridized with DE to increase population diversity, improve the convergence rate, and balance the exploration and exploitation of the BA. According to the experimental studies, the proposed method demonstrated successful results on four different test suites (classical benchmark functions, small-scale CEC 2005 benchmark functions, large-scale CEC 2010 benchmark functions, and CEC 2011 real-world problems).
3. BA algorithm
The BA emulates the phenomenon of echolocation in the natural world, whereby living things, such as dolphins, whales, bats, and some bird species, emit sound impulses at certain frequencies. These signals occur at a frequency of approximately 20 kHz and are beyond the limits of human hearing. Bats, in particular, are able to identify their prey and avoid obstacles through the use of this technique. The time difference between the transmission and return of the sound signal, in a particular direction, allows bats to determine the distance between themselves and the target.
According to Yang, the BA is realized in accordance with the following rules (Yang, 2010):

• Each bat uses echolocation to determine its distance relative to the prey.
• Bats fly with velocity v_i towards a position x_i and transmit within a given frequency interval (f_min, f_max) by propagating signals at various wavelengths (λ) and loudness (A) to detect their prey.
• While calculating the distance to their target, bats can adjust the wavelength and pulse rate of their signal.
• It is accepted that A decreases from a maximum (A_0) to a constant minimum value (A_min).
In a D-dimensional search space, for the i-th bat, the frequency, velocity, and position values are calculated in accordance with Eqs. (1)–(3), respectively. Velocity and position are updated at each iteration, while frequency is a factor affecting step magnitude in the algorithm. Assuming β ∈ [0, 1], where β is a random value, the term x_∗ indicates the best global value at the present moment in time.

f_i = f_min + (f_max − f_min) β  (1)

v_i^t = v_i^{t−1} + (x_i^t − x_∗) f_i  (2)

x_i^{t+1} = x_i^t + v_i^t  (3)
For the local search part of the algorithm, a result is selected from the best results at the present instant in time, and a new result is generated for each bat by a local random step. This is performed in accordance with Eq. (4).

x_i^{t+1} = x_i^t + ε A^t  (4)
The value of ε is a random value in the [−1, 1] interval that indicates the magnitude and direction of the step. The term A^t is the average of the present values of A for all bats.
The frequency value in the BA allows the pace and range of the bat movements to be determined. When bats approach their prey, A decreases and the pulse emission rate r increases; these values are updated in accordance with Eqs. (5) and (6), respectively, where α and γ are constants. For simplicity, α and γ can be assumed to be equal (Yang & Hossein Gandomi, 2012).

A_i^{t+1} = α A_i^t  (5)

r_i^t = r_i^0 [1 − e^{(−γ t)}]  (6)
The pseudo-code of the BA is given as follows.
Algorithm 1 Pseudo-code of the BA.
1. Determine the target function f(x), x = (x_1, x_2, …, x_n)^T.
2. Generate bat population x_i and initial velocity v_i, i = (1, 2, 3, …, n).
3. Define pulse frequency f_i in x_i.
4. Initialize values for pulse rate r_i and loudness A_i.
5. While (t < maximum iteration number)
6. Update frequency (f_i), velocity (v_i^t), and position (x_i^t) according to Eqs. 1, 2, and 3, respectively.
7. If (rand > r_i)
8. Select a result from the best results.
9. Generate a local result around the best result by using Eq. (4).
10. End If
11. If (rand < A_i) and (f(x_i) < f(x_∗))
12. Accept the new result.
13. Decrease A_i and increase r_i, according to Eqs. 5 and 6, respectively.
14. End If
15. Rank the bats and find the best x_∗ at the present time.
16. End While
17. Show the results.
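Algorithm 1 can be sketched as a short program. The following is a minimal illustration of the standard BA, not the paper's implementation; the population size, iteration count, and the α = γ = 0.9 defaults are hypothetical choices for the sketch.

```python
import math
import random

def bat_algorithm(f, dim, lb, ub, n_bats=20, t_max=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimal sketch of the standard BA (Yang, 2010)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    A = [1.0] * n_bats                                   # loudness A_i
    r0 = [rng.uniform(0.0, 1.0) for _ in range(n_bats)]
    r = r0[:]                                            # pulse rate r_i
    fit = [f(xi) for xi in x]
    b = min(range(n_bats), key=lambda i: fit[i])
    x_star, f_star = x[b][:], fit[b]                     # best bat x_*
    for t in range(1, t_max + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()       # Eq. (1)
            v[i] = [v[i][j] + (x[i][j] - x_star[j]) * freq      # Eq. (2)
                    for j in range(dim)]
            cand = [min(max(x[i][j] + v[i][j], lb), ub)         # Eq. (3)
                    for j in range(dim)]
            if rng.random() > r[i]:
                # local random walk around the current best, Eq. (4)
                A_mean = sum(A) / n_bats
                cand = [min(max(x_star[j] + rng.uniform(-1, 1) * A_mean, lb), ub)
                        for j in range(dim)]
            f_cand = f(cand)
            if rng.random() < A[i] and f_cand < fit[i]:         # accept
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                                   # Eq. (5)
                r[i] = r0[i] * (1.0 - math.exp(-gamma * t))     # Eq. (6)
                if f_cand < f_star:
                    x_star, f_star = cand[:], f_cand
    return x_star, f_star
```

On a simple sphere function the sketch steadily improves on the random initial population, which is enough to illustrate the update loop.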
4. DE algorithm
The DE algorithm was suggested by Storn and Price in 1995, and it is a simple and powerful method that is suited to solving optimization problems (Storn & Price, 1997). It is a population-based algorithm that uses crossover, mutation, and selection operators. Important parameters of the algorithm are:

• Population size NP;
• Crossover rate CR;
• Scaling factor F.
The DE algorithm is initialized with NP randomly generated D-dimensional real-valued parameter vectors, which represent candidate solutions for the optimization problem. After the population is generated, crossover, mutation, and selection operations are applied to the individuals, to obtain the optimal result, until the termination condition is reached. Let G = 0, 1, 2, 3, …, G_max indicate the successive generations; the i-th individual of the present generation is expressed as follows:

X_{i,G} = (X_{i,G}^1, X_{i,G}^2, X_{i,G}^3, …, X_{i,G}^D), j = 1, 2, 3, …, D  (7)
4.1. Mutation operation
In the DE algorithm, the mutant vector V_{i,G} is generated by using a mutation operator for each X_{i,G} in each generation. The terms X_{r1,G}, X_{r2,G}, X_{r3,G}, X_{r4,G}, X_{r5,G} are random individuals selected from the population, and F is a scaling factor that determines how far the search operation will be performed. The term X_{best,G} is the best individual of the present population. Accordingly, the approaches used to generate the mutant vector are shown in Eqs. (8) to (14) (Epitropakis, Tasoulis, Pavlidis, Plagianakos & Vrahatis, 2011; Qin & Suganthan, 2005).
1. DE/rand/1: V_{i,G} = X_{r1,G} + F · (X_{r2,G} − X_{r3,G})  (8)

2. DE/best/1: V_{i,G} = X_{best,G} + F · (X_{r1,G} − X_{r2,G})  (9)

3. DE/rand/2: V_{i,G} = X_{r1,G} + F · (X_{r2,G} − X_{r3,G}) + F · (X_{r4,G} − X_{r5,G})  (10)

4. DE/best/2: V_{i,G} = X_{best,G} + F · (X_{r1,G} − X_{r2,G}) + F · (X_{r3,G} − X_{r4,G})  (11)

5. DE/current-to-best/1: V_{i,G} = X_{i,G} + F · (X_{best,G} − X_{i,G}) + F · (X_{r1,G} − X_{r2,G})  (12)

6. DE/current-to-rand/1: V_{i,G} = X_{i,G} + F · (X_{r1,G} − X_{i,G}) + F · (X_{r2,G} − X_{r3,G})  (13)

7. DE/rand-to-best/2: V_{i,G} = X_{i,G} + F · (X_{best,G} − X_{i,G}) + F · (X_{r1,G} − X_{r2,G}) + F · (X_{r3,G} − X_{r4,G})  (14)
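A few of these mutation rules can be written out directly. The sketch below implements Eqs. (8), (9), and (12) only; the function name and interface are illustrative, not from the paper.

```python
import random

def mutate(pop, i, best, F, strategy, rng):
    """Sketch of three DE mutant-vector rules: DE/rand/1 (Eq. 8),
    DE/best/1 (Eq. 9), and DE/current-to-best/1 (Eq. 12).
    pop is a list of vectors; r1, r2, r3 are distinct indices != i."""
    r1, r2, r3 = rng.sample([k for k in range(len(pop)) if k != i], 3)
    x_i, x_r1, x_r2, x_r3 = pop[i], pop[r1], pop[r2], pop[r3]
    D = len(x_i)
    if strategy == "rand/1":                              # Eq. (8)
        return [x_r1[j] + F * (x_r2[j] - x_r3[j]) for j in range(D)]
    if strategy == "best/1":                              # Eq. (9)
        return [best[j] + F * (x_r1[j] - x_r2[j]) for j in range(D)]
    if strategy == "current-to-best/1":                   # Eq. (12)
        return [x_i[j] + F * (best[j] - x_i[j]) + F * (x_r1[j] - x_r2[j])
                for j in range(D)]
    raise ValueError(strategy)
```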
4.2. Crossover operation
Binomial crossover is applied to each pair of mutant vector V_{i,G} and its related target vector X_{i,G}. The trial vector U_{i,G} is thereby obtained in accordance with the following:

U_{i,G}^j = V_{i,G}^j, if rand_{i,j}[0, 1] ≤ CR or j = j_rand; X_{i,G}^j, otherwise; j = 1, 2, …, D  (15)
where CR is a crossover control parameter, which can assume values between 0 and 1 and indicates the probability of generating a parameter of the trial vector from the mutant vector. A randomly selected integer, j_rand ∈ [1, D], ensures that the trial vector (U_{i,G}) differs from the target vector (X_{i,G}) by a minimum of one dimension.
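The role of j_rand becomes clear in code. This is a minimal sketch of the binomial crossover of Eq. (15); the function name is illustrative.

```python
import random

def binomial_crossover(target, mutant, CR, rng):
    """Sketch of Eq. (15): dimension j takes the mutant component with
    probability CR; dimension j_rand always comes from the mutant, so
    the trial differs from the target in at least one dimension."""
    D = len(target)
    j_rand = rng.randrange(D)
    return [mutant[j] if (rng.random() <= CR or j == j_rand) else target[j]
            for j in range(D)]
```

Even with CR = 0 the trial vector inherits one mutant component through j_rand, which is exactly the guarantee described above.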
4.3. Selection operation
The selection operation determines whether the trial vector (U_{i,G}) or the target vector (X_{i,G}) survives. The former is transferred to the next generation if it has a better fitness value than the target vector.

X_{i,G+1} = U_{i,G}, if f(U_{i,G}) ≤ f(X_{i,G}); X_{i,G}, otherwise  (16)
Algorithm 2 Pseudo-code of the DE Algorithm.
1. Generate the initial population (X_{i,G}, i = 1, 2, …, NP).
2. While G ≤ G_max
3. For i = 1:NP
4. Generate the mutant vector (V_{i,G}) according to an approach selected from Eqs. 8–14.
5. Generate the trial vector (U_{i,G}) according to Eq. (15).
6. If f(U_{i,G}) ≤ f(X_{i,G})
7. X_{i,G+1} = U_{i,G}
8. End If
9. End For
10. G = G + 1
11. End While
12. End
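Putting mutation, crossover, and selection together, Algorithm 2 can be sketched as a self-contained DE/rand/1/bin loop. The parameter defaults (NP, F, CR, generation count) are illustrative assumptions, not the paper's settings.

```python
import random

def de(f, dim, lb, ub, NP=20, F=0.5, CR=0.9, G_max=100, seed=0):
    """Minimal DE/rand/1/bin sketch following Algorithm 2."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for G in range(G_max):
        for i in range(NP):
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            # mutation, Eq. (8)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                 for j in range(dim)]
            # binomial crossover, Eq. (15), with bound clamping
            j_rand = rng.randrange(dim)
            u = [min(max(v[j], lb), ub)
                 if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(dim)]
            # selection, Eq. (16)
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    b = min(range(NP), key=lambda i: fit[i])
    return pop[b], fit[b]
```

On a 5-dimensional sphere function this sketch typically converges to a small residual within 100 generations.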
5. Proposed method
In optimization algorithms, it is necessary to establish a suitable balance between exploitation and exploration abilities, and the r and A parameters in the BA are effective in determining this. In the algorithm, as the iteration progresses, the value of r increases while A decreases. This means that exploitation will be implemented in the first steps of the iteration, and exploration will be applied in the subsequent stages (Yilmaz & Kucuksille, 2013). However, in large-scale problems, and those with a wide search space, this characteristic causes the algorithm to get trapped in local minima. In this study, some modifications were made to the algorithm in order to overcome these structural problems, and the resulting modified BA (MBA) algorithm was subsequently combined with the DE to establish a hybrid system. Within this structure, the mutation phase of the DE algorithm utilized the self-adaptive approach proposed by Qin and Suganthan (Qin & Suganthan, 2005).
5.1. Modified bat algorithm (MBA)
When the position update equation of the standard BA is examined, it is observed that bats in the population converge rapidly towards the bat producing the best solution in the system. This characteristic weakens the global search capability, and two new position determination strategies were therefore developed, which are represented by Eqs. (17) and (18). In the first strategy, the method proposed by Chakri et al. was extended, and a weight coefficient (a) was added to the equation (Chakri et al., 2017); thus, the effect of the best solution in determining the new position was reduced. In the second strategy, the search strategy of the ABC algorithm was utilized, and the new position was ascertained by using the difference between the random dimension of a randomly selected neighboring bat and that of the existing bat. As a result of these novelties, the speed of convergence of individuals towards the best solution was reduced and population diversity was increased. Let j = 1, 2, …, D, where j is a randomly selected dimension.
x_i^{t+1} = x_i^t + a · (x_best − x_i^t) · f1 + (x_i^t − x_{k1}^t) · f2, if f(x_{k1}^t) < f(x_i^t);
x_{i,j}^{t+1} = x_{i,j}^t + (rand − 0.5) · 2 · (x_{i,j}^t − x_{k2,j}^t), otherwise  (17)

f1 = f_min + (f_max − f_min) · rand
f2 = f_min + (f_max − f_min) · rand  (18)
where k1, k2 are random individuals selected from the population, x_best is the best global value, a is 0.7, f_min is 0, and f_max is 2. The value of a was chosen after a test that was performed to determine the optimum value of a on the classical functions in Experiment 1. In this test, the value of a was increased in the range [0.1, 1] with 0.1 intervals. For every incremented value of a, the functions were run again and the results were compared. According to the test, the optimum value of a was determined as 0.7.
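The two-branch update of Eqs. (17) and (18) can be sketched as follows, using the stated settings a = 0.7, f_min = 0, f_max = 2. The function name and interface are illustrative assumptions, and the single-dimension form of the second branch follows the textual description above.

```python
import random

def mba_position_update(x, fit, i, x_best, a=0.7, f_min=0.0, f_max=2.0,
                        rng=random):
    """Sketch of the MBA global position update, Eqs. (17)-(18).
    x: list of position vectors; fit: their fitness values;
    k1, k2 and dimension j are drawn at random."""
    D = len(x[i])
    k1, k2 = rng.sample([k for k in range(len(x)) if k != i], 2)
    f1 = f_min + (f_max - f_min) * rng.random()           # Eq. (18)
    f2 = f_min + (f_max - f_min) * rng.random()
    if fit[k1] < fit[i]:
        # first branch: damped pull toward x_best plus a term using
        # the difference with the fitter neighbor k1
        return [x[i][j] + a * (x_best[j] - x[i][j]) * f1
                + (x[i][j] - x[k1][j]) * f2 for j in range(D)]
    # second branch: ABC-style perturbation of one random dimension
    # using neighbor k2
    new = x[i][:]
    j = rng.randrange(D)
    new[j] = x[i][j] + (rng.random() - 0.5) * 2.0 * (x[i][j] - x[k2][j])
    return new
```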
The second modification was introduced to the local search strategy, whereby the w_i^t parameter in the position update equation (Eq. (19)) was exponentially decreased in accordance with Eq. (21), subject to the number of iterations. In Eq. (20), the initial value w_0 was selected according to the upper and lower bounds of the functions. The graph of w_i^t is given in Fig. 1; for this graph, the upper bound, lower bound, and iteration number were selected as 100, −100, and 100, respectively, and the w_i^t values were calculated according to Eq. (21). In Fig. 1, the red line shows the exponential decrease of w_i^t, and the blue line shows the effect of the "rand" parameter in Eq. (21). The graph shows that the "rand" parameter contributes to the diversity of the step size. Thus, according to Eq. (19), the new position determination initially occurred with large steps, which decreased in size in subsequent iterations. In this way, the problem of rapid convergence of bats towards the best solution was partially overcome.
x_i^{t+1} = x_i^t + ε · A^t · w_i^t  (19)

w_0 = (ub − lb) / 4  (20)

w_i^t = w_0 · rand · 2 · exp(−5 · t / t_max)  (21)
where ub is the upper bound of variable x_i, lb is the lower bound of variable x_i, t is the present iteration number, and t_max is the maximum number of iterations.
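The decaying step weight of Eqs. (20) and (21) is straightforward to reproduce; this small sketch (illustrative function name) generates the kind of curve shown in Fig. 1.

```python
import math
import random

def w_t(t, t_max, ub, lb, rng=random):
    """Sketch of the local-search step weight, Eqs. (20)-(21): w starts
    from w0 = (ub - lb)/4 and decays exponentially over the iterations,
    with a random factor that diversifies the step size."""
    w0 = (ub - lb) / 4.0                                      # Eq. (20)
    return w0 * rng.random() * 2.0 * math.exp(-5.0 * t / t_max)  # Eq. (21)
```

With ub = 100, lb = −100, and t_max = 100 (the Fig. 1 settings), the envelope falls from 100 at t = 0 to about 100·e^(−5) ≈ 0.67 at t = 100.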
Moreover, the r and A parameters were arranged in accordance with the proposals put forward by Chakri et al. and decreased linearly, as shown in Eqs. (22) and (23) (Chakri et al., 2017).
r_i^t = (r_first − r_end) / (1 − t_max) · (t − t_max) + r_end  (22)

A_i^t = (A_first − A_end) / (1 − t_max) · (t − t_max) + A_end  (23)
where r_first = 0.1, r_end = 0.7, A_first = 0.9 and A_end = 0.6.
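Both schedules in Eqs. (22) and (23) are the same linear interpolation, moving from the "first" value at t = 1 to the "end" value at t = t_max, which can be verified with a few lines (helper name is illustrative):

```python
def linear_schedule(t, t_max, first, end):
    """Sketch of Eqs. (22)-(23): linear move from `first` at t = 1
    to `end` at t = t_max."""
    return (first - end) / (1 - t_max) * (t - t_max) + end

# Eq. (22) and Eq. (23) with the stated constants
r_t = lambda t, t_max: linear_schedule(t, t_max, 0.1, 0.7)
A_t = lambda t, t_max: linear_schedule(t, t_max, 0.9, 0.6)
```

Note that, unlike the standard BA, here r decreases toward larger values is reversed: r grows from 0.1 to 0.7 while A shrinks from 0.9 to 0.6 over the run.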
The pseudo-code for the MBA is given in Algorithm 3 (between lines 13 and 33).
5.2. Hybrid system (MBADE)
In this study, various modifications were initially applied to the BA to develop the MBA. Subsequent testing of the latter, using classic benchmark functions, CEC 2005 functions, large-scale CEC 2010 functions, and CEC 2011 real-world problems, revealed that the performance of the algorithm decreased as the complexity and dimensions of the problem increased. The DE algorithm has a powerful exploitation ability that achieves information exchange among randomly selected individuals based on different strategies, and its global exploration is also powerful, although its convergence speed is low. For this reason, the MBADE algorithm was proposed, in which the MBA was used in conjunction with the DE algorithm to support the exploration-exploitation balance.
The logic of the self-adaptive DE (SaDE) was adopted for the mutation selection strategy in the DE algorithm used in the hybrid system (Qin & Suganthan, 2005). Accordingly, two mutation strategies (DE/rand/1 and DE/current-to-best/1) were selected and, after a learning period of 50 iterations, the mutation strategy to be applied to the individual was determined according to a probability value based on their previous performance. The aim of the learning period is to examine the contribution of the selected mutation strategies to performance and to determine a probability value that is proportional to the contribution of each mutation strategy to the solution. After the learning period, the strategy to be applied to the individual was decided according to this probability value (P_m). Thus,
Algorithm 3 Pseudo-code of the Hybrid Algorithm (MBADE).
1. Generate initial population (x_i, i = 1, 2, …, NP) // NP is population size
2. Generate all parameters used for DE and MBA and assign initial values.
3. Find the best value (X_best) of the population.
4. While (Z < max cycle number)
5. Define the counters used in selection of the search method (NsMBA = 1, NsDE = 1, NfMBA = 1, NfDE = 1).
6. Define the counters used in selection of the mutation strategy (ms1 = 1, ms2 = 1, mf1 = 1, mf2 = 1).
7. Define the counter used in the learning process (teach_counter = 1).
8. Define an array for the learning process (Learning_set[1,50]).
9. Define the initial values of r_first, r_end, A_first and A_end.
10. For t = 1:t_max
11. Calculate probability (P_MBA) for the search method selection according to Eq. (25).
12. For i = 1:NP
13. If (rand < P_MBA) // The MBA algorithm is selected. //
14. Select k1, k2 random individuals.
15. Generate f1 and f2 according to Eq. (18).
16. Generate a trial solution (x_i^{t+1}) according to Eq. (17).
17. If (t == 1)
18. Calculate r_i^t and A_i^t values according to Eqs. 22 and 23, respectively.
19. End If
20. If (rand > r_i^t)
21. Calculate the w_i^t value according to Eq. (21).
22. Generate a trial solution (x_i^{t+1}) according to Eq. (19).
23. End If
24. If (rand < A_i^t) and (f(x_i^{t+1}) < f(x_i^t))
25. Accept the new result (x_i^{t+1}).
26. Increase r_i^t and decrease A_i^t according to Eqs. 22 and 23, respectively.
27. NsMBA = NsMBA + 1;
28. If (f(x_i^{t+1}) < f(X_best))
29. Update X_best
30. End If
31. Else
32. NfMBA = NfMBA + 1;
33. End If
34.
35. Else // The DE algorithm is selected. //
36. Select three individuals randomly from the population (r1, r2, r3).
37. If (teach_counter <= 50) // Learning process //
38. If (Learning_set(1, teach_counter) <= 0.5)
39. Generate a mutant vector (V_{i,t}) according to Eq. (8).
40. m_strategy = 1;
41. Else
42. Generate a mutant vector (V_{i,t}) according to Eq. (12).
43. m_strategy = 2;
44. End If
45. teach_counter = teach_counter + 1;
46. Else
47. Calculate probability (P_m) for the mutation strategy selection according to Eq. (24).
48. If (rand < P_m)
49. Generate a mutant vector (V_{i,t}) according to Eq. (8).
50. m_strategy = 1;
51. Else
52. Generate a mutant vector (V_{i,t}) according to Eq. (12).
53. m_strategy = 2;
54. End If
55. End If
56. Generate the trial vector (U_{i,t}) according to Eq. (15).
57. If (f(U_{i,t}) < f(x_i^t))
58. Accept the new result (U_{i,t}).
59. If (f(U_{i,t}) < f(X_best))
60. Update X_best
61. End If
62. If (m_strategy == 1)
63. ms1 = ms1 + 1;
64. Else
65. ms2 = ms2 + 1;
66. End If
67. NsDE = NsDE + 1;
68. Else
69. If (m_strategy == 1)
70. mf1 = mf1 + 1;
71. Else
72. mf2 = mf2 + 1;
73. End If
74. NfDE = NfDE + 1;
75. End If
76. End If
77. End For
78. If (mod(t, 2500) == 0) // reset the effect of previous experiences //
79. NsDE = 1, NfDE = 1, NsMBA = 1, NfMBA = 1;
80. teach_counter = 1;
81. ms1 = 1, ms2 = 1, mf1 = 1, mf2 = 1;
82. End If
83. End For
84. Z = Z + 1;
85. End While
86. Display the results.
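Eqs. (24) and (25) are not reproduced in this excerpt. As a hedged illustration of how the success/failure counters in Algorithm 3 can be turned into a selection probability, the sketch below uses the SaDE form (Qin & Suganthan, 2005), which the hybrid system states it adopts; the exact formulas used in the paper may differ.

```python
def success_probability(ns1, nf1, ns2, nf2):
    """Illustrative SaDE-style probability of choosing option 1 over
    option 2, driven by success (ns) and failure (nf) counters such as
    ms1/mf1/ms2/mf2 or NsMBA/NfMBA/NsDE/NfDE in Algorithm 3. The
    probability grows with the relative success ratio of option 1."""
    p1 = ns1 * (ns2 + nf2)
    p2 = ns2 * (ns1 + nf1)
    return p1 / (p1 + p2)
```

With equal counters the probability is 0.5; as one option accumulates successes its selection probability rises, which matches the stated goal of increasing the probability of applying the more successful algorithm.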
Fig. 1. Change of the parameter w_i^t over the iterations.