Symmetries and boundary conditions of integrable nonlinear partial differential equations

SYMMETRIES AND BOUNDARY CONDITIONS OF

INTEGRABLE NONLINEAR PARTIAL DIFFERENTIAL

EQUATIONS

A DISSERTATION

SUBMITTED TO THE DEPARTMENT OF MATHEMATICS AND THE INSTITUTE OF ENGINEERING AND SCIENCES

OF BILKENT UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

By

T. Burak Gürel

September, 1999


I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Doctor of Philosophy.

Prof. Metin Gürses (Principal Advisor)

I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Doctor of Philosophy.

Prof. Okay Çelebi


I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Doctor of Philosophy.

Prof. Varga Kalantarov

I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Doctor of Philosophy.

Approved for the Institute of Engineering and Sciences:

Prof. Mehmet Baray


ABSTRACT

SYMMETRIES AND BOUNDARY CONDITIONS OF

INTEGRABLE NONLINEAR PARTIAL DIFFERENTIAL

EQUATIONS

T. Burak Gürel
Ph.D. in Mathematics
Advisor: Prof. Metin Gürses

September, 1999

The solution of initial-boundary value problems for integrable nonlinear partial differential equations has been one of the most important problems in integrable systems; at the same time, such problems have proved to be very hard, especially when considered on half-lines or bounded intervals. The proper generalization of the Inverse Spectral Transform, or of any other possible method, so that it applies on half-lines or bounded intervals is a complicated problem in itself. But another of the obstacles is the choice of suitable boundary conditions. In this direction, there is a pioneering work of Sklyanin which motivated us to consider the problem of establishing boundary conditions for integrable partial differential equations. To this end, we try to develop a way of finding boundary conditions which would, in turn, be suitable for certain solution techniques. This could be done in many different ways, depending upon what is understood by integrability. Throughout this work, we use the word integrability in the sense of generalized symmetries, which has proved to be one of the most efficient approaches. We first give a proper definition of the compatibility of a boundary condition with a symmetry. Then we interpret the well-known tools of the symmetry approach in a different manner. These tools include the recursion operators and the symmetries themselves. After some technical theorems, we pass to examples and consider many integrable equations. Furthermore, we give a generalization of the method which makes use of non-homogeneous symmetries.

Finally, we finish with some discrete equations, including the 2D Toda lattice.

It is crucial to note that all the boundary conditions already known to be compatible with the integrability property of the original equation pass our criterion of compatibility.

Keywords and Phrases: Integrability, symmetry, recursion operator, boundary condition, lattice equation.


ÖZ

SYMMETRIES AND BOUNDARY CONDITIONS OF INTEGRABLE NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS

T. Burak Gürel
Ph.D. in Mathematics
Advisor: Prof. Metin Gürses

September, 1999

Initial-boundary value problems for integrable nonlinear differential equations are very important, yet they are quite hard, especially when considered on half-lines or bounded intervals. Besides the generalization of the Inverse Spectral Transform, or of any other method in use, in this direction, the choice of suitable boundary conditions is also part of this difficulty. Inspired by Sklyanin's pioneering work on suitable boundary conditions, we used symmetry techniques to develop a method in this direction. To this end, we first gave the definition of a boundary condition being compatible with generalized symmetries. We then developed new forms of the recursion operator and of the symmetries that are better suited to our purpose. After proving the necessary theorems, we applied our method to a great many integrable differential equations, and we generalized the method by also using non-homogeneous symmetries. Finally, we treated discrete equations, and examined the two-dimensional discrete Toda equation, one of the most important equations in this class. Thanks to the test we developed, we found new boundary conditions for many of the equations we examined.

We believe that an important point validating our approach is that all the boundary conditions previously known to be compatible with the integrability property pass our test.

Keywords and Phrases: Integrability, symmetry, recursion operator, boundary condition, discrete equation.


ACKNOWLEDGEMENTS

In the first place, I would like to express my deep gratitude to my supervisor Prof. Metin Gürses for his constant and continual support, not only for this thesis but also for other aspects of my life.

Special thanks go to Prof. Ismagil Habibullin who, in a sense, trained me during my early times in the field, and has collaborated with me since then.

I am also indebted to the members of the examination committee who have read this thesis and commented on it. I would also like to acknowledge the valuable comments of Prof. Allan P. Fordy on this thesis.

I wish to thank Bediz, my wife, who has had a tremendous effect on my life, and with whom the obscured beauties have been revealed, for everything she has done for us.

I am also grateful to my friends, my sister and my parents, with whom I have shared (generally) good and (occasionally) bad times for many years.

Finally, I gratefully acknowledge the financial support of Bilkent University and TÜBİTAK over my PhD years.


Table of Contents

1 Introduction
1.1 History and discovery of solitons
1.2 Concept of integrability
1.3 Different approaches to integrability
1.4 Boundary conditions and integrability

2 Symmetries of Differential Equations
2.1 Lie symmetries
2.2 Generalized symmetries
2.3 Recursion operators

3 Integrable Boundary Conditions
3.1 Basic definitions and settings
3.2 Main theorems
3.3 The Burgers equation and the uniqueness problem
3.4 The KdV equation
3.5 The modified KdV equation
3.6 The Harry-Dym equation

4 Non-homogeneous Symmetries and Boundary Conditions
4.1 The KdV equation
4.2 The mKdV, pKdV, mpKdV and cKdV equations
4.3 Hyperbolic-type equations

5 Nonlinear Lattice Equations
5.1 The Volterra lattice
5.2 The 2D Toda lattice

Chapter 1

Introduction

In this chapter, we give a brief history of solitons and integrability, and hints of various approaches to the problem of integrability, symmetry approach being our main point of view. It is natural to start the subject with some historical remarks about the discovery and understanding of the soliton phenomenon.

1.1

History and discovery of solitons

The first observation of a solitary wave was in 1834, and it was not appreciated until the mid-1960s. This first observation was made by John Scott Russell, a naval engineer, from horseback. He realized that a water wave, pushed forward by a boat on the Union Canal, Scotland, continued its motion for a remarkably long time without losing either its amplitude or its speed. This was Russell's first interview with that singular and beautiful phenomenon, as Russell himself described it [15]. Russell devised and regularly performed experiments which reproduced these waves in the laboratory. Since Russell's results were all phenomenological, with no theoretical derivation, there was a good deal of controversy at the time between Russell and leading scientists such as Airy and Stokes [23]. Despite some attempts by Russell to guess the analytical formula for the wave profile, his observation went unexplained in his own lifetime. It had to wait until 1895 for its initial theoretical confirmation.

In 1895, Korteweg and de Vries derived an equation for the propagation of waves in one direction on the surface of a shallow channel [44]:

$$\frac{\partial \eta}{\partial t} = \frac{3}{2}\sqrt{\frac{g}{l}}\,\frac{\partial}{\partial x}\left(\frac{1}{2}\eta^{2} + \frac{2}{3}\alpha\eta + \frac{1}{3}\sigma\,\frac{\partial^{2}\eta}{\partial x^{2}}\right),$$

where $\eta(x,t)$ represents the (small) elevation of the surface of the water above the normal depth $l$ of the canal, $\alpha$ is a small constant, and $\sigma = l^{3}/3 - Tl/\rho g$, with $T$ the surface tension and $\rho$ the density of the fluid. With a rescaling of variables,

$$\eta = a u, \qquad \xi = b x, \qquad \tau = c t,$$

this can be written as

$$u_{\tau} = u_{\xi\xi\xi} + 6 u u_{\xi} + u_{\xi},$$

for suitable $a, b, c$, where suffices denote partial derivatives. After a Galilean transformation,

$$U = u, \qquad x = \xi + \tau, \qquad t = \tau,$$

we obtain the KdV (Korteweg-de Vries) equation in its famous form, which we shall use throughout this work:

$$U_{t} = U_{xxx} + 6UU_{x}. \tag{1.1}$$

We can easily find the travelling wave solutions of this equation:

$$U(x,t) = \varphi(\zeta), \qquad \zeta = x + ct,$$

so that $U_{t} = c\varphi'$, $U_{x} = \varphi'$, $U_{xxx} = \varphi'''$, where $c$ is an arbitrary constant representing the speed of the wave. Then $\varphi(\zeta)$ satisfies the ordinary differential equation

$$\varphi''' + 6\varphi\varphi' - c\varphi' = 0,$$

which can be solved to give

$$\varphi(\zeta) = \frac{c}{2}\,\mathrm{sech}^{2}\!\left(\frac{\sqrt{c}}{2}(\zeta - \delta)\right), \qquad \zeta = x + ct,$$

where the arbitrary constant $\delta$ is called the phase, determining the position of the initial peak. This solitary wave solution of the KdV equation, for $c = 1$, is the same as the wave observed by Russell in 1834 [23].
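The travelling-wave computation above can be verified by machine. The following SymPy sketch (variable names and the sample evaluation point are ours) substitutes the sech-squared profile into equation (1.1) and checks that the residual vanishes:

```python
import sympy as sp

# Check that phi(zeta) = (c/2) sech^2( (sqrt(c)/2)(zeta - delta) ), with
# zeta = x + c t, solves the KdV equation (1.1): U_t = U_xxx + 6 U U_x.
x, t, c, delta = sp.symbols('x t c delta', positive=True)

zeta = x + c * t
U = c / 2 * sp.sech(sp.sqrt(c) / 2 * (zeta - delta)) ** 2

residual = sp.diff(U, t) - sp.diff(U, x, 3) - 6 * U * sp.diff(U, x)

# Numerical spot-check at an arbitrary rational point; since the profile is
# an exact solution, the evaluation shows only round-off.
num = residual.subs({c: sp.Rational(9, 4), delta: sp.Rational(1, 3),
                     x: sp.Rational(2, 5), t: sp.Rational(3, 7)})
assert abs(sp.N(num, 30)) < 1e-20
```

A numerical spot-check is used instead of full symbolic simplification because it is fast and already catches any sign or coefficient error in the profile.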


After the work of Korteweg and de Vries [44] came the famous Fermi-Pasta-Ulam paper [18] of 1955, in which they showed that some nonlinear systems behave like linear systems. In a nonlinear system one would expect the nonlinear interactions between the modes to distribute the energy of the system throughout all of the modes, whereas the energy in each of the normal modes of a linear system would be constant. In their paper, Fermi, Pasta and Ulam showed that this is not true for some nonlinear systems, which was, at the time, regarded as a strange result. The KdV equation, being nonlinear, was one of these strange equations. Another such equation was the Boussinesq equation, derived by Boussinesq in 1872 [11], which shares the same property with the KdV equation. The understanding of the whys of this phenomenon had to wait, again, until 1965.

The KdV equation became famous after the work of Zabusky and Kruskal in 1965 [70]. They were studying the Fermi-Pasta-Ulam problem, with the KdV equation as one of the continuum limits of the problem. They numerically simulated the solutions of the KdV equation with periodic boundary conditions. They observed that as time evolved the behaviour of the waves became like that of solitary wave solutions, since the nonlinear term $UU_x$ and the dispersive term $U_{xxx}$ began to balance each other. Because of this balance between nonlinearity and dispersion, the waves achieved a steady amplitude, which is the characteristic of a solitary wave. But the stunning result came next. As the waves travelled, the larger ones, being faster, caught up with the smaller ones and they underwent a fully nonlinear interaction. Even though a nonlinear interaction took place, these waves retained their height, width and speed, undergoing nothing but a phase shift. Because of the particle-like nature of these interacting solitary waves, Zabusky and Kruskal coined the name soliton to describe them.
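The shape-preserving behaviour they observed can be reproduced numerically. The sketch below integrates equation (1.1) on a periodic domain with a Fourier pseudospectral discretization and a classical RK4 time step; this is an illustration in the spirit of their experiment, not their original finite-difference scheme, and all grid and step-size choices are ours:

```python
import numpy as np

# Periodic pseudospectral integration of U_t = U_xxx + 6 U U_x (eq. (1.1)),
# checked against the exact one-soliton solution.  Grid, domain and time
# step are arbitrary illustrative choices.
N, L = 128, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # spectral wavenumbers
mask = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()    # 2/3-rule dealiasing

def rhs(u):
    """Spectral evaluation of U_xxx + 6 U U_x."""
    uh = np.fft.fft(u) * mask
    ux = np.real(np.fft.ifft(1j * k * uh))
    uxxx = np.real(np.fft.ifft((1j * k) ** 3 * uh))
    return uxxx + 6 * u * ux

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def soliton(x, t, c=1.0):
    """Exact solitary wave (c/2) sech^2( (sqrt(c)/2)(x + c t) )."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x + c * t)) ** 2

dt, T = 1e-4, 0.5
u = soliton(x, 0.0)
for _ in range(int(round(T / dt))):
    u = rk4_step(u, dt)

err = np.max(np.abs(u - soliton(x, T)))   # the soliton keeps its shape
```

With two solitons of different speeds in the initial data, the same loop exhibits the clean nonlinear interaction described above; the single-soliton run is kept here only because it can be compared against the exact solution.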

Stimulated by this numerical experiment, Kruskal and his co-workers carried out an analytic study of the KdV equation [27], which culminated in the discovery of a new method for solving a certain class of nonlinear partial differential equations. This method, which is called the Inverse Spectral Transform (IST) method, is a nonlinear generalization of the Fourier Transform method, which can be applied to linear partial differential equations. This method was used in constructing the exact solutions which represented interacting solitons, and gave a solution to the general initial value problem with special boundary conditions. After the use of the IST in solving the KdV equation, many other equations followed, and it was understood that it was not only the KdV equation that exhibited soliton behaviour; gradually this development created a vast area of research in mathematical physics. Not only were many other new equations found, but also more direct and simpler ways of obtaining such special solutions were developed.

It is important and necessary to note that not all equations which have solitary wave solutions have soliton solutions. Hence, the difference between a solitary wave and a soliton must be understood.

Another important remark before ending the section is that the differential equations studied from this point of view arose in a great variety of applications: the physical and biological sciences, engineering, computer architecture, protein dynamics, optical fibre technology and string theory, to name just a few. Also, to solve the problems in soliton theory, a great number of new mathematical techniques have had to be developed and these, in turn, stimulated a tremendous amount of research in mathematics, such as in quantum groups, loop groups, algebraic geometry and differential geometry.

1.2

Concept of integrability

Integrability of an equation, in the general sense, means finding explicit solutions to the initial value problem with suitable boundary conditions. Classically, this has been done in two ways: using explicit transformations to map the initial nonlinear equation to a linear equation, or employing the related isospectral eigenvalue problem [19].

The first way is rather trivial and often less useful. The typical equation to which the first method applies is the Burgers equation:

$$u_{t} = u_{xx} + 2uu_{x}. \tag{1.2}$$

This equation is linearizable by an explicit transformation, namely the Cole-Hopf transformation:

$$u = \frac{v_{x}}{v}. \tag{1.3}$$

The Cole-Hopf transformation (1.3) maps the Burgers equation (1.2) onto the heat equation:

$$v_{t} = v_{xx},$$

which is linear. This type of integrability is often referred to as C-integrability [12]. The second way is more complicated and uses techniques from spectral theory, linear integral equations, complex analysis, algebraic geometry, etc. The KdV equation can be considered the prototype example, being the first equation to which this method was applied [27].
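The linearization is easy to verify by machine. The SymPy sketch below pushes a heat-equation solution through (1.3) and checks the Burgers equation (1.2); the seed solution $v = 1 + e^{kx + k^2 t}$ is our choice, not from the text:

```python
import sympy as sp

# Verify that the Cole-Hopf transformation u = v_x / v maps a solution of
# the heat equation v_t = v_xx to a solution of the Burgers equation (1.2),
# u_t = u_xx + 2 u u_x.
x, t, kk = sp.symbols('x t k')

v = 1 + sp.exp(kk * x + kk**2 * t)
assert sp.simplify(sp.diff(v, t) - sp.diff(v, x, 2)) == 0   # heat equation

u = sp.diff(v, x) / v                                       # Cole-Hopf image
burgers = sp.diff(u, t) - sp.diff(u, x, 2) - 2 * u * sp.diff(u, x)
assert sp.simplify(burgers) == 0
```

Any other heat-equation solution, or a sum of such exponentials, passes the same check; the one-exponential seed in fact produces the one-soliton-like shock profile of Burgers.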

Equations which are exactly solvable exhibit remarkable properties: they possess infinitely many symmetries, infinitely many conserved densities (if the model is conservative), Bäcklund transformations, nontrivial prolongation structures, the Painlevé property (perhaps after a suitable change of variables), etc. So we can naturally raise the following questions:

1. Is there a universal property defining integrability?
2. Is there a test for integrability?
3. Can we give a proper definition of integrability, in accordance with questions 1 and 2?

Of course, there have been many attempts to answer the above questions, but none of them can be considered fully successful. Nonetheless, they all underline certain remarkable properties of the nonlinear partial differential equation under examination. We give a brief overview of some methods dealing with the problem of integrability in the next section.

We have to underline a very basic behaviour of integrable equations in the light of the work of Fermi-Pasta-Ulam. Even though they are nonlinear equations, they exhibit the behaviour of linear equations. Hence we can state that linearization lies at the heart of integrability [59].


1.3

Different approaches to integrability

Here we shall give a rough introduction to different understandings of integrability. Though the interrelations between these approaches are beyond the scope of this text, we try to show by examples the inadequate sides of these tests. Moreover, we explain the symmetry test in slightly more detail than the others, but leave its technical definitions and calculations to the next chapter.

Historically speaking, conservation laws might be the first test of integrability. A conservation law is a differential equation which is satisfied identically on the solution manifold of a given equation. Let $F[u] = 0$ be a partial differential equation in one spatial dimension $x$, where $F[u]$ represents the dependence of $F$ upon $u$ and an arbitrary number of $x$ and $t$ derivatives of $u$. Then a conservation law is:

$$D_{t}T - D_{x}X = 0, \tag{1.4}$$

on the solution manifold $F[u] = 0$, where $T$ and $X$ are functions of $u$ and its derivatives, and $D_x$ and $D_t$ denote total derivatives with respect to $x$ and $t$ respectively. A fuller account of conservation laws may be found in [57]. If (1.4) holds then we call $T$ a conserved density and $X$ the corresponding flux. A conservation law is said to be trivial if the conserved density $T$ is a total derivative, i.e. $T \in \operatorname{Im} D_x$. The equation $F[u] = 0$ is integrable, in this sense, if it possesses infinitely many conservation laws. Examples of equations integrable by means of the IST or a change of variables prove that this condition is too strong. Namely, the Burgers equation has only one nontrivial conservation law, determined by the density $T = u$ and the flux $X = u_x + u^2$, though it is linearizable and hence exactly solvable.
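For the Burgers example this is a one-line calculation: on solutions, $D_t T = u_t = u_{xx} + 2uu_x = D_x(u_x + u^2) = D_x X$. The same computation as a short SymPy sketch (notation ours):

```python
import sympy as sp

# Check D_t T - D_x X = 0 for the Burgers equation u_t = u_xx + 2 u u_x
# with density T = u and flux X = u_x + u^2, working on the solution
# manifold by replacing u_t with the right-hand side of the equation.
x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

rhs = sp.diff(u, x, 2) + 2 * u * sp.diff(u, x)      # u_t on solutions
T = u
X = sp.diff(u, x) + u**2

DtT = sp.diff(T, t).subs(sp.diff(u, t), rhs)        # D_t T on solutions
DxX = sp.diff(X, x)

assert sp.expand(DtT - DxX) == 0
```

The same pattern (differentiate, substitute the equation for $u_t$, compare) tests any candidate density-flux pair, so it is a convenient way to hunt for further conservation laws of a given equation.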

Another main test of integrability is Painlevé analysis. The Painlevé property was originally defined for ordinary differential equations: an ordinary differential equation possesses the Painlevé property if its general solution is single-valued, except perhaps at fixed critical points. This test was extended to partial differential equations by two different approaches, the details of which are not our subject matter.

The first was done by Ablowitz-Ramani-Segur in [2]-[4], and has been known as the ARS Painlevé test since then. They defined an ordinary differential equation to be of P-type if the only movable singularities of any of its solutions are poles. Then they conjectured that every ordinary differential equation obtained as a similarity reduction of an integrable partial differential equation is of P-type, perhaps after a transformation of variables. This conjecture clearly gives a necessary condition for integrability, and it is worth noting that no counterexample to it is known [59]. To illustrate its insufficiency, we can consider the modified Benjamin-Bona-Mahoney equation. The only known non-constant similarity reduction of this equation is the travelling wave reduction, and the corresponding ordinary differential equation is of P-type, though this equation is known not to be integrable.

After the ARS Painlevé test, in 1983 Weiss-Tabor-Carnevale (WTC) gave another variant of the test which is directly applicable to partial differential equations [67]. Theoretically, the WTC test defines the Painlevé property in the same words as for ordinary differential equations. Practically, the algorithm is almost the same as the ARS algorithm, but instead of taking a singular point as in ARS, it is necessary to consider a singularity manifold $\phi = 0$ around which the Laurent series expansion is done. The WTC test proved to be a more efficient tool to test integrability because, for instance, the modified Benjamin-Bona-Mahoney equation does not have the WTC Painlevé property. However, it also has its own deficiencies. The well known Harry-Dym equation:

$$v_{t} = v^{3}v_{xxx},$$

which is integrable by the IST, does not pass the WTC test.

This test was later extended so that it could handle a larger class of equations. This new extended version is called the perturbative Painlevé test. We shall not consider this test in detail, but a complete discussion of it may be found in [14], [22] and [59].

There are other basic tests of integrability, such as Hirota's bilinear approach, the Hamiltonian approach, etc. But considering two basic methods shall be enough here, and we refer to [40] and [57] for detailed accounts of the other tests.

Finally, we roughly discuss the symmetry approach to the problem of integrability. Generalized symmetries first appeared in their present form in a paper by Noether [55]. In this paper, she showed the importance of generalized symmetries for the construction of conservation laws. Prior to her, Lie and Bäcklund also dealt with symmetries, though not in the most general form.

It was shown by Ablowitz-Kaup-Newell-Segur that for a given eigenvalue problem there exists a hierarchy of infinitely many equations associated with it [1]. This hierarchy is generated by the so-called squared eigenfunction operator of the linear eigenvalue problem. The squared eigenfunction operator is also linear. But the group-theoretical origins of this hierarchy were established by Olver in 1977 [56]. He showed that finding the hierarchy associated with a given equation is equivalent to finding the generalized symmetries of the given equation. He also interpreted the squared eigenfunction operator as an operator mapping symmetries onto symmetries. This suggested a fine characterization of recursion operators. Hence, Olver became the first to observe that certain integrable nonlinear partial differential equations possess infinitely many generalized symmetries. This observation revealed the existence of infinitely many generalized symmetries as a test for integrability. Since then there have been many attempts to establish algorithms for finding symmetries of differential equations and to compute recursion operators. Many important properties of recursion operators, which allow the existence of hierarchies of symmetries, were found. The algebraic structures of these hierarchies were well studied too.

The concept of formal symmetries, which is slightly different from that of symmetries, was developed by Shabat and his co-workers to solve certain classification problems. The survey [51] by Mikhailov-Shabat-Sokolov can be read for details of the symmetry approach to the classification problem. They classified certain classes of partial differential equations, for instance equations of Burgers and KdV type; see [51] and the references therein.

It is remarkable that there is, up to now, no counterexample to this integrability criterion, i.e. all the known integrable equations possess infinitely many generalized symmetries and vice versa. Hence, the following conjecture can be taken as a starting point:

Conjecture 1.1 If a system of partial differential equations possesses infinitely many (time independent) generalized symmetries, then it is an integrable system.


All the known examples, together with the works of the Russian school, have suggested another conjecture which simplifies the application of Conjecture 1.1 [19]:

Conjecture 1.2 If a system of N partial differential equations possesses a system of N generalized symmetries, then it has infinitely many generalized symmetries.

We have to note that the conjectures above are valid only in 1 + 1 dimensions: one spatial and one time dimension.

We shall stick to the definition of integrability in the sense of symmetry throughout this work.

1.4

Boundary conditions and integrability

When the IST was first applied to the KdV equation, Gardner-Greene-Kruskal-Miura used suitable boundary conditions. After this first attempt, many others also employed suitable boundary conditions to find the exact solutions of integrable nonlinear partial differential equations. For a long time, suitable boundary conditions have meant:

$$\partial_{x}^{\,i}u(x,t) \to 0 \quad \text{as} \quad |x| \to \infty, \qquad i = 0, 1, 2, \ldots. \tag{1.5}$$

Using these boundary conditions, many important solutions, such as the soliton solutions of various equations, have been found.

On the other hand, it is clear that the boundary conditions (1.5) do not let us consider boundary value problems on half-lines or bounded intervals. Despite this fact, there have been some attacks on such initial-boundary value problems, which showed that if both the initial and the boundary data are taken arbitrarily, the IST loses its power. Then the natural question arises: if the boundary conditions should be non-arbitrary, then what should be the criterion of this non-arbitrariness? This natural question has a natural answer indeed: we have to consider boundary conditions which are compatible with the integrability property of the differential equation.


Really, in 1987 Sklyanin demonstrated that there are such classes of boundary conditions for some integrable equations [62]. His work was based upon the R-matrix approach to integrability, which is quite different from our viewpoint, but is still a motivation for us to consider such problems.

After the work of Sklyanin, it is natural to think of putting symmetries instead of R-matrices as a test of integrability and to formulate the problem in terms of symmetries. It is this idea that we try to employ in this work. We first give the necessary definitions in a coherent way, then formulate the problem in accordance with these definitions, and finally give rigorous answers to the question of boundary conditions of an integrable equation. We claim that the boundary conditions we present here are suitable for solving the integrable equations on a half-line or a bounded interval, by means of the IST or finite gap integration, because they are compatible with the integrability of the differential equation.

Before going into the formulation of the problem, we give some technical preliminaries and basic concepts of the symmetry approach in the next chapter. These notions are heavily used in the following sections without further reminder.


Chapter 2

Symmetries of Differential Equations

In this chapter, we give the basic concepts of the symmetry approach in a more general form than is necessary for us. We start with Lie symmetries and continue with generalized symmetries. After giving these two kinds of symmetries, we finally deal with recursion operators, which are of special importance for generating generalized symmetries.

2.1

Lie symmetries

Consider a system of $N$ differential equations:

$$E_{\alpha}[x, u^{(n)}] = 0, \qquad \alpha = 1, 2, \ldots, N, \tag{2.1}$$

where $x = (x^{1}, \ldots, x^{p})$ are the independent and $u = (u^{1}, \ldots, u^{q})$ are the dependent variables, and for each $\alpha$ the equation (2.1) depends upon at most the $n$th order derivatives of $u(x)$. Here the vector function $u : X \subseteq \mathbb{R}^{p} \to U \subseteq \mathbb{R}^{q}$ has to be in the space of (at least) $n$ times differentiable functions, but we shall assume that it is smooth in the rest of the text. Hence, the vector $E = (E_{1}, E_{2}, \ldots, E_{N})$ can be viewed as a smooth map from the jet space $X \times U^{(n)}$ to some $N$ dimensional Euclidean space:

$$E : X \times U^{(n)} \to \mathbb{R}^{N}.$$

The system of differential equations (2.1) determines a sub-variety $S_{E}$:

$$S_{E} = \{(x, u^{(n)}) : E_{\alpha}[x, u^{(n)}] = 0,\ \forall \alpha = 1, 2, \ldots, N\} \subset X \times U^{(n)},$$


which is, in other words, the solution manifold of the system (2.1).

Let us start by giving the definition of a symmetry group which is obtained by the exponentiation of an infinitesimal symmetry generator. For this section, the book [57] by Olver is a very comprehensive source and we use this reference extensively.

Definition 2.1 (Symmetry group) A symmetry group of the system (2.1) is a local group of transformations $G$ acting on $M \subset X \times U$ with the property that whenever $u(x) \in S_{E}$ and $g \cdot u$ is defined for $g \in G$, then $\tilde{u}(x) = g \cdot u(x) \in S_{E}$.

This means that the actions of the members of the symmetry group of a system of differential equations transform solutions of the system into other solutions. On the other hand, if $V$ is an infinitesimal symmetry generator (it is a vector field by definition) of the system (2.1), then the group $G_{\delta} = \exp(\delta V)$ is a one-parameter symmetry group of the system (2.1). If we can give a criterion for finding the infinitesimal symmetry generators, it is going to help to find the symmetry groups.

Let $V$ be a vector field of the form:

$$V = \sum_{i=1}^{p} \xi^{i}(x,u)\,\frac{\partial}{\partial x^{i}} + \sum_{l=1}^{q} \eta^{l}(x,u)\,\frac{\partial}{\partial u^{l}}. \tag{2.2}$$

Now we can define what an infinitesimal Lie symmetry generator is:

Definition 2.2 (Infinitesimal Lie symmetry generator) If the vector field $V$ (2.2) satisfies:

$$\mathrm{pr}_{n}V\big(E_{\alpha}[x, u^{(n)}]\big) = 0, \qquad \alpha = 1, 2, \ldots, N, \tag{2.3}$$

whenever $(x, u^{(n)}) \in S_{E}$, then we call $V$ an infinitesimal Lie symmetry generator of the system (2.1).

Here $\mathrm{pr}_{n}V$ denotes the $n$th prolongation of $V$; it acts on the jet space $X \times U^{(n)}$ and is given by the formula:

$$\mathrm{pr}_{n}V = V + \sum_{l=1}^{q} \sum_{1 \le |J| \le n} \left( D_{J}Q^{l} + \sum_{i=1}^{p} \xi^{i} u^{l}_{J,i} \right) \frac{\partial}{\partial u^{l}_{J}}, \tag{2.4}$$

where $Q^{l} = \eta^{l} - \sum_{i=1}^{p} \xi^{i} u^{l}_{i}$ form the $q$-tuple $Q = (Q^{1}, \ldots, Q^{q})$, which is referred to as the characteristic of the vector field $V$ [57]. In the above formula, $J = (j_{1}, \ldots, j_{p})$, $j_{k} \in \mathbb{Z}_{+} \cup \{0\}$, represents all possible ordered multi-indices with $|J| = j_{1} + j_{2} + \cdots + j_{p} \le n$, and $D_{J}$ is the following total derivative operator:

$$D_{J} = D_{x^{1}}^{\,j_{1}} D_{x^{2}}^{\,j_{2}} \cdots D_{x^{p}}^{\,j_{p}}.$$

Also we use the abbreviations $u^{l}_{i} = \partial u^{l}/\partial x^{i}$ and

$$u^{l}_{J} = \frac{\partial^{|J|} u^{l}}{\partial (x^{1})^{j_{1}}\,\partial (x^{2})^{j_{2}} \cdots \partial (x^{p})^{j_{p}}}.$$

The above definition is completely enough to find the infinitesimal Lie symmetry generators of a given system of differential equations. Hence, the following theorem, the proof of which can be found in [57], gives the entire characterization of a symmetry group:

Theorem 2.3 Suppose that we are given a system of differential equations (2.1) defined over $M \subset X \times U$. If every infinitesimal generator $V$ of a local group of transformations $G$, acting on $M$, is an infinitesimal Lie symmetry generator of the given system (2.1), then $G$ is a symmetry group of the system.

A characteristic $Q = (Q^{1}, Q^{2}, \ldots, Q^{q})$ of an infinitesimal Lie symmetry generator $V$ is called a Lie symmetry of the system of differential equations (2.1). If we represent $Q$ in an operator form, we can extract the so-called symmetry vector field corresponding to the symmetry $Q$, i.e. an operator $L$ with

$$Q^{k} = L u^{k},$$

for all $k = 1, 2, \ldots, q$. Thus, the operator $L$ is a symmetry vector field for the system (2.1) [58]. In terms of $L$, a symmetry has the form:

$$u_{\tau} = (L u^{1}, L u^{2}, \ldots, L u^{q}), \tag{2.5}$$

where $\tau$ is the symmetry evolution parameter and $u$ is a solution of the system (2.1) ($u \in S_{E}$).

Now let us give examples to illustrate a major difference between linear (or linearizable) and nonlinear equations.


Example 2.4 We consider the Burgers equation as the first example; it is linearizable via the Cole-Hopf transformation, namely it can be mapped onto the heat equation, as we have seen. For the sake of introducing a new form of the Burgers equation, we consider the so-called potential Burgers equation, which is obtained by defining a new variable $v$ nonlocally, i.e. $v_x = u$. After substituting $v_x$ for $u$ in the Burgers equation (1.2) and integrating with respect to $x$ once, we get the potential form of the Burgers equation:

$$v_{t} - v_{xx} - v_{x}^{2} = 0. \tag{2.6}$$
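As a consistency check, the substitution can be run backwards: differentiating (2.6) with respect to $x$ and writing $u = v_x$ recovers the Burgers equation (1.2). A short SymPy sketch of this (notation ours):

```python
import sympy as sp

# Differentiating the potential Burgers equation v_t - v_xx - v_x^2 = 0
# with respect to x and setting u = v_x reproduces the Burgers equation
# u_t - u_xx - 2 u u_x = 0, as an identity in the arbitrary function v.
x, t = sp.symbols('x t')
v = sp.Function('v')(x, t)
u = sp.diff(v, x)                       # the nonlocal substitution u = v_x

potential = sp.diff(v, t) - sp.diff(v, x, 2) - sp.diff(v, x)**2
burgers_in_v = sp.diff(u, t) - sp.diff(u, x, 2) - 2 * u * sp.diff(u, x)

assert sp.expand(sp.diff(potential, x) - burgers_in_v) == 0
```

Since the identity holds for an unspecified function $v(x,t)$, no particular solution is needed; the check is purely algebraic.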

This form of the Burgers equation is essentially the same as the original version, but as far as the symmetry group calculation is concerned, the potential form is easier to handle because of its symmetric form. Here, it is clear that there exist two independent and one dependent variables. So we assume a vector field of the form:

$$V = \xi(x,t,v)\,\partial_{x} + \tau(x,t,v)\,\partial_{t} + \eta(x,t,v)\,\partial_{v}, \tag{2.7}$$

where $\partial_{r} = \partial/\partial r$ for $r = x, t, v$. We need to find the second prolongation of $V$, which we do not give here explicitly; then we solve the over-determined system of partial differential equations arising from:

$$\mathrm{pr}_{2}V\,(v_{t} - v_{xx} - v_{x}^{2}) = 0,$$

to find the coefficients of V. After simple computations we find them to be:

ξ^1 = c_1 + c_4 x + 2c_5 t + 4c_6 xt,

ξ^2 = c_2 + 2c_4 t + 4c_6 t^2,

η = φ(x,t) e^{-v} + c_3 - c_5 x - 2c_6 t - c_6 x^2,

where c_i, i = 1, 2, ..., 6 are arbitrary constants and φ(x,t) is an arbitrary solution of the heat equation φ_t - φ_{xx} = 0. From these coefficients we can write the basis B of the symmetry group:


where the V_i are given as:

V_1 = ∂_x,

V_2 = ∂_t,

V_3 = ∂_v,

V_4 = x∂_x + 2t∂_t, (2.8)

V_5 = 2t∂_x - x∂_v,

V_6 = 4xt∂_x + 4t^2∂_t - (x^2 + 2t)∂_v,

V_φ = φ(x,t) e^{-v} ∂_v.

The group generated by B is infinite dimensional, which is a property of linear or linearizable equations. But if we consider a finite dimensional reduction of B, it forms a Lie algebra with respect to the Lie bracket, which is given by:

[A, B] = AB - BA, (2.9)

for two vector fields A and B. Namely, take the finite dimensional reduction B̃ = {V_1, V_2, ..., V_6} of B. Then it is possible to prove that B̃ is a Lie algebra with respect to the Lie bracket (2.9) by making a commutator table. Moreover, we can write the one-parameter symmetry groups G_i corresponding to each V_i. The entries give the transformed point (x̃, t̃, ṽ) = exp(δV_i)(x, t, v). For instance:

G_1 : (x + δ, t, v); space translation,

G_4 : (e^δ x, e^{2δ} t, v); space and time scaling,

are the one-parameter symmetry groups corresponding to V_1 and V_4 respectively. We can also write the corresponding Lie symmetries (2.5) by applying the symmetry vector fields to a solution v(x,t) of the potential Burgers equation:

v_{τ_1} = ∂_x v = v_x,

v_{τ_2} = ∂_t v = v_t,

v_{τ_3} = ∂_v v = 1,

v_{τ_4} = -(x∂_x + 2t∂_t)v = -x v_x - 2t v_t, (2.10)

v_{τ_5} = -(2t∂_x + x∂_v)v = -2t v_x - x,

v_{τ_6} = -(4xt∂_x + 4t^2∂_t + (x^2 + 2t)∂_v)v = -4xt v_x - 4t^2 v_t - x^2 - 2t,

v_{τ_φ} = (φ(x,t) e^{-v} ∂_v)v = φ(x,t) e^{-v},


where τ_i, i = 1, 2, ..., 6, φ are symmetry evolution parameters. Another important property of the Lie symmetries is that each v_{τ_i} and v_t (the potential Burgers equation itself) are in involution, i.e.

v_{τ_i t} - v_{t τ_i} = 0, i = 1, 2, ..., 6, φ,

where v_t = v_{xx} + v_x^2. We have to note that this property turns out to be a universal property of all kinds of symmetries, namely they commute with the original equation. A more detailed version of these calculations can be found in [57].
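The infinite family V_φ reflects the linearizability of the potential Burgers equation: e^v solves the heat equation whenever v solves (2.6). A minimal sympy sketch confirms this for one explicit heat solution (the particular choice φ = 1 + e^{ax + a²t} is ours):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
phi = 1 + sp.exp(a*x + a**2*t)         # a solution of the heat equation (our choice)
assert sp.simplify(phi.diff(t) - phi.diff(x, 2)) == 0

v = sp.log(phi)                        # then v = log(phi) solves (2.6)
assert sp.simplify(v.diff(t) - v.diff(x, 2) - v.diff(x)**2) == 0
```

Any other smooth positive heat solution works equally well in place of φ.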

After a linearizable equation, we give the KdV equation as an example of nonlinear equations. We give its Lie symmetry vector fields, Lie symmetries and one-parameter symmetry groups without any calculation. Related computations can be seen in [19] and [57].

Example 2.5 In many respects, the KdV equation is a key equation. It is the first equation shown to have soliton solutions and it also appears in many other disciplines. Hence, it is natural to consider this equation here. We take the KdV equation in the following form:

E[u] = u_t - u_{xxx} - 6uu_x = 0, (2.11)

which can be transformed to other forms by simple scalings. We, again, have two independent and one dependent variables. Thus, the vector field V has the form of the potential Burgers equation vector field (2.7). The main difference from the potential Burgers equation is the order of prolongation:

pr^(3)V(u_t - u_{xxx} - 6uu_x) = 0, (2.12)

gives us the over-determined system of equations for ξ^1, ξ^2, η. Computations for the KdV equation are more complicated than those for the potential Burgers equation, since we need to use the third prolongation of V. But even here, it does not take too long to work out the partial differential equations arising from (2.12). In the end, the coefficients of V turn out to be:

ξ^1 = c_1 - 6c_3 t + c_4 x,

ξ^2 = c_2 + 3c_4 t,

η = c_3 - 2c_4 u,


which imply that the vector fields satisfying the invariance condition (2.12) have the basis B = {V_1, V_2, V_3, V_4}, where:

V_1 = ∂_x,

V_2 = ∂_t,

V_3 = -6t∂_x + ∂_u, (2.13)

V_4 = x∂_x + 3t∂_t - 2u∂_u.

It is easy to check that B is a finite dimensional Lie algebra with respect to the Lie bracket (2.9). The one-parameter symmetry groups G_i with elements exp(δV_i) generate new solutions from known solutions u(x,t). The entries of the transformed point (x̃, t̃, ũ) are given by:

G_1 : (x + δ, t, u); space translation,

G_2 : (x, t + δ, u); time translation,

G_3 : (x - 6δt, t, u + δ); Galilean boost,

G_4 : (e^δ x, e^{3δ} t, e^{-2δ} u); scaling.

From the explicit forms of the symmetry generators (2.13) we can easily find the Lie symmetries of the KdV equation by using the formula (2.5):

u_{τ_1} = ∂_x u = u_x,

u_{τ_2} = ∂_t u = u_t,

u_{τ_3} = (6t∂_x + ∂_u)u = 6t u_x + 1, (2.14)

u_{τ_4} = -(x∂_x + 3t∂_t + 2u∂_u)u = -x u_x - 3t u_t - 2u,

with the new evolution parameters τ_i, i = 1, ..., 4. The Lie symmetries (2.14) of the KdV equation and the KdV equation u_t = u_{xxx} + 6uu_x are in involution, like those of the potential Burgers equation:

u_{τ_i t} - u_{t τ_i} = 0, i = 1, ..., 4.

The number of the Lie symmetries of such a nice equation may seem small, but further symmetry properties, reflecting the existence of infinitely many conservation laws and soliton solutions, will require the development of the theory of generalized symmetries.


From the theory and examples of Lie symmetries, it should be clear that these kinds of symmetries have a group-theoretical meaning: they form a group of local transformations acting on a manifold M ⊂ X × U and map solutions onto other solutions; and a geometrical meaning: they shift the space, time or solution, shrink or expand them, etc. Another important observation is that the symmetry groups of linear or linearizable equations are infinite dimensional, whilst genuine nonlinear equations have finite dimensional symmetry groups.

2.2 Generalized symmetries

In order to work out the generalized symmetries of a given system of differential equations in a comprehensive way, let us generalize some of our notations first. From now on we denote a system of N partial differential equations by:

E_a[u] = 0, a = 1, 2, ..., N, (2.15)

instead of (2.1). The above notation (2.15) means that for each 1 ≤ a ≤ N, E_a[u] ∈ A, where A is the space of differential functions, is a differential function depending upon the independent variables x ∈ X, the dependent variables u ∈ U and their derivatives u^i_J, up to some finite but unspecified order n. Hence, if A[u] and B[u] are two different differential functions, they may depend upon the derivatives of u up to different orders, though we use the same notation for them. Equipped with this new and useful notation, we can define the generalized vector fields V by the expression:

V = Σ_{i=1}^{p} ξ^i[u] ∂/∂x^i + Σ_{i=1}^{q} η^i[u] ∂/∂u^i, (2.16)

in which ξ^i and η^i are smooth differential functions. The prolongation of generalized vector fields can be achieved with the previous prolongation formula (2.4), but we now introduce the notion of infinite prolongation by relaxing the |J| ≤ n condition in (2.4), and denote it by prV. This, of course, leads to an infinite sum in the formula of prV, i.e.

prV = V + Σ_{i=1}^{q} Σ_{J} ( D_J Q^i + Σ_{j=1}^{p} ξ^j u^i_{J,j} ) ∂/∂u^i_J, (2.17)


where all other settings, including the characteristic Q, remain unaltered. But this shall not cause any trouble of convergence, since only a finite number of elements of this summation survive when acting on a differential function A[u], and this finite number is completely determined by the order of A[u].

After giving the concept of a generalized vector field and its infinite prolongation, we can now define what an infinitesimal symmetry generator is [57]:

Definition 2.6 (Infinitesimal symmetry generator) A generalized vector field V (2.16) is an infinitesimal symmetry generator of a system of differential equations (2.15) if and only if:

prV(E_a[u]) = 0, a = 1, 2, ..., N, (2.18)

for every smooth solution u(x) of (2.15).

It is clear th a t the above definition is a direct analogue of the infinitesimal Lie symmetry generator definition.

To every generalized vector field (2.16), we can associate a vector field V_Q of the form:

V_Q = Σ_{i=1}^{q} Q^i[u] ∂/∂u^i, (2.19)

where the differential function Q = (Q^1, Q^2, ..., Q^q) is the characteristic of V, with:

Q^i[u] = η^i[u] - Σ_{j=1}^{p} ξ^j[u] u^i_j, i = 1, 2, ..., q,

where u^i_j = ∂u^i/∂x^j. We call this associated vector field V_Q an evolutionary vector field. The important remark is that these two vector fields V and V_Q generate the same generalized symmetry Q [57]:

Proposition 2.7 A generalized vector field V is an infinitesimal symmetry generator of the system (2.15) if and only if so is the evolutionary vector field V_Q.

Proof The infinite prolongation of V_Q is:

prV_Q = Σ_{i=1}^{q} Σ_{J} (D_J Q^i) ∂/∂u^i_J. (2.20)


Then, using (2.17), we calculate prV - prV_Q, which turns out to be:

prV - prV_Q = Σ_{j=1}^{p} ξ^j ( ∂/∂x^j + Σ_{i=1}^{q} Σ_{J} u^i_{J,j} ∂/∂u^i_J ) = Σ_{j=1}^{p} ξ^j D_{x^j}.

This suffices to prove that:

prV(E_a[u]) = prV_Q(E_a[u]), (2.21)

since D_{x^j} E_a[u] = 0 for all a = 1, 2, ..., N on the solution manifold of the system (2.15). □

This theorem simplifies the computation of symmetries. Namely, we assume an evolutionary vector field V_Q for a given system of partial differential equations (2.15) and apply the criterion:

prV_Q(E_a[u]) = 0, a = 1, 2, ..., N, (2.22)

to find the coefficients Q^i in V_Q. These, in turn, form a symmetry u_τ = Q[u] of the given system (2.15). For instance, if we assume Q^i[u] is a differential function of order at most n, for each i = 1, 2, ..., q, then we can find, and hence classify, all the (Lie and generalized) symmetries of order less than or equal to n of the system (2.15).

Before going into some examples, let us give an equivalent formulation of the criterion (2.22). This new formulation is related to the linearization of nonlinear differential equations.

Definition 2.8 (Frechet derivative) Let P[u] be an r-tuple of differential functions and Q[u] be a q-tuple of differential functions (i.e. P[u] ∈ A^r and Q[u] ∈ A^q). Then the Frechet derivative of P is the linear differential operator D_P[u] : A^q → A^r such that

D_P[u](Q) = (d/dε) P[u + εQ] |_{ε=0}. (2.23)


In other words, to calculate the Frechet derivative of an r-tuple of differential functions P[u], we simply substitute each u^i_J by u^i_J + ε D_J Q^i[u] in P[u], where J represents all possible ordered multi-indices as before, and differentiate with respect to ε at ε = 0.
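Definition (2.23) can be checked directly with sympy for a concrete P: taking P to be the KdV differential function E[u] of (2.11), the ε-derivative at ε = 0 reproduces the linear operator D_t - D_x^3 - 6u D_x - 6u_x applied to Q. A sketch (the symbol names are ours):

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon')
u = sp.Function('u')(x, t)
Q = sp.Function('Q')(x, t)

def E(w):
    """The KdV differential function E[w] = w_t - w_xxx - 6 w w_x of (2.11)."""
    return w.diff(t) - w.diff(x, 3) - 6*w*w.diff(x)

# definition (2.23): D_E(Q) = d/d(eps) of E[u + eps*Q] at eps = 0
DEQ = E(u + eps*Q).diff(eps).subs(eps, 0).doit()

# this agrees with the entry formula (2.24): (D_t - D_x^3 - 6u D_x - 6u_x) Q
expected = Q.diff(t) - Q.diff(x, 3) - 6*u*Q.diff(x) - 6*u.diff(x)*Q
assert sp.simplify(sp.expand(DEQ - expected)) == 0
```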

From the definition (2.23) it is clear that D_P[u] is an r × q matrix differential operator with entries [57]:

(D_P)_{αβ} = Σ_{J} (∂P^α/∂u^β_J) D_J, α = 1, 2, ..., r, β = 1, 2, ..., q. (2.24)

Clearly the operator D_P[u] of P[u] depends upon u and its derivatives, but for the sake of brevity we henceforth write D_P for the Frechet derivative, unless the dependence is vital. Now we can establish the intimate relation between the Frechet derivative and the evolutionary vector fields by the following proposition [57]:

Proposition 2.9 If P ∈ A^r and Q ∈ A^q, then

D_P(Q) = prV_Q(P). (2.25)

Proof Follows from the comparison of the prolongation formula (2.20) and the entries of D_P (2.24). □

We employ the relation between the Frechet derivative and the evolutionary vector fields to reformulate the generalized invariance criterion (2.22).

Lemma 2.10 Let Q[u] be a q-tuple of differential functions. Then u_τ = Q[u] is a symmetry of the system of differential equations (2.15) if and only if

D_E(Q) = 0, (2.26)

whenever E[u] = 0, where E[u] = (E_1[u], E_2[u], ..., E_N[u]) is an N-tuple of differential functions representing the system of differential equations (2.15).

It is worth noting that since D_E is a linear differential operator, the resulting invariance criterion (2.26) is a system of linear partial differential equations for the symmetry Q[u]. This has a twofold meaning: 1. Symmetries of a system of differential equations are the solutions of the linearization (in the Frechet derivative sense) of the same system. 2. Linear combinations of symmetries (of various orders) are again symmetries of the system, which are referred to as the non-homogeneous symmetries.

Now we give some examples of differential equations possessing generalized symmetries. We check the existence of such symmetries via the criterion (2.26).

Example 2.11 Our first example is again the potential Burgers equation:

E[u] = u_t - u_{xx} - u_x^2 = 0. (2.27)

Our aim is to find the symmetries of this equation of the form:

u_τ = Q[u],

where Q[u] is the characteristic of the evolutionary vector field V_Q = Q[u] ∂_u. For simplicity of the illustration we seek third order characteristics depending upon x, t, u and the derivatives of u up to order three. But from the original equation we can substitute u_{xx} + u_x^2 for u_t, which reduces Q[u] to the following form:

Q[u] = Q(x, t, u, u_x, u_{xx}, u_{xxx}).

With this form of Q, we can solve the differential equations arising from:

D_E(Q) = 0,

where the differential operator D_E is given by:

D_E = D_t - D_x^2 - 2u_x D_x. (2.28)

This over-determined system of equations gives rise to the following eleven characteristics, up to multiplication by a constant [57]:

Q_0 = 1,

Q_1 = u_x,

Q_2 = 2t u_x + x,

Q_3 = u_{xx} + u_x^2,

Q_4 = 2t(u_{xx} + u_x^2) + x u_x,

Q_5 = 4t^2(u_{xx} + u_x^2) + 4xt u_x + 2t + x^2,

Q_6 = u_{xxx} + 3u_x u_{xx} + u_x^3,

Q_7 = 2t(u_{xxx} + 3u_x u_{xx} + u_x^3) + x(u_{xx} + u_x^2) + u_x,

Q_8 = 4t^2(u_{xxx} + 3u_x u_{xx} + u_x^3) + 4xt(u_{xx} + u_x^2) + (2t + x^2) u_x,

Q_9 = 8t^3(u_{xxx} + 3u_x u_{xx} + u_x^3) + 12xt^2(u_{xx} + u_x^2) + (12t^2 + 6x^2 t) u_x + 6xt + x^3,

Q_φ = φ(x,t) e^{-u},

seven of which generate the Lie symmetries (2.10) as we found in the previous section. The last characteristic is, in fact, an infinite family of characteristics, since φ(x,t) is any smooth solution of the heat equation. The remaining four characteristics Q_6, Q_7, Q_8, Q_9 generate the third order generalized symmetries of the potential Burgers equation. The generalized symmetry u_{τ_6} = Q_6 is of special importance since it is x and t independent. We can state that any symmetry of the potential Burgers equation (2.27) that is of order less than or equal to three can be obtained by some linear combination of the characteristics Q_i, i = 0, 1, 2, ..., 9, φ.
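The x- and t-independent characteristic Q_6 can be verified with a few lines of sympy. In jet coordinates (u_0 = u, u_1 = u_x, ...), the symmetry condition for x,t-independent characteristics reduces to D_{Q_6}(K) - D_K(Q_6) = 0, where K = u_{xx} + u_x^2 is the right-hand side of the potential Burgers equation. A sketch (the helper names are ours):

```python
import sympy as sp

N = 10
u = sp.symbols('u0:10')                # jet coordinates: u0 = u, u1 = u_x, ...

def Dx(f):
    """Total x-derivative on the jet space (u_k -> u_{k+1})."""
    return sum(sp.diff(f, u[k]) * u[k + 1] for k in range(N - 1))

def frechet(F, G):
    """Frechet derivative D_F applied to G, as in (2.24): sum_k dF/du_k * Dx^k(G)."""
    top = max([k for k in range(N) if sp.diff(F, u[k]) != 0], default=0)
    total, DkG = sp.Integer(0), G
    for k in range(top + 1):
        total += sp.diff(F, u[k]) * DkG
        DkG = Dx(DkG)
    return sp.expand(total)

K  = u[2] + u[1]**2                    # potential Burgers: u_t = K
Q6 = u[3] + 3*u[1]*u[2] + u[1]**3      # the third order characteristic above

# symmetry criterion for x,t-independent characteristics: D_Q6(K) - D_K(Q6) = 0
assert sp.expand(frechet(Q6, K) - frechet(K, Q6)) == 0
```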

This easy example teaches us a very useful fact which simplifies the computations. A system of differential equations is said to be of evolution type if:

u_t = K[u], (2.29)

where t, x = (x^1, x^2, ..., x^{p-1}) are the p independent variables, u = (u^1, u^2, ..., u^q) are the q dependent variables, and K[u] = (K_1[u], K_2[u], ..., K_q[u]) depends upon t, x, u and the x-derivatives of u only. This means we can isolate the evolution of u with respect to one of the independent variables, say t, put it alone on the left-hand side, and hence simplify the equations compared to the closed form E[u] = 0.


In this case, a characteristic Q[u], which leads to the symmetry u_τ = Q[u] of the system (2.29), may be assumed to depend only upon t, x, u and the x-derivatives of u, since we can replace the t-derivatives of u by the right-hand side K[u] of (2.29). More explicitly, we can make substitutions like:

u_t = K[u],

u_{t x^i} = D_{x^i} K[u], i = 1, 2, ..., p-1,

u_{t x^i x^j} = D_{x^i} D_{x^j} K[u], i, j = 1, 2, ..., p-1, (2.30)

in every place of their occurrences. These substitutions (2.30) are nothing but differential consequences of the identity determined by the original evolutionary system (2.29).
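The substitutions (2.30) are identities on solutions, which can be illustrated with sympy on an explicit potential Burgers solution (the solution u = log(1 + e^{ax + a²t}) is our own choice):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
u = sp.log(1 + sp.exp(a*x + a**2*t))   # an explicit potential Burgers solution
K = u.diff(x, 2) + u.diff(x)**2        # the right-hand side K[u], evaluated on u

assert sp.simplify(u.diff(t) - K) == 0                 # u_t = K[u]
assert sp.simplify(u.diff(t, x) - K.diff(x)) == 0      # u_tx = D_x K[u]
assert sp.simplify(u.diff(t, x, x) - K.diff(x, 2)) == 0
```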

Another example of such equations is the KdV equation, which is our next example.

Example 2.12 The KdV equation has the form:

E[u] = u_t - u_{xxx} - 6uu_x = 0, (2.31)

as we mentioned in the previous section. It is clear from (2.31) that the KdV equation is an evolution equation, i.e. it is:

u_t = u_{xxx} + 6uu_x. (2.32)

For simplicity, let us look for the (up to) fifth order, x and t independent symmetries of the KdV equation (2.32). Hence, assume a characteristic of the form:

Q[u] = Q(u, u_x, ..., u_{xxxxx}),

to substitute in the linear partial differential equation:

D_E(Q) = 0, (2.33)

where D_E is the Frechet derivative of E[u] (2.31) and is given by:

D_E = D_t - D_x^3 - 6u D_x - 6u_x.


In order to solve the differential equations arising from the invariance criterion (2.33) for the KdV equation, we equate the coefficients of ∂^k u/∂x^k, k ≥ 6, to zero, since Q depends upon the x-derivatives of u up to order five. These coefficients are not enough to find Q completely, but they determine the dependence of Q upon, at least, ∂^5 u/∂x^5, which allows us to equate the coefficient of the fifth x-derivative of u to zero, and so on. In this sense we consider ∂^k u/∂x^k as independent variables for all k ≥ 0, in a way that the infinite jet space is the space in which the independent variables lie. At the end of this recursive solution method, the possible Qs, which satisfy (2.33), turn out to be:

Q_0 = u_x,

Q_1 = u_{xxx} + 6uu_x, (2.34)

Q_2 = u_{xxxxx} + 10u u_{xxx} + 20u_x u_{xx} + 30u^2 u_x,

which are all homogeneous differential functions. Here, we can note that the characteristics Q_0 and Q_1 represent two of the Lie symmetries of the KdV equation and that there are no generalized symmetries of order four for this equation. This fact is a hint of the nonexistence of generalized symmetries of the KdV equation at certain orders, yet it has infinitely many of them. One lesson to be drawn is that if we think of the generalized symmetries as an infinite hierarchy (for integrable systems), it might be a hierarchy with lacunae, and it really is for most of the integrable systems. In this respect, u_{τ_2} = Q_2 is the first generalized symmetry of the KdV equation and the second one is of order seven.
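That u_{τ_2} = Q_2 is indeed a symmetry of the KdV flow can be confirmed in sympy by checking D_{Q_2}(K) - D_K(Q_2) = 0 in jet coordinates, with K = u_{xxx} + 6uu_x. A sketch (the helper names are ours):

```python
import sympy as sp

N = 10
u = sp.symbols('u0:10')                # jet coordinates: u0 = u, u1 = u_x, ...

def Dx(f):
    """Total x-derivative on the jet space (u_k -> u_{k+1})."""
    return sum(sp.diff(f, u[k]) * u[k + 1] for k in range(N - 1))

def frechet(F, G):
    """Frechet derivative D_F applied to G, as in (2.24): sum_k dF/du_k * Dx^k(G)."""
    top = max([k for k in range(N) if sp.diff(F, u[k]) != 0], default=0)
    total, DkG = sp.Integer(0), G
    for k in range(top + 1):
        total += sp.diff(F, u[k]) * DkG
        DkG = Dx(DkG)
    return sp.expand(total)

K  = u[3] + 6*u[0]*u[1]                                  # KdV: u_t = K
Q2 = u[5] + 10*u[0]*u[3] + 20*u[1]*u[2] + 30*u[0]**2*u[1]  # fifth order flow (2.34)

assert sp.expand(frechet(Q2, K) - frechet(K, Q2)) == 0
```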

Our last example is the AKNS-ZS (Ablowitz-Kaup-Newell-Segur, Zakharov-Shabat) system, just to illustrate the situation in the case of more than one equation.

Example 2.13 The AKNS-ZS system is an integrable system of evolution type of the form:

E_1[u,v] = u_t - u_{xx} - 2u^2 v = 0,

E_2[u,v] = v_t + v_{xx} + 2uv^2 = 0, (2.35)

which is equivalent to the famous nonlinear Schrödinger equation:

iu_t = -u_{xx} - 2u^2 u^*,

under the coordinate transformations v = u^* and t = it, where ^* is the complex conjugation operation.

We search for the third order, x and t-independent generalized symmetries of the AKNS-ZS system (2.35), if any. To this end, we take a characteristic of the form:

Q[U] = (Q^1(U, U_x, U_{xx}, U_{xxx}), Q^2(U, U_x, U_{xx}, U_{xxx})),

where U = (u, v), and suppose that:

∂Q^k/∂u_{xxx} and ∂Q^k/∂v_{xxx} do not both vanish, k = 1, 2,

so as not to lose the dependence upon the third derivatives. For the invariance we need to calculate the Frechet derivative of the system (2.35). As we mentioned previously, the Frechet derivative is a matrix differential operator with entries (2.24). In the case of the AKNS-ZS system, it turns out to be:

D_E = | D_t - D_x^2 - 4uv        -2u^2              |
      | 2v^2                     D_t + D_x^2 + 4uv  |.

The only homogeneous solution to the invariance criterion D_E(Q) = 0 is the characteristic Q = (Q^1, Q^2) with:

Q^1 = u_{xxx} + 6uv u_x,

Q^2 = v_{xxx} + 6uv v_x.

Hence the third order, x and t-independent generalized symmetry of the AKNS-ZS system (2.35) is:

U_τ = Q,

which is the first member of the infinite hierarchy of generalized symmetries.
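A two-component version of the jet-coordinate check confirms that the pair (Q^1, Q^2) commutes with the AKNS-ZS flow u_t = u_{xx} + 2u^2 v, v_t = -v_{xx} - 2uv^2 (a sketch; the helper names are ours):

```python
import sympy as sp

N = 10
U = sp.symbols('u0:10')                # u, u_x, u_xx, ...
V = sp.symbols('v0:10')                # v, v_x, v_xx, ...

def Dx(f):
    """Total x-derivative on the two-component jet space."""
    return sum(sp.diff(f, U[k])*U[k+1] + sp.diff(f, V[k])*V[k+1]
               for k in range(N - 1))

def frechet(F, G1, G2):
    """D_F applied to (G1, G2): sum_k dF/du_k Dx^k(G1) + dF/dv_k Dx^k(G2)."""
    total, D1, D2 = sp.Integer(0), G1, G2
    for k in range(4):                 # the orders involved here do not exceed 3
        total += sp.diff(F, U[k])*D1 + sp.diff(F, V[k])*D2
        D1, D2 = Dx(D1), Dx(D2)
    return sp.expand(total)

K1 = U[2] + 2*U[0]**2*V[0]             # u_t from (2.35)
K2 = -V[2] - 2*U[0]*V[0]**2            # v_t from (2.35)
Q1 = U[3] + 6*U[0]*V[0]*U[1]           # the third order characteristic above
Q2 = V[3] + 6*U[0]*V[0]*V[1]

# commutation of the two flows, component by component
assert sp.expand(frechet(Q1, K1, K2) - frechet(K1, Q1, Q2)) == 0
assert sp.expand(frechet(Q2, K1, K2) - frechet(K2, Q1, Q2)) == 0
```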

After these illustrative examples, let us explain why a system of evolution equations and its symmetries commute. Consider an evolutionary system (2.29):

u_t = K[u],

and a symmetry of this equation:

u_τ = Q[u].


Then, being a symmetry, Q satisfies the linear differential equations:

D_E(Q) = 0,

on the solution manifold of u_t = K, where E[u] = u_t - K[u]. This immediately entails:

D_E = D_t - D_K,

and then the invariance condition becomes:

D_t(Q) - D_K(Q) = 0. (2.36)

If we compute D_K(Q) explicitly from the definition, we can establish an equivalent form of it, i.e.

D_K(Q) = Σ_{i=1}^{q} Σ_{J} (∂K/∂u^i_J) D_J Q^i = Σ_{i=1}^{q} Σ_{J} (∂K/∂u^i_J) D_J u^i_τ = D_τ(K).

Thus, the system of differential equations given by (2.36) is equivalent to the commutation relation:

D_t(Q) - D_τ(K) = 0, (2.37)

which is also equivalent to saying:

u_{τt} - u_{tτ} = 0,

by substituting u_t = K and u_τ = Q. On the other hand, by reversing the process, we can prove that the invariance condition (2.36) is equivalent to:

D_Q(K) - D_K(Q) = 0. (2.38)

So, for a given evolutionary system of differential equations u_t = K[u], if the characteristic Q[u] satisfies either of the criteria (2.26), (2.37), (2.38) on the solution


manifold, then u_τ = Q[u] is a symmetry of the system. From the symmetric form of these criteria, we can also view the initial system u_t = K as a symmetry of u_τ = Q.

Bearing in mind that there is a commutation relation between a system of differential equations and its symmetries, we can naturally ask for a similar relation between two functionally independent symmetries of a given system of differential equations E[u] = 0, where E = (E_1, E_2, ..., E_N). This kind of investigation obviously leads to an algebraic characterization of symmetries. We have the following theorem:

Theorem 2.14 Suppose that

u_{τ_n} = Q_n and u_{τ_m} = Q_m

are two functionally independent symmetries of a system of partial differential equations E_a[u] = 0, a = 1, 2, ..., N. Then the differential function determined by the commutator of these two symmetries:

Q_{nm} = D_{τ_n} Q_m - D_{τ_m} Q_n, (2.39)

generates another symmetry

u_{τ_{nm}} = Q_{nm}

of the same system.

Proof Since u_{τ_n} = Q_n and u_{τ_m} = Q_m are symmetries of the system E[u] = 0, they both satisfy the invariance condition (2.26):

D_E(Q_n) = 0 and D_E(Q_m) = 0. (2.40)

In order to show that Q_{nm} generates a symmetry of E[u] = 0, we have to prove:

D_E(D_{τ_n} Q_m - D_{τ_m} Q_n) = 0. (2.41)

To this end, let us first check the commutators:

[D_E, D_{τ_n}] = D_E · D_{τ_n} - D_{τ_n} · D_E, [D_E, D_{τ_m}] = D_E · D_{τ_m} - D_{τ_m} · D_E,

which play a key role in the proof. Straightforwardly calculating the commutators and rearranging the terms and the multi-indices, we find them to be:

[D_E, D_{τ_n}] = Σ_{γ,β} Σ_{|J|,|K|} (∂^2 E / ∂u^γ_K ∂u^β_J) (D_K Q_n^γ) D_J, (2.42)

[D_E, D_{τ_m}] = Σ_{γ,β} Σ_{|J|,|K|} (∂^2 E / ∂u^γ_K ∂u^β_J) (D_K Q_m^γ) D_J, (2.43)

where J and K are two different ordered multi-indices running over all derivatives. Clearly these two pairs of operators do not commute in general. Nonetheless, we have the identity:

[D_E, D_{τ_n}](Q_m) - [D_E, D_{τ_m}](Q_n) = 0,

since the right-hand sides of the expressions (2.42) and (2.43) are symmetric with respect to the multi-indices J, K and the superscripts γ, β. This last identity immediately entails the required identity (2.41). □

This theorem gives a nice algebraic characterization of the generalized symmetries of a system of differential equations. The following proposition is a direct consequence of the theorem above [57]:

Proposition 2.15 The set of generalized symmetries of a system of differential equations forms a Lie algebra.

For most of the integrable systems, the characteristic Q_{nm} (2.39) is zero for all n, m, which implies that all generalized symmetries are in involution. Such hierarchies of generalized symmetries are called commuting hierarchies or commuting flows. For instance, the generalized symmetry hierarchies of the KdV and the Burgers equations are commuting hierarchies. Although there are very many examples of such systems, it is not true that all integrable systems possess an infinite commuting hierarchy of generalized symmetries.

The main difference between Lie and generalized symmetries that lets us distinguish between them is the geometrical interpretation of the Lie symmetries, which the latter ones fail to have. This is because the coefficients ξ and η of the vector field V might possess derivatives of u in the case of generalized symmetries. Although we lose the geometrical description of the transformation when we extend Lie symmetries to generalized symmetries, we still have the algebraic characterization.


2.3 Recursion operators

In the previous section, we have seen how to find the generalized symmetries of a given system of differential equations. This method is a systematic one but fails to characterize an infinite hierarchy of symmetries. It is obligatory to find each symmetry separately, which requires an assumption on the dependences of the symmetry upon u and its derivatives. In order to explore an infinite hierarchy of (not necessarily all) symmetries (if one exists) for a given system, we introduce the notion of a recursion operator. A recursion operator, if any, generates infinitely many generalized symmetries at once, and in this sense its existence can be considered as an alternative definition of integrability. Unfortunately, the process of finding a recursion operator of a given system of differential equations requires a considerable amount of guesswork, while it is straightforward to check whether or not a given operator is a recursion operator.

Definition 2.16 (Recursion operator) Let E[u] = 0 be a system of q differential equations. A recursion operator for E[u] = 0 is a linear matrix operator R[u] : A^q → A^q in the space of q-tuples of differential functions, with the property that whenever u_{τ_n} = Q_n is a symmetry of the system, so is u_{τ_{n+1}} = Q_{n+1} = R[u](Q_n).

A recursion operator for the system E[u] = 0 clearly depends upon u and its derivatives and hence is denoted by R[u]. But we suppress these dependencies in the notation, without any remark, whenever we find it unnecessary, and write R instead. From the definition it is clear that recursion operators map symmetries onto symmetries. Then, clearly, there exists an infinite hierarchy of generalized symmetries, which is guaranteed by the recursion operator. For example, if we have a symmetry u_{τ_0} = Q_0, then by applying R successively to Q_0 we can generate new symmetries, i.e. for every n ≥ 0, u_{τ_n} = R^n Q_0 is again a symmetry. We have the following theorem to calculate the recursion operators [57]:

Theorem 2.17 Suppose E[u] = 0 is a system of q differential equations. If R : A^q → A^q is a linear differential operator such that:

D_E · R = R̃ · D_E, (2.44)

on the solution manifold, where R̃ : A^q → A^q is a linear differential operator, then R is a recursion operator for the system.

Proof Start with a symmetry u_τ = Q of the system E[u] = 0. Then we showed that D_E(Q) = 0 whenever E[u] = 0. To prove the theorem, we have to check that D_E(RQ) = 0 on the solution manifold. From the identity (2.44) and D_E(Q) = 0, we have:

D_E(RQ) = R̃(D_E Q) = 0,

on the solution manifold, which completes the proof. □

This criterion for recursion operators can be reduced remarkably in the case of evolutionary systems. If the system of equations is given by E[u] = u_t - K[u] = 0, where K[u] ∈ A^q, rewriting the criterion (2.44) yields:

D_t · R - D_K · R = R̃ · D_t - R̃ · D_K,

which, in turn, implies R̃ = R since we have:

D_t · R = R · D_t + R_t, where R_t = Σ_{J} (∂R/∂u_J) D_J K.

The remaining terms can be arranged in a way that a recursion operator R for an evolutionary system satisfies the commutator condition:

R_t = [D_K, R]. (2.45)

This is equivalent to the vanishing of the Lie derivative of R (viewed as a (1,1) tensor) with respect to the generalized vector field V_K corresponding to K:

L_{V_K}(R) = 0.
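For the KdV equation u_t = u_{xxx} + 6uu_x, a well-known recursion operator from the literature is R = D_x^2 + 4u + 2u_x D_x^{-1} (it is not derived in this chapter). A sympy sketch, with the nonlocal pieces D_x^{-1} Q_n evaluated by hand, regenerates the hierarchy (2.34):

```python
import sympy as sp

x = sp.Symbol('x')
u = sp.Function('u')(x)

def R(q, q_int):
    """KdV recursion operator R = Dx^2 + 4u + 2u_x Dx^{-1},
    with the nonlocal piece Dx^{-1} q supplied by hand as q_int."""
    return sp.expand(q.diff(x, 2) + 4*u*q + 2*u.diff(x)*q_int)

Q0 = u.diff(x)                                  # seed: Q0 = u_x
Q1 = R(Q0, u)                                   # Dx^{-1} u_x = u
assert sp.expand(Q1 - (u.diff(x, 3) + 6*u*u.diff(x))) == 0

Q2 = R(Q1, u.diff(x, 2) + 3*u**2)               # Dx^{-1} Q1 = u_xx + 3u^2
expected = (u.diff(x, 5) + 10*u*u.diff(x, 3)
            + 20*u.diff(x)*u.diff(x, 2) + 30*u**2*u.diff(x))
assert sp.expand(Q2 - expected) == 0
```

In a general-purpose implementation the D_x^{-1} step is the delicate part, since its argument must be a total x-derivative for the result to stay local.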

Recursion operators, in general, are pseudo-differential linear operators, i.e. they might contain integration operators with respect to some independent variables. For instance, the operator D_x^{-1} is an integration operator with respect to x, defined through:

(D_x^{-1} f)(x) = ∫^x f(x') dx'.
