Title: Quasilinearization method in causal differential equations with initial time difference. Author: Coşkun YAKAR. Volume 63, Issue 1, pp. 55–71. DOI: 10.1501/Commua1_0000000705. Published: 2014.


ISSN 1303–5991

QUASILINEARIZATION METHOD IN CAUSAL DIFFERENTIAL EQUATIONS WITH INITIAL TIME DIFFERENCE

COŞKUN YAKAR

Abstract. In this paper, the quasilinearization technique for causal differential equations is applied to obtain upper and lower sequences, with initial time difference, in terms of the solutions of linear causal differential equations that start at different initial times. It is also shown that these sequences converge to the unique solution of the nonlinear causal differential equation uniformly and superlinearly.

1. Introduction

The most important application of the quasilinearization method in causal differential equations [5] has been to obtain a sequence of lower and upper bounds, which are the solutions of linear causal differential equations, that converge superlinearly. As a result, the method has been popular in applied areas. However, the convexity assumption demanded by the method of quasilinearization has been a stumbling block for further development of the theory. Recently, this method has been generalized, refined, and extended in several directions so as to be applicable to a much larger class of nonlinear problems without demanding the convexity property. Moreover, other possibilities that have been explored make the method of generalized quasilinearization universally useful in applications [7]. In the investigation of initial value problems of causal differential equations [5], we have been partial to the initial time all along, in the sense that we only perturb the space variable and keep the initial time unchanged. However, it appears important to vary the initial time as well, because it is impossible not to make errors in the starting time [4, 6, 7, 8, 9, 10, 11, 12, 13]. Recently, investigations of initial value problems of causal differential equations where the initial time changes with each solution

Received by the editors Feb. 06, 2014, Accepted: May 21, 2014.

2010 Mathematics Subject Classification (MOS): 34A12, 34A45, 34C11.

Key words and phrases. Causal differential equations, initial time difference, quasilinearization, superlinear convergence, inequalities.

©2014 Ankara University


in addition to the change of the spatial variable have been initiated [1, 13], and some results on comparison theorems, global existence, the method of variation of parameters, the method of lower and upper solutions, and the method of monotone iterative techniques [3, 4, 7, 8, 11] have been obtained.

In this paper, the generalized quasilinearization technique in causal differential equations is used to obtain upper and lower sequences in terms of the solutions of linear causal differential equations that start at different initial times and bound the solutions of a given nonlinear causal differential equation [5]. It is also shown that these sequences converge to the unique solution of the nonlinear equation uniformly and superlinearly.
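The iteration behind this scheme can be sketched numerically. The following is a minimal illustration, not taken from the paper: it applies quasilinearization to the ordinary scalar ODE u′ = f(u) = −u² (a non-causal special case with exact solution 1/(1 + t)), solving each linearized problem v′ = f(uₙ) + f′(uₙ)(v − uₙ) by forward Euler. All function names, step sizes, and iteration counts are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative sketch only: quasilinearization for u' = f(u) = -u**2, u(0) = 1,
# whose exact solution is 1/(1+t). Each pass solves the *linear* ODE
#   v' = f(u_n) + f'(u_n) * (v - u_n),   v(0) = u0,
# by forward Euler, and the iterates converge rapidly to the solution.

def f(u):
    return -u**2

def df(u):
    return -2.0 * u

def quasilinearize(u0, T=1.0, steps=1000, iters=6):
    t = np.linspace(0.0, T, steps + 1)
    h = t[1] - t[0]
    u = np.full_like(t, u0)          # crude constant initial guess
    for _ in range(iters):
        v = np.empty_like(t)
        v[0] = u0
        for k in range(steps):        # Euler step for the linearized ODE
            v[k + 1] = v[k] + h * (f(u[k]) + df(u[k]) * (v[k] - u[k]))
        u = v
    return t, u

t, u = quasilinearize(1.0)
exact = 1.0 / (1.0 + t)
print(np.max(np.abs(u - exact)))      # error is at the discretization level
```

After a few iterations the remaining error is dominated by the Euler discretization, reflecting the fast convergence of the outer iteration.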

2. Preliminaries

In this section, we state some fundamental definitions and useful theorems for future reference in proving the main result. The first is a comparison result; the second is an existence result in terms of upper and lower solutions with initial time difference.

An operator N : E → E, E = C[J, ℝⁿ], is said to be a causal operator if, for any x, y ∈ E such that x(s) = y(s) for t₀ ≤ s ≤ t, we have (Nx)(s) = (Ny)(s) for t₀ ≤ s ≤ t, t₀ ≤ t < t₀ + T.

Let us consider the causal functional equation

x(t) = (Nx)(t), x(t₀) = x₀,

where the causal operator N : E → E is continuous and x(t₀) = x₀, for t₀ ≥ 0, denotes the initial value for any x ∈ E.
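A concrete instance of a causal operator, offered purely as an illustration, is the Volterra integral operator (Nu)(t) = ∫_{t₀}^{t} u(s) ds: its value at time t depends only on the values of u up to time t. A discrete sketch (the grid, step size, and test functions below are choices made for the demonstration, not from the paper):

```python
import numpy as np

# Discrete sketch of a causal (Volterra-type) operator:
#   (N u)(t) = \int_{t0}^{t} u(s) ds,
# approximated with a cumulative left-endpoint sum.
# Causality: the value at grid index k uses only u[0..k-1].

def N(u, h):
    out = np.zeros_like(u)
    out[1:] = np.cumsum(u[:-1]) * h   # left-endpoint rule: past values only
    return out

h = 0.01
t = np.arange(0.0, 1.0 + h, h)
x = np.sin(t)
y = x.copy()
y[t > 0.5] += 1.0                     # x and y agree only on [0, 0.5]

Nx, Ny = N(x, h), N(y, h)
agree = t <= 0.5
print(np.allclose(Nx[agree], Ny[agree]))   # True: outputs agree where inputs do
```

Two inputs that coincide up to time t produce identical operator values up to t, which is exactly the defining property above.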

Let α₀, β₀ ∈ C¹[J, ℝ] with α₀(t) ≤ β₀(t) on J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and

Ω = {u ∈ E : α₀(t) ≤ u ≤ β₀(t), t ∈ J}.

We consider the following initial value problem for the causal differential equation:

u′(t) = (Nu)(t), u(t₀) = u₀ for t ≥ t₀, (2.1)

where N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and Ω ⊂ E.

Definition 2.1: α₀, β₀ ∈ C¹[J, E] are said to be the natural lower and upper solutions of (2.1), respectively, if the following inequalities are satisfied:

α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ u₀ for t ≥ t₀, (2.2)

β₀′(t) ≥ (Nβ₀)(t), β₀(t₀) ≥ u₀ for t ≥ t₀. (2.3)

Definition 2.2: α₀, β₀ ∈ C¹[J, E] are said to be the coupled lower and upper solutions of (2.1), respectively, if the following inequalities are satisfied:

α₀′(t) ≤ (Nβ₀)(t), α₀(t₀) ≤ u₀ for t ≥ t₀, (2.4)

β₀′(t) ≥ (Nα₀)(t), β₀(t₀) ≥ u₀ for t ≥ t₀. (2.5)

Definition 2.3: N : E → E is said to be semi-nondecreasing in t for each x if

(Nx)(t₁) = (Ny)(t₁) and (Nx)(t) ≤ (Ny)(t), t₀ ≤ t < t₁ < T + t₀, (2.6)

whenever

x(t₁) = y(t₁), x(t) < y(t), t₀ ≤ t < t₁ < T + t₀. (2.7)

Definition 2.4: Let N ∈ C[J × E, E]. At x ∈ E,

(N(x + h))(t) = (Nx)(t) + L(x, h)(t) + ‖h‖ ε(x, h)(t), (2.8)

where lim_{‖h‖ → 0} ‖ε(x, h)(t)‖ = 0 and L(x, ·)(t) is a linear operator. L(x, h)(t) is said to be the Fréchet derivative of N at x with increment h and remainder ε(x, h)(t).

3. Causal Functional Inequalities

We give some basic results on causal functional inequalities in the scalar case, following [5].

Theorem 3.1: Assume that

(i) N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and let α₀, β₀ ∈ E satisfy

α₀(t) < (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (3.1)

β₀(t) ≥ (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀; (3.2)

(ii) N is semi-nondecreasing, i.e., x(t₁) = y(t₁), x(t) < y(t), t₀ ≤ t < t₁ < T + t₀ implies (Nx)(t₁) = (Ny)(t₁) and (Nx)(t) ≤ (Ny)(t), t₀ ≤ t < t₁ < T + t₀.

Then

α₀(t) < β₀(t) for t₀ ≤ t ≤ T + t₀, (3.3)

provided

α₀(t₀) < β₀(t₀). (3.4)

Proof [5]: Suppose that the conclusion (3.3) of the theorem is not true and α₀(t) < (Nα₀)(t). Then, because of the continuity of the functions and (3.4), there would exist a t₁ > t₀ such that

α₀(t₁) = β₀(t₁) and α₀(t) < β₀(t) for t₀ ≤ t < t₁ < T + t₀. (3.5)

Since N is semi-nondecreasing, we have

α₀(t₁) < (Nα₀)(t₁) ≤ (Nβ₀)(t₁) ≤ β₀(t₁).

This is a contradiction, since α₀(t₁) = β₀(t₁). Therefore the claim (3.3) is proved.

Theorem 3.2: Assume that

(i) N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and let α₀, β₀ ∈ E satisfy

α₀(t) ≤ (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (3.6)

β₀(t) > (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀; (3.7)

(ii) N is semi-nondecreasing.

Then the conclusion of Theorem 3.1 remains the same.

Proof [5]: Suppose that the conclusion of the theorem is not true and β₀(t) > (Nβ₀)(t). Then, because of the continuity of the functions and since (3.4) is satisfied, there would exist a t₁ > t₀ such that (3.5) is satisfied. Since N is assumed to be semi-nondecreasing, we have

α₀(t₁) ≤ (Nα₀)(t₁) ≤ (Nβ₀)(t₁) < β₀(t₁).

This is again a contradiction, since α₀(t₁) = β₀(t₁). Therefore the claim (3.3) is proved.

We now state and prove the following theorem.

Theorem 3.3: Assume that

(i) N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and let α₀, β₀ ∈ E satisfy

α₀(t) < (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (3.8)

β₀(t) ≥ (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀, (3.9)

or

α₀(t) ≤ (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (3.10)

β₀(t) > (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀; (3.11)

(ii) N is semi-nondecreasing;

(iii) (Nx)(t) − (Ny)(t) ≤ L max_{t₀ ≤ s ≤ t} [x(s) − y(s)] whenever x(s) ≥ y(s) for t₀ ≤ s ≤ t, with 0 < L < 1.

Then

α₀(t) ≤ β₀(t) for t₀ ≤ t ≤ T + t₀, (3.12)

provided

α₀(t₀) ≤ β₀(t₀). (3.13)

Proof [5]: Let us define β₀ε(t) = β₀(t) + ε, where ε > 0 is arbitrarily small. Then we have

β₀ε(t₀) = β₀(t₀) + ε ≥ α₀(t₀) + ε > α₀(t₀)

and

β₀ε(t) ≥ β₀(t) for t₀ ≤ t ≤ t₀ + T.

Now, using the one-sided Lipschitz condition in (iii), we get

β₀ε(t) = β₀(t) + ε ≥ (Nβ₀)(t) + ε ≥ (Nβ₀ε)(t) − Lε + ε > (Nβ₀ε)(t)

for t₀ ≤ t ≤ t₀ + T, since 0 < L < 1. Now, applying Theorem 3.2 with (3.10) and (3.11) to α₀(t) and β₀ε(t), we find that

α₀(t) < β₀ε(t) for t₀ ≤ t ≤ t₀ + T. (3.14)

Since ε > 0 is arbitrarily small, letting ε → 0 in (3.14), we get

α₀(t) ≤ β₀(t) for t₀ ≤ t ≤ t₀ + T.

This completes the proof. The proof of Theorem 3.3 can also be carried out by using (3.8), (3.9) and Theorem 3.1.

4. Causal Differential Inequalities

We give a basic result on causal differential inequalities in the scalar case as follows.

Theorem 4.1: Assume that

(i) α₀, β₀ ∈ C¹[J, ℝ] and N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and let α₀, β₀ satisfy

α₀′(t) < (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (4.1)

β₀′(t) ≥ (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀, (4.2)

or

α₀′(t) ≤ (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (4.3)

β₀′(t) > (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀; (4.4)

(ii) the causal operator N is semi-nondecreasing.

Then

α₀(t₀) < β₀(t₀)

implies

α₀(t) < β₀(t) for t₀ ≤ t ≤ T + t₀. (4.5)

Proof: Suppose that the conclusion (4.5) of Theorem 4.1 is false and α₀′(t) < (Nα₀)(t). Then the continuity of α₀(t), β₀(t) and the fact that α₀(t₀) < β₀(t₀) yield that there exists a t₁ > t₀ such that

α₀(t₁) = β₀(t₁), α₀(t) < β₀(t) for t₀ ≤ t < t₁. (4.6)

The semi-nondecreasing nature of N and (4.6) give

(Nα₀)(t₁) ≤ (Nβ₀)(t₁). (4.7)

In view of (4.6), we get, for small h > 0,

α₀(t₁ − h) − α₀(t₁) < β₀(t₁ − h) − β₀(t₁),

and hence (4.1) and (4.7) show that

(Nα₀)(t₁) ≤ (Nβ₀)(t₁) ≤ β₀′(t₁) ≤ α₀′(t₁) < (Nα₀)(t₁).

This is a contradiction, and therefore the claim (4.5) is valid. The proof is complete. The proof of Theorem 4.1 can also be carried out by using (4.3) and (4.4).

As before, for nonstrict differential inequalities we require a one-sided Lipschitz condition.

Theorem 4.2: Assume that

(i) α₀, β₀ ∈ C¹[J, ℝ] and N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, t₀ + T], t₀, T ∈ ℝ₊, and let α₀, β₀ satisfy

α₀′(t) < (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (4.8)

β₀′(t) ≥ (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀, (4.9)

or

α₀′(t) ≤ (Nα₀)(t) for t₀ ≤ t ≤ T + t₀, (4.10)

β₀′(t) > (Nβ₀)(t) for t₀ ≤ t ≤ T + t₀; (4.11)

(ii) N is semi-nondecreasing;

(iii) (Nx)(t) − (Ny)(t) ≤ L max_{t₀ ≤ s ≤ t} [x(s) − y(s)] whenever x(s) ≥ y(s) for t₀ ≤ s ≤ t, with 0 < L < 1.

Then

α₀(t₀) ≤ β₀(t₀) (4.12)

implies

α₀(t) ≤ β₀(t) for t₀ ≤ t ≤ T + t₀. (4.13)

Proof: Let us set β₀ε(t) = β₀(t) + ε exp(2L(t − t₀)) for small ε > 0. Then

β₀ε(t) > β₀(t) and β₀ε(t₀) > β₀(t₀) ≥ α₀(t₀). (4.14)

Now we use the one-sided Lipschitz condition

(Nβ₀ε)(t) − (Nβ₀)(t) ≤ L max_{t₀ ≤ s ≤ t} [β₀ε(s) − β₀(s)] ≤ Lε exp(2L(t − t₀))

to obtain

β₀ε′(t) = β₀′(t) + 2Lε exp(2L(t − t₀))
≥ (Nβ₀)(t) + 2Lε exp(2L(t − t₀))
≥ (Nβ₀ε)(t) − Lε exp(2L(t − t₀)) + 2Lε exp(2L(t − t₀))
= (Nβ₀ε)(t) + Lε exp(2L(t − t₀)) > (Nβ₀ε)(t).

We will show that α₀(t) < β₀ε(t) for t₀ ≤ t ≤ t₀ + T. If this is not true, because of (4.14) there would exist a t₁ > t₀ such that

α₀(t₁) = β₀ε(t₁) and α₀(t) < β₀ε(t), t₀ ≤ t < t₁ < T. (4.15)

Now we have

β₀ε′(t) > (Nβ₀ε)(t), β₀ε(t₀) ≥ x₀ and α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ x₀

for t₀ ≤ t ≤ t₀ + T. The semi-nondecreasing nature of N and (4.15) give

(Nα₀)(t₁) = (Nβ₀ε)(t₁) and (Nα₀)(t) ≤ (Nβ₀ε)(t), t₀ ≤ t < t₁ < T. (4.16)

Also, in view of (4.15), we get for small h > 0

α₀(t₁ − h) − α₀(t₁) < β₀ε(t₁ − h) − β₀ε(t₁).

Hence assumption (i), (4.11) and (4.16) show that

(Nα₀)(t₁) ≥ α₀′(t₁) ≥ β₀ε′(t₁) > (Nβ₀ε)(t₁) ≥ (Nα₀)(t₁).

This leads to a contradiction because of (4.15). Then we have

β₀ε′(t) > (Nβ₀ε)(t), β₀ε(t₀) ≥ β₀(t₀) and α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ β₀ε(t₀)

for t₀ ≤ t ≤ t₀ + T, and α₀(t) < β₀ε(t) for t₀ ≤ t ≤ t₀ + T. Letting ε approach zero, we get α₀(t) ≤ β₀(t) for t₀ ≤ t ≤ t₀ + T. This completes the proof.

The proof of Theorem 4.2 can also be carried out by using (4.10) and (4.11).

Theorem 4.3: Assume that

(i) α₀ ∈ C¹[[t₀, t₀ + T], E], t₀, T > 0, β₀ ∈ C¹[[τ₀, τ₀ + T], E], τ₀ ≥ t₀, and N ∈ C[ℝ₊ × E, E], with

α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ x₀ for t₀ ≤ t ≤ t₀ + T and

β₀′(t) ≥ (Nβ₀)(t), x₀ ≤ β₀(τ₀) for τ₀ ≤ t ≤ τ₀ + T;

(ii) (Nx)(t) − (Ny)(t) ≤ L max_{t₀ ≤ s ≤ t} [x(s) − y(s)] whenever x(s) ≥ y(s) for t₀ ≤ s ≤ t, with 0 < L < 1;

(iii) (Nu)(t) is semi-nondecreasing in u for each t;

(iv) t₀ < τ₀, and (Nu)(t) is nondecreasing in t for each u.

Then (I) α₀(t) ≤ β₀(t + η) for t₀ ≤ t ≤ t₀ + T, where η = τ₀ − t₀; (II) α₀(t − η) ≤ β₀(t) for τ₀ ≤ t ≤ τ₀ + T, where η = τ₀ − t₀.

Proof: Set β̃₀(t) = β₀(t + η), so that β̃₀(t₀) = β₀(t₀ + η) = β₀(τ₀) ≥ x₀ ≥ α₀(t₀), and

β̃₀′(t) = β₀′(t + η) ≥ (Nβ₀)(t + η) = (Nβ̃₀)(t) for t₀ ≤ t ≤ t₀ + T.

Let β̃₀ε(t) = β̃₀(t) + ε exp(2L(t − t₀)) for small ε > 0. Then

β̃₀ε(t) > β̃₀(t) and β̃₀ε(t₀) > β̃₀(t₀) ≥ α₀(t₀). (4.17)

We will show that α₀(t) < β̃₀ε(t) for t₀ ≤ t ≤ t₀ + T. If this is not true, because of (4.17) there would exist a t₁ > t₀ such that

α₀(t₁) = β̃₀ε(t₁) and α₀(t) < β̃₀ε(t), t₀ ≤ t < t₁ < T. (4.18)

Now we use the one-sided Lipschitz condition

(Nβ̃₀ε)(t) − (Nβ̃₀)(t) ≤ L max_{t₀ ≤ s ≤ t} [β̃₀ε(s) − β̃₀(s)] ≤ Lε exp(2L(t − t₀))

to obtain

β̃₀ε′(t) = β̃₀′(t) + 2Lε exp(2L(t − t₀))
≥ (Nβ̃₀)(t) + 2Lε exp(2L(t − t₀))
≥ (Nβ̃₀ε)(t) − Lε exp(2L(t − t₀)) + 2Lε exp(2L(t − t₀)) > (Nβ̃₀ε)(t).

Now we have

β̃₀ε′(t) > (Nβ̃₀ε)(t), β̃₀ε(t₀) ≥ x₀ and α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ x₀ for t₀ ≤ t ≤ t₀ + T.

The semi-nondecreasing nature of N and (4.18) give

(Nα₀)(t₁) = (Nβ̃₀ε)(t₁) and (Nα₀)(t) ≤ (Nβ̃₀ε)(t), t₀ ≤ t < t₁ < T. (4.19)

Also, in view of (4.18), we get for small h > 0

α₀(t₁ − h) − α₀(t₁) < β̃₀ε(t₁ − h) − β̃₀ε(t₁). (4.20)

Hence assumption (i) and (4.20) show that

(Nα₀)(t₁) ≥ α₀′(t₁) ≥ β̃₀ε′(t₁) > (Nβ̃₀ε)(t₁) ≥ (Nα₀)(t₁).

Since t₀ < τ₀, assumption (iv), (Nu)(t) being nondecreasing in t, leads to a contradiction because of (4.19). By applying Theorem 4.1, we have

β̃₀ε′(t) > (Nβ̃₀ε)(t), β̃₀ε(t₀) ≥ β̃₀(t₀) and α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ β̃₀ε(t₀)

for t₀ ≤ t ≤ t₀ + T, and α₀(t) < β̃₀ε(t) for t₀ ≤ t ≤ t₀ + T. Letting ε approach zero, we get α₀(t) ≤ β̃₀(t) for t₀ ≤ t ≤ t₀ + T, which proves (I).

To prove (II), we set α̃₀(t) = α₀(t − η) for t ≥ τ₀, so that α̃₀(τ₀) = α₀(τ₀ − η) = α₀(t₀) ≤ x₀ ≤ β₀(τ₀), and

α̃₀′(t) = α₀′(t − η) ≤ (Nα₀)(t − η) = (Nα̃₀)(t) for τ₀ ≤ t ≤ τ₀ + T.

Setting α̃₀ε(t) = α̃₀(t) − ε exp(2L(t − τ₀)) for some small ε > 0 and proceeding similarly, we derive the estimate α₀(t − η) ≤ β₀(t) for τ₀ ≤ t ≤ τ₀ + T, where η = τ₀ − t₀. This completes the proof.

If we know the existence of lower and upper solutions of (2.1) such that α₀(t) ≤ β₀(t + η), t ∈ J, then we can prove the existence of a solution of the initial value problem (2.1) in the closed set

Ω′ = {u ∈ E : α₀(t) ≤ u ≤ β₀(t + η), t ∈ J}, where η = τ₀ − t₀.

Theorem 4.4: Assume that

(i) α₀ ∈ C¹[[t₀, t₀ + T], E], t₀, T > 0, β₀ ∈ C¹[[τ₀, τ₀ + T], E], τ₀ ≥ t₀, and N ∈ C[ℝ₊ × E, E], with

α₀′(t) ≤ (Nα₀)(t), α₀(t₀) ≤ x₀ for t₀ ≤ t ≤ t₀ + T and

β₀′(t) ≥ (Nβ₀)(t), x₀ ≤ β₀(τ₀) for τ₀ ≤ t ≤ τ₀ + T;

(ii) (Nx)(t) − (Ny)(t) ≤ L max_{t₀ ≤ s ≤ t} [x(s) − y(s)] whenever x(s) ≥ y(s) for t₀ ≤ s ≤ t, with 0 < L < 1;

(iii) (Nu)(t) is semi-nondecreasing in u for each t;

(iv) t₀ < τ₀, and (Nu)(t) is nondecreasing in t for each u;

(v) the operator N is bounded on Ω′.

Then there exists a solution u(t) of (2.1) in the closed set Ω′ with u(t₀) = u₀, satisfying α₀(t) ≤ u(t) ≤ β₀(t + η) for t₀ ≤ t ≤ t₀ + T.

Proof: Let the truncation operator P be defined by

(Pu)(t) = max[α₀(t), min[u(t), β₀(t + η)]].

Then (N Pu)(t) defines a continuous extension of N to E which is also bounded, since N is assumed to be bounded on Ω′. Therefore, there exists a solution of the initial value problem

u′(t) = (N Pu)(t), u(t₀) = u₀

on J. For any ε > 0, and for β̃₀(t) = β₀(t + η), consider

β̃₀ε(t) = β̃₀(t) + ε(1 + t),

α₀ε(t) = α₀(t) − ε(1 + t).

Then we have α₀ε(t₀) < u₀ < β̃₀ε(t₀), since α₀(t₀) ≤ u₀ ≤ β̃₀(t₀). We need to show that

α₀ε(t) < u(t) < β̃₀ε(t) on J. (4.21)

If this is not true, then there exists a t₁ ∈ (t₀, t₀ + T] at which, say, u(t₁) = β̃₀ε(t₁) and α₀ε(t) < u(t) < β̃₀ε(t), t₀ ≤ t < t₁. Then

u(t₁) > β̃₀(t₁) and (Pu)(t₁) = β̃₀(t₁). (4.22)

Moreover, α₀(t₁) ≤ (Pu)(t₁) ≤ β₀(t₁ + η). Hence β̃₀′(t₁) ≥ (N Pu)(t₁) = u′(t₁). Since β̃₀ε′(t₁) > β̃₀′(t₁), we have β̃₀ε′(t₁) > u′(t₁). However, we also have β̃₀ε′(t₁) ≤ u′(t₁), since u(t₁) = β̃₀ε(t₁) and u(t) < β̃₀ε(t), t₀ ≤ t < t₁. This contradicts β̃₀ε′(t₁) > u′(t₁). Hence u(t) < β̃₀ε(t) for all t ∈ J; a similar argument gives α₀ε(t) < u(t), and consequently (4.21) holds on J, i.e., α₀ε(t) < u(t) < β̃₀ε(t) on J. Letting ε → 0, we get

α₀(t) ≤ u(t) ≤ β̃₀(t) on J. (4.23)
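The truncation operator P used in this proof is pointwise clamping between the lower solution and the shifted upper solution. A small numeric sketch (the particular α₀, β₀ and u below are illustrative choices, not from the paper):

```python
import numpy as np

# Pointwise sketch of the truncation operator from the proof of Theorem 4.4:
#   (P u)(t) = max[ alpha0(t), min[ u(t), beta0(t + eta) ] ]
# It clamps u between the lower solution and the shifted upper solution,
# so that N∘P gives a bounded continuous extension of N off the sector.

def P(u, alpha0, beta0_shifted):
    return np.maximum(alpha0, np.minimum(u, beta0_shifted))

t = np.linspace(0.0, 1.0, 101)
alpha0 = -np.ones_like(t)          # illustrative lower solution
beta0_shifted = 1.0 + t            # illustrative shifted upper solution
u = 3.0 * np.sin(6.0 * t)          # a function that leaves the sector

v = P(u, alpha0, beta0_shifted)
print(bool(np.all(v >= alpha0) and np.all(v <= beta0_shifted)))   # True
```

Wherever u already lies inside the sector, Pu coincides with u, which is why a fixed point of the truncated problem solves the original one.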


5. Main Results

In this section, we prove the main theorem, which gives several different conditions under which the method of generalized quasilinearization applies to nonlinear causal differential equations [5] with initial time difference, and we state remarks and corollaries for special cases.

Theorem 5.1: Assume that

(i) N : E → E is a continuous causal operator, E = C[J, ℝ] for J = [t₀, τ₀ + T], t₀, τ₀, T ∈ ℝ₊, and there exists a constant M such that |(Nu)(t)| ≤ M on J × Ω;

(ii) (Nu)(t) is semi-nondecreasing in u for each t ∈ J;

(iii) α₀ ∈ C¹[[t₀, t₀ + T], E] and β₀ ∈ C¹[[τ₀, τ₀ + T], E] for τ₀ − t₀ > 0 and T > 0, with

α₀′(t) ≤ (Nα₀)(t) for t₀ ≤ t ≤ t₀ + T,

β₀′(t) ≥ (Nβ₀)(t) for τ₀ ≤ t ≤ τ₀ + T,

where α₀(t₀) ≤ β₀(τ₀);

(iv) t₀ < s₀ < τ₀, (Nu)(t) is nondecreasing in t for each u, and α₀, β₀ ∈ C¹[J, ℝ] are such that α₀′(t) ≤ (Nα₀)(t), (Nβ₀)(t) ≤ β₀′(t) and α₀(t) ≤ β₀(t), t ∈ J;

(v) the Fréchet derivative (N_x x)(t) exists and is continuous, ‖(N_x x)(t)‖ ≤ L₁ for (t, x) ∈ J × Ω for some L₁ > 0, and (Ny)(t) ≥ (Nx)(t) − (N_x y)(t)(x − y) whenever α₀(t) ≤ y ≤ x ≤ β₀(t), t ∈ J;

(vi) ‖(N_x x)(t) − (N_x y)(t)‖ ≤ L₂(x − y)^γ, t ∈ J, where L₂ is a positive constant and 0 < γ ≤ 1.

Then there exist monotone sequences {α̃ₙ(t)} and {β̃ₙ(t)} which converge uniformly to the unique solution of (2.1) with u(s₀) = x₀, where s₀ is between the initial times t₀ and τ₀, and the convergence is superlinear.

Proof: Since β̃₀(t) = β₀(t + η₁), η₁ = τ₀ − t₀, we get β̃₀(t₀) = β₀(τ₀) ≥ α₀(t₀) = α̃₀(t₀) and β̃₀′(t) = β₀′(t + η₁) ≥ (Nβ₀)(t + η₁) for t₀ ≤ t ≤ t₀ + T. Using assumption (v), it is clear that (Nx)(t) satisfies a Lipschitz condition in x for (t, x) ∈ J × Ω. Furthermore, we have the following inequalities:

(Nx)(t) ≤ (Ny)(t) + (N_x y)(t)(x − y) whenever α̃₀(t) ≤ y ≤ x ≤ β̃₀(t) on J, (5.1)

and also, by using (iv), whenever α̃₀(t) ≤ y ≤ x ≤ β̃₀(t),

(Nx)(t) − (Ny)(t) ≤ L(x − y). (5.2)


Consider the linear IVPs of causal differential equations, for η₂ = s₀ − t₀:

α̃₁′ = (Nα̃₀)(t + η₂) + (N_x α̃₀)(t + η₂)(α̃₁ − α̃₀), α̃₁(t₀) = u₀, (5.3)

β̃₁′ = (Nβ̃₀)(t + η₂) + (N_x α̃₀)(t + η₂)(β̃₁ − β̃₀), β̃₁(t₀) = u₀, (5.4)

where α̃₀(t₀) ≤ u₀ ≤ β̃₀(t₀). We shall show that α̃₀ ≤ α̃₁ on J. To do this, let p = α̃₀(t) − α̃₁(t), so that p(t₀) ≤ 0. Then

p′ = α̃₀′ − α̃₁′ ≤ (Nα̃₀)(t + η₂) − [(Nα̃₀)(t + η₂) + (N_x α̃₀)(t + η₂)(α̃₁ − α̃₀)] = (N_x α̃₀)(t + η₂)p.

Theorem 4.2 gives p(t) ≤ 0 on J, proving that α̃₀(t) ≤ α̃₁(t) on J. Now set p = β̃₁ − β̃₀ and note that p(t₀) ≤ 0. Also, using (5.1),

p′ = β̃₁′ − β̃₀′ ≤ (Nβ̃₀)(t + η₂) + (N_x α̃₀)(t + η₂)(β̃₁ − β̃₀) − (Nβ̃₀)(t + η₂) = (N_x α̃₀)(t + η₂)p,

which again implies β̃₁(t) ≤ β̃₀(t) on J.

Similarly, we can obtain that α̃₀(t) ≤ β̃₁(t) ≤ β̃₀(t) on J. In order to prove that α̃₁(t) ≤ β̃₁(t) on J, we proceed as follows. Since α̃₀ ≤ α̃₁ ≤ β̃₀, using (5.1) we see that

α̃₁′(t) = (Nα̃₀)(t + η₂) + (N_x α̃₀)(t + η₂)(α̃₁ − α̃₀) ≤ (Nα̃₁)(t + η₂).

Similarly, (Nβ̃₁)(t + η₂) ≤ β̃₁′(t), and therefore by Theorem 4.2 it follows that α̃₁(t) ≤ β̃₁(t) on J, which shows that

α̃₀(t) ≤ α̃₁(t) ≤ β̃₁(t) ≤ β̃₀(t) on J.

Assume that for some n > 1,

α̃ₙ′ ≤ (Nα̃ₙ)(t + η₂), (Nβ̃ₙ)(t + η₂) ≤ β̃ₙ′ and α̃ₙ(t) ≤ β̃ₙ(t), t ∈ J.

We must show that

α̃ₙ(t) ≤ α̃ₙ₊₁(t) ≤ β̃ₙ₊₁(t) ≤ β̃ₙ(t) on J, (5.5)

where α̃ₙ₊₁(t) and β̃ₙ₊₁(t) are the solutions of the linear IVPs of causal differential equations

α̃ₙ₊₁′ = (Nα̃ₙ)(t + η₂) + (N_x α̃ₙ)(t + η₂)(α̃ₙ₊₁ − α̃ₙ), α̃ₙ₊₁(t₀) = u₀, (5.6)

β̃ₙ₊₁′ = (Nβ̃ₙ)(t + η₂) + (N_x α̃ₙ)(t + η₂)(β̃ₙ₊₁ − β̃ₙ), β̃ₙ₊₁(t₀) = u₀. (5.7)

Hence, setting p = α̃ₙ(t) − α̃ₙ₊₁(t), it follows as before that p′ ≤ (N_x α̃ₙ)(t + η₂)p on J, and hence α̃ₙ(t) ≤ α̃ₙ₊₁(t) ≤ β̃ₙ(t) on J. In a similar manner, we can prove that α̃ₙ(t) ≤ β̃ₙ₊₁(t) ≤ β̃ₙ(t) on J. Using (5.1), we obtain

α̃ₙ₊₁′ = (Nα̃ₙ)(t + η₂) + (N_x α̃ₙ)(t + η₂)(α̃ₙ₊₁ − α̃ₙ)
≤ (Nα̃ₙ₊₁)(t + η₂) − (N_x α̃ₙ)(t + η₂)(α̃ₙ₊₁ − α̃ₙ) + (N_x α̃ₙ)(t + η₂)(α̃ₙ₊₁ − α̃ₙ)
= (Nα̃ₙ₊₁)(t + η₂).

Similar arguments yield (Nβ̃ₙ₊₁)(t + η₂) ≤ β̃ₙ₊₁′, and hence Theorem 4.2 shows that α̃ₙ₊₁(t) ≤ β̃ₙ₊₁(t) on J, which proves that (5.5) is true. So by induction we obtain

α̃₀ ≤ α̃₁ ≤ ⋯ ≤ α̃ₙ ≤ α̃ₙ₊₁ ≤ β̃ₙ₊₁ ≤ β̃ₙ ≤ ⋯ ≤ β̃₁ ≤ β̃₀ on J.

Now, using standard arguments (the Arzelà–Ascoli and Dini theorems; see [2]), it can be shown that the sequences {α̃ₙ(t)} and {β̃ₙ(t)} converge uniformly and monotonically to the unique solution ũ(t) of

ũ′(t) = (Nũ)(t + η₂), ũ(t₀) = u₀ (5.8)

on J. But letting s = t + η₂ and changing the variable, we can show that (5.8) is equivalent to

u′(s) = (Nu)(s), u(s₀) = u₀.

Finally, to prove superlinear convergence, we let

pₙ(t) = ũ(t) − α̃ₙ(t) and qₙ(t) = β̃ₙ(t) − ũ(t).

Note that pₙ(t₀) = qₙ(t₀) = 0.


Then

pₙ′(t) = ũ′(t) − α̃ₙ′(t)
= (Nũ)(t + η₂) − [(Nα̃ₙ₋₁)(t + η₂) + (N_x α̃ₙ₋₁)(t + η₂)(α̃ₙ − α̃ₙ₋₁)]
= ∫₀¹ (N_x(sũ + (1 − s)α̃ₙ₋₁))(t + η₂)(ũ − α̃ₙ₋₁) ds − (N_x α̃ₙ₋₁)(t + η₂)(α̃ₙ − α̃ₙ₋₁)
= ∫₀¹ (N_x(sũ + (1 − s)α̃ₙ₋₁))(t + η₂) pₙ₋₁ ds − (N_x α̃ₙ₋₁)(t + η₂)(pₙ₋₁ − pₙ)
= ∫₀¹ [(N_x(sũ + (1 − s)α̃ₙ₋₁))(t + η₂) − (N_x α̃ₙ₋₁)(t + η₂)] pₙ₋₁ ds + (N_x α̃ₙ₋₁)(t + η₂) pₙ.

From (v) and (vi), it follows that

‖pₙ′(t)‖ ≤ ∫₀¹ L₂‖sũ + (1 − s)α̃ₙ₋₁ − α̃ₙ₋₁‖^γ ‖pₙ₋₁‖ ds + L₁‖pₙ‖
= ∫₀¹ L₂‖s pₙ₋₁‖^γ ‖pₙ₋₁‖ ds + L₁‖pₙ‖
≤ L₂‖pₙ₋₁‖^{γ+1} + L₁‖pₙ‖.

Then, setting aₙ = ‖pₙ‖, we find aₙ′ ≤ ‖pₙ′‖ ≤ L₂(aₙ₋₁)^{γ+1} + L₁aₙ.

Now Gronwall's inequality implies

0 ≤ aₙ(t) ≤ L₂ ∫₀ᵗ exp[L₁(t − s)] (aₙ₋₁(s))^{γ+1} ds on J,

which yields the estimate

max_J ‖pₙ(t)‖ ≤ (L₂ exp(L₁T)/L₁) max_J ‖pₙ₋₁(t)‖^{γ+1}.
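This estimate says the maximal error obeys a recurrence of the form aₙ ≤ C aₙ₋₁^{1+γ}, which is superlinear: each error is roughly the previous one raised to the power 1 + γ. A short numeric sketch of such a recurrence (C and γ below are assumed illustrative values, not constants from the paper):

```python
# Numeric sketch of the superlinear error recurrence
#   a_n <= C * a_{n-1}**(1 + gamma),  0 < gamma <= 1,
# with illustrative values C = 2.0, gamma = 0.5.
# Once C * a_n**gamma < 1, the contraction factor itself shrinks at
# every step, so the decay is faster than any fixed geometric rate.

C, gamma = 2.0, 0.5
a = [0.1]                          # assumed initial error bound
for _ in range(12):
    a.append(C * a[-1] ** (1.0 + gamma))

ratios = [a[n + 1] / a[n] for n in range(len(a) - 1)]
print(a[-1] < 1e-30)               # True: errors collapse superlinearly
print(all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:])))  # factors shrink
```

With γ = 1 (a Lipschitz Fréchet derivative), the same recurrence gives the familiar quadratic convergence of Newton-type methods.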


Similarly,

qₙ′(t) = β̃ₙ′(t) − ũ′(t)
= (Nβ̃ₙ₋₁)(t + η₂) + (N_x α̃ₙ₋₁)(t + η₂)(β̃ₙ − β̃ₙ₋₁) − (Nũ)(t + η₂)
= ∫₀¹ (N_x(sβ̃ₙ₋₁ + (1 − s)ũ))(t + η₂)(β̃ₙ₋₁ − ũ) ds + (N_x α̃ₙ₋₁)(t + η₂)(qₙ − qₙ₋₁)
= ∫₀¹ [(N_x(sβ̃ₙ₋₁ + (1 − s)ũ))(t + η₂) − (N_x ũ)(t + η₂)] qₙ₋₁ ds
+ [(N_x ũ)(t + η₂) − (N_x α̃ₙ₋₁)(t + η₂)] qₙ₋₁ + (N_x α̃ₙ₋₁)(t + η₂) qₙ.

We find, using (v) and (vi), that

‖qₙ′(t)‖ ≤ ∫₀¹ L₂‖sβ̃ₙ₋₁ + (1 − s)ũ − ũ‖^γ ‖qₙ₋₁‖ ds + L₂‖ũ − α̃ₙ₋₁‖^γ ‖qₙ₋₁‖ + L₁‖qₙ‖
≤ L₂‖qₙ₋₁‖^{γ+1} + L₂‖pₙ₋₁‖^γ ‖qₙ₋₁‖ + L₁‖qₙ‖.

Setting bₙ = ‖qₙ‖ and aₙ₋₁ = ‖pₙ₋₁‖, it easily follows that

bₙ′ ≤ ‖qₙ′‖ ≤ L₂(bₙ₋₁)^{γ+1} + L₂(aₙ₋₁)^γ bₙ₋₁ + L₁bₙ.

Similarly, an application of Gronwall's inequality yields

0 ≤ ‖qₙ‖ ≤ L₂ ∫₀ᵗ exp[L₁(t − s)] [‖qₙ₋₁(s)‖^{γ+1} + ‖pₙ₋₁(s)‖^γ ‖qₙ₋₁(s)‖] ds on J,

and hence

max_J ‖qₙ(t)‖ ≤ (L₂ exp(L₁T)/L₁) [max_J ‖qₙ₋₁(t)‖^{γ+1} + max_J ‖pₙ₋₁(t)‖^γ ‖qₙ₋₁(t)‖].

This completes the proof.

Next we give the following remarks and corollaries for special cases.

Remark 5.1: If, instead of assumption (vi) in Theorem 5.1, we assume that

‖(N_x x)(t) − (N_x y)(t)‖ ≤ L₂‖x − y‖, t ∈ J,

then the convergence of the sequences is quadratic.

Remark 5.2: Let the assumption of Remark 5.1 hold. If we assume that (Nx)(t) is uniformly convex in x instead of assumption (iv) in Theorem 5.1, then by Lemma 4.5.1 in [2] we have quadratic convergence as well. Moreover, the quasimonotonicity of (Nx)(t) in x implies, by Lemma 4.2.5 in [2], that (N_x x)(t) is also quasimonotone in x.

Corollary 5.1: If the assumptions of Theorem 5.1 hold with s₀ = t₀, then the conclusion of the theorem remains valid.

Proof: For the proof, we let β̃₀(t) = β₀(t + η₁), α̃₀(t) = α₀(t) and ũ(t) = u(t), and proceed as we did in Theorem 5.1.

Corollary 5.2: If the assumptions of Theorem 5.1 hold with s₀ = τ₀, then the conclusion of the theorem remains valid.

Proof: Similarly, we let α̃₀(t) = α₀(t − η₁), β̃₀(t) = β₀(t) and ũ(t) = u(t), and proceed as we did in Theorem 5.1.

References

[1] Jankowski, T.: Quadratic Convergence of Monotone Iterations for Differential Equations with Initial Time Difference. Dynamic Systems and Applications 14, 245-252 (2005).

[2] Köksal, S. and Yakar, C.: Generalized Quasilinearization Method with Initial Time Difference. Simulation, an International Journal of Electrical, Electronic and other Physical Systems 24 (5), (2002).

[3] Ladde, G.S., Lakshmikantham, V. and Vatsala, A.S.: Monotone Iterative Techniques for Nonlinear Differential Equations. Boston, Pitman Publishing Inc., 1985.

[4] Lakshmikantham, V. and Leela, S.: Nonlinear Differential Equations in Abstract Spaces. New York, Pergamon Press, 1981.

[5] Lakshmikantham, V., Leela, S., Drici, Z. and McRae, F.A.: Theory of Causal Differential Equations. Amsterdam, World Scientific, 2009.

[6] Lakshmikantham, V., Leela, S. and Vasundhara Devi, J.: Another Approach to the Theory of Differential Inequalities Relative to Changes in the Initial Times. Journal of Inequalities and Applications 4, 163-174 (1999).

[7] Lakshmikantham, V. and Vatsala, A.S.: Differential Inequalities with Initial Time Difference and Applications. Journal of Inequalities and Applications 3, 233-244 (1999).

[8] Lakshmikantham, V. and Vatsala, A.S.: Generalized Quasilinearization for Nonlinear Problems. The Netherlands, Kluwer Academic Publishers, 1998.

[9] Lakshmikantham, V. and Vatsala, A.S.: Theory of Differential and Integral Inequalities with Initial Time Difference and Applications. Analytic and Geometric Inequalities and Applications, Kluwer Academic Publishers, 191-203 (1999).

[10] Yakar, C. and Deo, S.G.: Variation of Parameters Formulae with Initial Time Difference for Linear Integrodifferential Equations. Journal of Applicable Analysis, Taylor & Francis, 85, 333-343 (2006).

[11] Yakar, C. and Yakar, A.: An Extension of the Quasilinearization Method with Initial Time Difference. Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis. Waterloo, Watam Press, 14 (S2), 275-279 (2007).

[12] Yakar, C. and Yakar, A.: Further Generalization of Quasilinearization Method with Initial Time Difference. Journal of Applied Functional Analysis, Eudoxus Press, 4, 714-727 (2009).

[13] Yakar, C.: Initial Time Difference Quasilinearization Method in Banach Space. Journal of Communications in Mathematics and Applications 2 (2-3), 77-85 (2011).


Current address: Gebze Institute of Technology, Faculty of Sciences, Department of Mathematics, Applied Mathematics, Gebze-Kocaeli 141-41400, TURKEY

E-mail address : cyakar@gyte.edu.tr
