Vol. 11, No. 1, pp. 71-79, 2010, Applied Mathematics
A Note on Bounds for Eigenvalues of Matrix Polynomials
Mustafa Bahşi
Çumra Imam Hatip Lisesi, 42500, Konya, Türkiye
e-mail: mhvbahsi@yahoo.com
Received Date: March 2, 2009 Accepted Date: January 26, 2010
Abstract. In this paper, we propose some bounds for the moduli of the eigenvalues of matrix polynomials. The proposed bounds involve norms of the coefficient matrices and their inverses.
Key words: Polynomial eigenvalue problem; Matrix polynomial.
2000 Mathematics Subject Classification: 15A18; 15A42; 65F15; 65F35.

1. Introduction
The problem of finding the roots of a scalar polynomial is ill-conditioned. As a consequence, computing the roots of a scalar polynomial of degree $n$ ($n \ge 3$) is very difficult. In some problems, knowledge of the location of the roots may be beneficial. There are many known bounds for the roots of scalar polynomials. Mohammad [7,8] obtained bounds for the zeros of the scalar polynomial

(1.1) $$p(z) = a_n z^n + a_{n-1}z^{n-1} + \cdots + a_0,$$

such as

$$|z| \le G\left(1 + \frac{1}{|a_n|}\right) \quad\text{and}\quad |z| \le \max\left(1, \frac{M}{|a_n|}\right),$$

where
$$G = \max_{k=1:n-1} |a_k|^{1/k}, \qquad M = \max_{|z|=1}\left|a_{n-1}z^{n-1} + \cdots + a_0\right| = \max_{|z|=1}\left|a_0 z^{n-1} + \cdots + a_{n-1}\right|,$$
and $a_n \neq 0$.
In [1], Joyal et al. obtained a bound for the zeros of the scalar polynomial (1.1) with $a_n = 1$. This bound has the following form:

$$|z| \le \frac{1}{2}\left[1 + |a_{n-1}| + \sqrt{\left(1 - |a_{n-1}|\right)^2 + 4\alpha}\right], \qquad \alpha = \max_{j=0:n-1}|a_j|.$$
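As a quick numerical illustration (mine, not the paper's; `numpy` is assumed to be available), the Joyal et al. bound can be evaluated for a small monic polynomial and compared against the moduli of its computed roots:

```python
import numpy as np

def joyal_bound(coeffs):
    """Upper bound for root moduli of the monic polynomial
    z^n + coeffs[-1] z^(n-1) + ... + coeffs[0]  (leading 1 excluded).
    Implements |z| <= (1 + |a_{n-1}| + sqrt((1 - |a_{n-1}|)^2 + 4*alpha))/2
    with alpha = max_j |a_j|."""
    a = np.abs(np.asarray(coeffs, dtype=complex))
    alpha = a.max()
    an1 = a[-1]  # |a_{n-1}|
    return 0.5 * (1 + an1 + np.sqrt((1 - an1) ** 2 + 4 * alpha))

# Example polynomial (my choice): z^3 - 2z^2 + z - 3, so [a0, a1, a2] = [-3, 1, -2]
bound = joyal_bound([-3.0, 1.0, -2.0])
roots = np.roots([1.0, -2.0, 1.0, -3.0])
print(bound, np.abs(roots).max())
```

Every root modulus must lie below the returned bound; here $\alpha = 3$, $|a_{n-1}| = 2$, so the bound is $(3 + \sqrt{13})/2 \approx 3.30$.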
For other bounds, see [1,4]. If we replace the scalar coefficients of the polynomial (1.1) by matrices, then we obtain matrix polynomials.
Consider the matrices $A_0, A_1, \ldots, A_k \in \mathbb{C}^{n\times n}$. A function

(1.2) $$P(\lambda) = \lambda^k A_k + \lambda^{k-1}A_{k-1} + \cdots + A_0$$

from $\mathbb{C}$ to $\mathbb{C}^{n\times n}$ is called a matrix polynomial [5,6]. If $A_i = A_i^*$ $(i = 0, 1, \ldots, k)$, then $P(\lambda)$ is said to be a self-adjoint matrix polynomial. If $k = 1$, then $P(\lambda) = \lambda A_1 + A_0$ is called a matrix pencil [2].
In [9], the spectrum of the matrix polynomial $P(\lambda)$ in (1.2) is defined by $\sigma(P(\lambda)) = \{\lambda \in \mathbb{C} : \det(P(\lambda)) = 0\}$.
The problem of finding a scalar $\lambda$ and a corresponding vector $x \neq 0$ satisfying $P(\lambda)x = 0$ is known as the polynomial eigenvalue problem. In this case, $\lambda$ and $x$ are called an eigenvalue and an eigenvector of $P(\lambda)$, respectively. When $A_k$ is nonsingular, the degree of $\det(P(\lambda))$ is $kn$ and $P(\lambda)$ has $kn$ finite eigenvalues. When $A_k$ is singular, the degree of $\det(P(\lambda))$ is $r < kn$ and $P(\lambda)$ has $r$ finite eigenvalues, to which we add $kn - r$ infinite eigenvalues. When $A_0$ is singular, zero is an eigenvalue of $P(\lambda)$. Therefore we assume throughout this paper that $A_0$ and $A_k$ are nonsingular.
Consider the matrix polynomials

(1.3) $$P_1(\lambda) = A_k^{-1}P(\lambda) = \lambda^k I + \lambda^{k-1}A_k^{-1}A_{k-1} + \cdots + \lambda A_k^{-1}A_1 + A_k^{-1}A_0$$

and

(1.4) $$P_2(\lambda) = A_0^{-1}\lambda^k P\!\left(\tfrac{1}{\lambda}\right) = \lambda^k I + \lambda^{k-1}A_0^{-1}A_1 + \cdots + \lambda A_0^{-1}A_{k-1} + A_0^{-1}A_k.$$
It is easy to verify that $P(\lambda)$ and $P_1(\lambda)$ have the same eigenvalues, and that the eigenvalues of $P_2(\lambda)$ are the reciprocals of the eigenvalues of $P(\lambda)$. If $P(\lambda)$ has infinite eigenvalues, then these eigenvalues correspond to the zero eigenvalues of $P_2(\lambda)$.
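For concreteness, the polynomial eigenvalue problem can be solved numerically by forming the block companion matrix of the monic polynomial $P_1(\lambda)$. The following sketch (my illustration, not code from the paper, assuming `numpy`) does this for the cubic polynomial that appears later as Example 1:

```python
import numpy as np

def polyeig(*coeffs):
    """Eigenvalues of P(lam) = sum_i lam^i * coeffs[i], coeffs[-1] nonsingular.
    Linearizes via the block companion matrix of P1(lam) = A_k^{-1} P(lam)."""
    Ak = coeffs[-1]
    k, n = len(coeffs) - 1, Ak.shape[0]
    # B[i] = A_k^{-1} A_i for i = 0, ..., k-1
    B = [np.linalg.solve(Ak, Ai) for Ai in coeffs[:-1]]
    C = np.zeros((k * n, k * n), dtype=complex)
    C[:n, :] = -np.hstack(B[::-1])    # top block row: -[B_{k-1}, ..., B_0]
    C[n:, :-n] = np.eye((k - 1) * n)  # identity blocks below the diagonal
    return np.linalg.eigvals(C)

# Matrices of Example 1 in this paper (2x2, degree 3, so kn = 6 eigenvalues)
A0 = np.array([[3., 4.], [5., 6.]])
A1 = np.array([[1., 2.], [3., 4.]])
A2 = np.array([[2., 4.], [6., 8.]])
A3 = np.array([[1., 3.], [5., 7.]])
lams = polyeig(A0, A1, A2, A3)
print(sorted(np.abs(lams)))
```

Each returned $\lambda$ makes $P(\lambda)$ singular, and the product of all eigenvalue moduli equals $|\det A_0 / \det A_k|$.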
Although many papers have been published on bounds for the zeros of scalar polynomials, only a few have been published on bounds for the eigenvalues of matrix polynomials. Higham and Tisseur [3] obtained bounds for the eigenvalues of matrix polynomials by using block companion matrices, Gershgorin's theorem, the numerical radius, and associated scalar polynomials. In particular, every eigenvalue $\lambda$ of $P(\lambda)$ satisfies the following inequalities:
(1.5) $$\left[\max\left(\left\|A_0^{-1}A_k\right\|,\; 1+\max_{i=1:k-1}\left\|A_0^{-1}A_i\right\|\right)\right]^{-1} \le |\lambda| \le \max\left(\left\|A_k^{-1}A_0\right\|,\; 1+\max_{i=1:k-1}\left\|A_k^{-1}A_i\right\|\right),$$

(1.6) $$\left\|I+\left[A_0^{-1}A_k,\ldots,A_0^{-1}A_1\right]\left[A_0^{-1}A_k,\ldots,A_0^{-1}A_1\right]^*\right\|_2^{-1/2} \le |\lambda| \le \left\|I+\left[A_k^{-1}A_0,\ldots,A_k^{-1}A_{k-1}\right]\left[A_k^{-1}A_0,\ldots,A_k^{-1}A_{k-1}\right]^*\right\|_2^{1/2},$$

(1.7) $$\left(1+\left\|A_0^{-1}\right\|\right)^{-1}\min_{i=1:k}\left\|A_i\right\|^{-1/i} \le |\lambda| \le \left(1+\left\|A_k^{-1}\right\|\right)\max_{i=0:k-1}\left\|A_i\right\|^{1/(k-i)}.$$
For other bounds, see [3].
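The following sketch (my illustration, assuming `numpy`) shows how bounds (1.5) and (1.7) can be evaluated in practice, using the cubic matrix polynomial of Example 1 below; the computed values agree with the table given there.

```python
import numpy as np

inv, norm = np.linalg.inv, np.linalg.norm

# Coefficient matrices of Example 1 in this paper
A0 = np.array([[3., 4.], [5., 6.]])
A1 = np.array([[1., 2.], [3., 4.]])
A2 = np.array([[2., 4.], [6., 8.]])
A3 = np.array([[1., 3.], [5., 7.]])
A, k = [A0, A1, A2, A3], 3

# Bound (1.5), here with the 1-norm (companion/Gershgorin-type bound)
up15 = max(norm(inv(A[k]) @ A[0], 1),
           1 + max(norm(inv(A[k]) @ A[i], 1) for i in range(1, k)))
lo15 = 1 / max(norm(inv(A[0]) @ A[k], 1),
               1 + max(norm(inv(A[0]) @ A[i], 1) for i in range(1, k)))

# Bound (1.7), here with the 2-norm (uses norms of A_i and of A_0^{-1}, A_k^{-1})
up17 = (1 + norm(inv(A[k]), 2)) * max(norm(A[i], 2) ** (1 / (k - i))
                                      for i in range(k))
lo17 = (1 + norm(inv(A[0]), 2)) ** -1 * min(norm(A[i], 2) ** (-1 / i)
                                            for i in range(1, k + 1))
print(lo15, up15, lo17, up17)
```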
In this paper, we give new bounds for the moduli of the eigenvalues of a matrix polynomial. Throughout this paper, $A_0$ and $A_k$ are nonsingular and $P(\lambda)$, $P_1(\lambda)$ and $P_2(\lambda)$ are the matrix polynomials defined in (1.2), (1.3) and (1.4), respectively. Also, $\|\cdot\|$ denotes a subordinate matrix norm.

2. The Bounds
We obtain the following generalization of a bound of Joyal et al. [1].

Theorem 1. Every eigenvalue $\lambda$ of $P(\lambda)$ satisfies

(2.1) $$\left[\frac{1}{2}\left(1+\left\|A_0^{-1}A_1\right\|+\sqrt{\left(1-\left\|A_0^{-1}A_1\right\|\right)^2+4\gamma_2}\right)\right]^{-1} \le |\lambda| \le \frac{1}{2}\left(1+\left\|A_k^{-1}A_{k-1}\right\|+\sqrt{\left(1-\left\|A_k^{-1}A_{k-1}\right\|\right)^2+4\gamma_1}\right),$$

where $\gamma_1 = \max_{i=0:k-1}\left\|A_k^{-1}A_i\right\|$ and $\gamma_2 = \max_{i=1:k}\left\|A_0^{-1}A_i\right\|$.
Proof: Since $P(\lambda)$ and $P_1(\lambda)$ have the same eigenvalues, it is sufficient to prove the upper bound for $P_1(\lambda)$. Let

$$|\lambda| > \frac{1}{2}\left(1+\left\|A_k^{-1}A_{k-1}\right\|+\sqrt{\left(1-\left\|A_k^{-1}A_{k-1}\right\|\right)^2+4\gamma_1}\right).$$

Then $|\lambda| > 1$ and

$$\left(|\lambda|-1\right)\left(|\lambda|-\left\|A_k^{-1}A_{k-1}\right\|\right)-\gamma_1 > 0.$$
If the inequality above is multiplied by $|\lambda|^{k-1}$ and divided by $|\lambda|-1$, then we obtain

$$|\lambda|^k - \left\|A_k^{-1}A_{k-1}\right\||\lambda|^{k-1} - \gamma_1\frac{|\lambda|^{k-1}}{|\lambda|-1} > 0.$$

Therefore,

(2.2) $$|\lambda|^k - \left\|A_k^{-1}A_{k-1}\right\||\lambda|^{k-1} > \gamma_1\frac{|\lambda|^{k-1}}{|\lambda|-1}.$$

Now, we can write
$$\gamma_1\frac{|\lambda|^{k-1}}{|\lambda|-1} > \gamma_1\left(\frac{|\lambda|^{k-1}}{|\lambda|-1}-\frac{1}{|\lambda|-1}\right) = \gamma_1\frac{|\lambda|^{k-1}-1}{|\lambda|-1} = \gamma_1\left(1+|\lambda|+|\lambda|^2+\cdots+|\lambda|^{k-2}\right)$$
$$= \gamma_1+\gamma_1|\lambda|+\gamma_1|\lambda|^2+\cdots+\gamma_1|\lambda|^{k-2}$$
(2.3) $$\ge \left\|A_k^{-1}A_{k-2}\right\||\lambda|^{k-2}+\left\|A_k^{-1}A_{k-3}\right\||\lambda|^{k-3}+\cdots+\left\|A_k^{-1}A_0\right\|.$$
On the other hand, since

$$\left\|\left(\lambda^k I+\lambda^{k-1}A_k^{-1}A_{k-1}\right)x\right\| \ge \left\|\lambda^k x\right\|-\left\|\lambda^{k-1}A_k^{-1}A_{k-1}x\right\| \ge |\lambda|^k-\left\|A_k^{-1}A_{k-1}\right\||\lambda|^{k-1}$$

for each $x$ with $\|x\| = 1$, (2.2) and (2.3) imply that
(2.4) $$\left\|\left(\lambda^k I+\lambda^{k-1}A_k^{-1}A_{k-1}\right)x\right\| \ge |\lambda|^k-\left\|A_k^{-1}A_{k-1}\right\||\lambda|^{k-1} > \gamma_1\frac{|\lambda|^{k-1}}{|\lambda|-1} \ge \left\|A_k^{-1}A_{k-2}\right\||\lambda|^{k-2}+\left\|A_k^{-1}A_{k-3}\right\||\lambda|^{k-3}+\cdots+\left\|A_k^{-1}A_0\right\|.$$
Also,

(2.5) $$\left\|P_1(\lambda)x\right\| = \left\|\left(\lambda^k I+\lambda^{k-1}A_k^{-1}A_{k-1}+\cdots+A_k^{-1}A_0\right)x\right\|$$
$$\ge \left\|\left(\lambda^k I+\lambda^{k-1}A_k^{-1}A_{k-1}\right)x\right\|-\left\|\left(\lambda^{k-2}A_k^{-1}A_{k-2}+\cdots+A_k^{-1}A_0\right)x\right\|$$
$$\ge \left\|\left(\lambda^k I+\lambda^{k-1}A_k^{-1}A_{k-1}\right)x\right\|-\left(\left\|A_k^{-1}A_{k-2}\right\||\lambda|^{k-2}+\cdots+\left\|A_k^{-1}A_0\right\|\right).$$
From (2.4) and (2.5) we obtain $\|P_1(\lambda)x\| > 0$, and it follows that $\lambda$ is not an eigenvalue of $P_1(\lambda)$ or $P(\lambda)$. Therefore every eigenvalue of $P_1(\lambda)$ and $P(\lambda)$ must satisfy

$$|\lambda| \le \frac{1}{2}\left(1+\left\|A_k^{-1}A_{k-1}\right\|+\sqrt{\left(1-\left\|A_k^{-1}A_{k-1}\right\|\right)^2+4\gamma_1}\right).$$

Similarly, for every eigenvalue $\mu$ of $P_2(\lambda)$ we obtain
$$|\mu| \le \frac{1}{2}\left(1+\left\|A_0^{-1}A_1\right\|+\sqrt{\left(1-\left\|A_0^{-1}A_1\right\|\right)^2+4\gamma_2}\right).$$

Since the eigenvalues of $P(\lambda)$ are the reciprocals of the eigenvalues of $P_2(\lambda)$, every eigenvalue of $P(\lambda)$ satisfies

$$\left[\frac{1}{2}\left(1+\left\|A_0^{-1}A_1\right\|+\sqrt{\left(1-\left\|A_0^{-1}A_1\right\|\right)^2+4\gamma_2}\right)\right]^{-1} \le |\lambda|.$$

This completes the proof.
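As a numerical sanity check of Theorem 1 (my sketch, assuming `numpy`), bound (2.1) can be evaluated in the 2-norm for the polynomial of Example 1 and compared with the eigenvalue moduli obtained from a block companion linearization; the endpoints agree with the (2.1) row of Example 1's table.

```python
import numpy as np

inv = np.linalg.inv
norm2 = lambda M: np.linalg.norm(M, 2)

# Coefficient matrices of Example 1 in this paper
A0 = np.array([[3., 4.], [5., 6.]])
A1 = np.array([[1., 2.], [3., 4.]])
A2 = np.array([[2., 4.], [6., 8.]])
A3 = np.array([[1., 3.], [5., 7.]])
A, k, n = [A0, A1, A2, A3], 3, 2

g1 = max(norm2(inv(A[k]) @ A[i]) for i in range(k))         # gamma_1
g2 = max(norm2(inv(A[0]) @ A[i]) for i in range(1, k + 1))  # gamma_2
a_up = norm2(inv(A[k]) @ A[k - 1])                          # ||A_k^{-1} A_{k-1}||
a_lo = norm2(inv(A[0]) @ A[1])                              # ||A_0^{-1} A_1||
upper = 0.5 * (1 + a_up + np.sqrt((1 - a_up) ** 2 + 4 * g1))
lower = 1 / (0.5 * (1 + a_lo + np.sqrt((1 - a_lo) ** 2 + 4 * g2)))

# Eigenvalue moduli via the block companion matrix of A3^{-1} P(lam)
B = [np.linalg.solve(A3, Ai) for Ai in A[:k]]
C = np.zeros((k * n, k * n), dtype=complex)
C[:n, :] = -np.hstack(B[::-1])
C[n:, :-n] = np.eye((k - 1) * n)
mods = np.abs(np.linalg.eigvals(C))
print(lower, upper)
```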
Now, we consider the quadratic matrix polynomial

(2.6) $$Q(\lambda) = \lambda^2 I - \lambda\left(UU^*+I\right) + U_0U_0^*,$$

where $U_0 = A_k^{-1}A_0$ and $U = \left[A_k^{-1}A_0, A_k^{-1}A_1, \ldots, A_k^{-1}A_{k-1}\right]$. Higham and Tisseur [3] gave an inequality between the eigenvalues of $P(\lambda)$ and $Q(\lambda)$. This inequality says that every eigenvalue of $P(\lambda)$ satisfies

(2.7) $$\left[\lambda_{\min}\left(Q(\lambda)\right)\right]^{1/2} \le |\lambda| \le \left[\lambda_{\max}\left(Q(\lambda)\right)\right]^{1/2},$$

where $\lambda_{\min}(Q(\lambda))$ and $\lambda_{\max}(Q(\lambda))$ denote the smallest and largest eigenvalues of $Q(\lambda)$.
Theorem 2. Every eigenvalue of $P(\lambda)$ satisfies

(2.8) $$\left(\frac{-\left\|UU^*+I\right\|+\sqrt{\left\|UU^*+I\right\|^2+4\left\|\left(U_0U_0^*\right)^{-1}\right\|^{-1}}}{2}\right)^{1/2} \le |\lambda| \le \left(\frac{\left\|UU^*+I\right\|+\sqrt{\left\|UU^*+I\right\|^2+4\left\|U_0U_0^*\right\|}}{2}\right)^{1/2},$$
where $U_0 = A_k^{-1}A_0$ and $U = \left[A_k^{-1}A_0, A_k^{-1}A_1, \ldots, A_k^{-1}A_{k-1}\right]$.
Proof: Let the two scalar polynomials associated with the matrix polynomial $Q(\lambda)$ in (2.6) be defined as

$$u(\lambda) = \lambda^2 - \left\|UU^*+I\right\|\lambda - \left\|U_0U_0^*\right\| \quad\text{and}\quad \ell(\lambda) = \lambda^2 + \left\|UU^*+I\right\|\lambda - \left\|\left(U_0U_0^*\right)^{-1}\right\|^{-1}.$$

By Descartes' rule of signs, $u(\lambda)$ and $\ell(\lambda)$ each have a unique positive real root [10]. From Lemma 3.1 in [3], every eigenvalue $\mu$ of $Q(\lambda)$ satisfies
$$r \le |\mu| \le R,$$
where $R$ and $r$ are the unique positive real roots of $u(\lambda)$ and $\ell(\lambda)$, respectively. The roots $R$ and $r$ are

(2.9) $$R = \frac{\left\|UU^*+I\right\|+\sqrt{\left\|UU^*+I\right\|^2+4\left\|U_0U_0^*\right\|}}{2}$$

and

(2.10) $$r = \frac{-\left\|UU^*+I\right\|+\sqrt{\left\|UU^*+I\right\|^2+4\left\|\left(U_0U_0^*\right)^{-1}\right\|^{-1}}}{2}.$$
From (2.7), (2.9) and (2.10), every eigenvalue of $P(\lambda)$ satisfies (2.8). Therefore the theorem is proved.
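Bound (2.8) can likewise be evaluated directly (my sketch, assuming `numpy`; the coefficient matrices are real, so the conjugate transpose is an ordinary transpose). For Example 1 the result matches the (2.8) row of the table there.

```python
import numpy as np

norm2 = lambda M: np.linalg.norm(M, 2)

# Coefficient matrices of Example 1 in this paper
A0 = np.array([[3., 4.], [5., 6.]])
A1 = np.array([[1., 2.], [3., 4.]])
A2 = np.array([[2., 4.], [6., 8.]])
A3 = np.array([[1., 3.], [5., 7.]])

# U0 = A3^{-1} A0,  U = [A3^{-1}A0, A3^{-1}A1, A3^{-1}A2]
Bs = [np.linalg.solve(A3, Ai) for Ai in (A0, A1, A2)]
U0, U = Bs[0], np.hstack(Bs)

W = U @ U.T + np.eye(2)  # UU^* + I
S = U0 @ U0.T            # U0 U0^*
a = norm2(W)
R = (a + np.sqrt(a ** 2 + 4 * norm2(S))) / 2                  # positive root of u
r = (-a + np.sqrt(a ** 2 + 4 / norm2(np.linalg.inv(S)))) / 2  # positive root of ell
lower, upper = np.sqrt(r), np.sqrt(R)
print(lower, upper)
```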
Here, if we use the spectral norm, we obtain the bound of Lemma 3.2 in [3].

Theorem 3. Every eigenvalue of $P(\lambda)$ satisfies
(2.11) $$\left(1+\left\|\left(U_0U_0^*\right)^{-1}\right\|\right)^{-1/2}\min\left(\left\|UU^*+I\right\|^{-1/2},\,1\right) \le |\lambda| \le \left[2\max\left(\left\|UU^*+I\right\|,\,\left\|U_0U_0^*\right\|^{1/2}\right)\right]^{1/2},$$

where $U_0 = A_k^{-1}A_0$ and $U = \left[A_k^{-1}A_0, A_k^{-1}A_1, \ldots, A_k^{-1}A_{k-1}\right]$.
Proof: By repeating the proof of Lemma 4.1 in [3] for the matrix polynomial $Q(\lambda)$ in (2.6), for every eigenvalue $\mu$ of $Q(\lambda)$ we obtain the bound

(2.12) $$\left(1+\left\|\left(U_0U_0^*\right)^{-1}\right\|\right)^{-1}\min\left(\left\|UU^*+I\right\|^{-1},\,1\right) \le |\mu| \le 2\max\left(\left\|UU^*+I\right\|,\,\left\|U_0U_0^*\right\|^{1/2}\right).$$

From (2.7) and (2.12), every eigenvalue of $P(\lambda)$ satisfies (2.11). Therefore the theorem is proved.
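Theorem 3's bound (2.11) only needs two matrix norms, so it is the cheapest of the three to compute. A sketch (mine, assuming `numpy`), again on the Example 1 data, whose result matches the (2.11) row of the table below:

```python
import numpy as np

norm2 = lambda M: np.linalg.norm(M, 2)

# Coefficient matrices of Example 1 in this paper
A0 = np.array([[3., 4.], [5., 6.]])
A1 = np.array([[1., 2.], [3., 4.]])
A2 = np.array([[2., 4.], [6., 8.]])
A3 = np.array([[1., 3.], [5., 7.]])

Bs = [np.linalg.solve(A3, Ai) for Ai in (A0, A1, A2)]
U0, U = Bs[0], np.hstack(Bs)
W = U @ U.T + np.eye(2)  # UU^* + I
S = U0 @ U0.T            # U0 U0^*

lower = (1 + norm2(np.linalg.inv(S))) ** -0.5 * min(norm2(W) ** -0.5, 1.0)
upper = (2 * max(norm2(W), norm2(S) ** 0.5)) ** 0.5
print(lower, upper)
```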
3. Numerical Examples
Example 1. Consider the matrix polynomial

$$P(\lambda) = \lambda^3\begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix} + \lambda^2\begin{bmatrix} 2 & 4 \\ 6 & 8 \end{bmatrix} + \lambda\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 3 & 4 \\ 5 & 6 \end{bmatrix}.$$
The computed lower and upper bounds are as follows:

Bound    Lower bound    Upper bound    Norm
(1.5)    0.0833         3              1-norm
(1.6)    0.0709         3.2797         2-norm
(1.7)    0.0324         23.3932        2-norm
(2.1)    0.1606         2.9496         2-norm
(2.8)    0.0293         3.369          2-norm
(2.11)   0.0292         4.6382         2-norm
The minimal and maximal moduli of the 6 eigenvalues are

$$\min_i |\lambda_i| = 0.7071, \qquad \max_i |\lambda_i| = 1.$$
Example 2. Consider the matrix polynomial
$$P(\lambda) = \lambda^4\begin{bmatrix} 1 & 0 & 1 & 2 \\ 3 & 4 & 2 & 0 \\ 5 & 2 & 5 & 1 \\ 4 & 0 & 3 & 0 \end{bmatrix} + \lambda^3\begin{bmatrix} 2 & 7 & 5 & 2 \\ 3 & 5 & 6 & 4 \\ 2 & 4 & 3 & 0 \\ 0 & 1 & 1 & 6 \end{bmatrix} + \lambda^2\begin{bmatrix} 4 & 1 & 3 & 2 \\ 10 & 4 & 5 & 8 \\ 7 & 6 & 3 & 11 \\ 5 & 4 & 8 & 5 \end{bmatrix} + \lambda\begin{bmatrix} 0 & 3 & 2 & 9 \\ 7 & 1 & 4 & 7 \\ 9 & 10 & 6 & 3 \\ 8 & 5 & 0 & 2 \end{bmatrix} + \begin{bmatrix} 5 & 2 & 11 & 8 \\ 7 & 6 & 2 & 8 \\ 5 & 3 & 4 & 7 \\ 8 & 5 & 6 & 3 \end{bmatrix}.$$
The computed lower and upper bounds are as follows:

Bound    Lower bound    Upper bound    Norm
(1.5)    0.1768         9.4023         1-norm
(1.6)    0.1898         11.0622        2-norm
(1.7)    0.0423         22.0178        2-norm
(2.1)    0.2410         6.8622         2-norm
(2.8)    0.0615         11.0815        2-norm
(2.11)   0.0509         15.6443        2-norm
The minimal and maximal moduli of the 16 eigenvalues are

$$\min_i |\lambda_i| = 0.6127, \qquad \max_i |\lambda_i| = 3.4489.$$
4. Conclusion
In this work, we have proposed bounds for the moduli of the eigenvalues of matrix polynomials. The bounds are obtained from norms of the coefficient matrices and their inverses; they also involve the inverse of the leading or trailing coefficient matrix. We have assumed throughout that $A_0$ and $A_k$ are nonsingular.

Numerical examples show that our bounds are sometimes sharper than some of the bounds in [3].
References
1. P.G. Ciarlet, J.L. Lions (Eds.), 1994, Handbook of Numerical Analysis, Volume III: Solution of Equations in R^n (Part 2), Elsevier, Amsterdam, pp. 625-778.
2. F.R. Gantmacher, 1960, The Theory of Matrices, Volume One, Chelsea Publishing Company, New York.
3. N.J. Higham, F. Tisseur, 2003, Bounds for eigenvalues of matrix polynomials, Linear Algebra and its Applications 358: 5-22.
4. R.A. Horn, C.R. Johnson, 1985, Matrix Analysis, Cambridge University Press, Cambridge, xiii+561 pp., ISBN 0-521-30586-1.
5. I. Gohberg, P. Lancaster, L. Rodman, 1982, Matrix Polynomials, Academic Press, New York.
6. I. Krupnik, P. Lancaster, 1998, Linearizations, realizations, and scalar products for regular matrix polynomials, Linear Algebra and its Applications 272: 45-57.
7. Q.G. Mohammad, 1965, On the zeros of polynomials, Amer. Math. Monthly 72(1): 35-38.
8. Q.G. Mohammad, 1965, On the zeros of polynomials, Amer. Math. Monthly 72(6): 631-633.
9. F. Tisseur, K. Meerbergen, 2001, The quadratic eigenvalue problem, SIAM Rev. 43(2): 235-286.