• Sonuç bulunamadı

Fast solutions of multiple monostatic radar cross section problems using the recompressed adaptive cross approximation algorithm

N/A
N/A
Protected

Academic year: 2021

Share "Fast solutions of multiple monostatic radar cross section problems using the recompressed adaptive cross approximation algorithm"

Copied!
76
0
0

Yükleniyor.... (view fulltext now)

Tam metin

(1)

FAST SOLUTIONS OF MULTIPLE

MONOSTATIC RADAR CROSS SECTION

PROBLEMS USING THE RECOMPRESSED

ADAPTIVE CROSS APPROXIMATION

ALGORITHM

a thesis

submitted to the department of electrical and

electronics engineering

and the graduate school of engineering and science

of bilkent university

in partial fulfillment of the requirements

for the degree of

master of science

By

SeyedMahdi Kazempourradi

January, 2014

(2)

I certify that I have read this thesis and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Master of Science.

Prof. Dr. Levent G¨urel (Advisor)

I certify that I have read this thesis and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Master of Science.

Prof. Dr. Tuˇgrul Dayar

I certify that I have read this thesis and that in my opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Master of Science.

Assoc. Prof. Dr. Sinan Gezici

Approved for the Graduate School of Engineering and Science:

Prof. Dr. Levent Onural Director of the Graduate School

(3)

ABSTRACT

FAST SOLUTIONS OF MULTIPLE MONOSTATIC

RADAR CROSS SECTION PROBLEMS USING THE

RECOMPRESSED ADAPTIVE CROSS

APPROXIMATION ALGORITHM

SeyedMahdi Kazempourradi

M.S. in Electrical and Electronics Engineering Supervisor: Prof. Dr. Levent G¨urel

January, 2014

We developed a method that incorporates an algebraic compression technique to accelerate the computation of multiple monostatic radar cross sections (RCSs) of arbitrary 3-D geometries. Since most radars rely on the backscattering from a target, computing the monostatic RCS (MRCS) is needed more often than the bistatic RCS. Computation of each MRCS value requires a separate solution, which may be costly depending on the size of the problem. The task becomes considerably harder when the goal is to compute multiple MRCS values with high angular resolution. The proposed technique compresses the excitation ma-trix using the adaptive cross approximation (ACA) algorithm in the first step. A recompression is applied on the matrices obtained from ACA by utilizing the QR decomposition and computing the singular value decomposition in an effi-cient manner. The solution of each excitation is accelerated by the multilevel fast multipole algorithm. The numerical results demonstrate the efficiency and accuracy of our proposed method.

Keywords: Adaptive cross approximation (ACA), multilevel fast multipole algo-rithm (MLFMA), radar cross section (RCS), Recompression.

(4)

¨

OZET

MONOSTAT˙IK RADAR KES˙IT ALAN

PROBLEMLER˙IN˙IN C

¸ ¨

OZ ¨

UMLER˙IN˙IN TEKRAR

SIKIS

¸TIRILMIS

¸ YAKLAS

¸IK ADAPT˙IF C

¸ APRAZ

ALGOR˙ITMASIYLA HIZLANDIRILMASI

SeyedMahdi Kazempourradi

Elektrik ve Elektronik M¨uhendisli˘gi, Y¨uksek Lisans Tez Y¨oneticisi: Prof. Dr. Levent G¨urel

Ocak, 2014

Radar kesit alan (RKA) hesaplaması stratejik ¨oneme sahiptir. RKA hesapla-maları i¸cin bir y¨ontem, sayısal elektromanyetik ¸c¨oz¨um tekniklerinin kul-lanılmasıdır. Ancak, ger¸cek ya¸samda ¨onem ta¸sıyan geometrilerin modellenmesi, kar¸sımıza ¸cok b¨uy¨uk sayısal problemler ¸cıkartmaktadır. B¨oyle b¨uy¨uk sayısal prob-lemleri, matris denklemine ¸cevirip hızlı ¸c¨oz¨um y¨ontemleriyle ¸c¨ozebiliriz. Alıcı ve vericinin yan yana konumlandırıldı¸gı, hatta bir ¸cok zaman aynı antenin kul-lanıldı¸gı radar tiplerinde, monostatik RKA (MRKA) ¸cok ¨onemli sayılır. Tek bir MRKA hesaplanması i¸cin bir sayısal ¸c¨oz¨um yapılması gerekir ve bu ¸c¨oz¨um pahalı olabilir. De˘gi¸sik ve y¨uksek ¸c¨oz¨un¨url¨ukl¨u k¨uresel a¸cılarda ise MRKA hesaplaması ¸cok daha pahalı olabilir. Bu ¸calı¸smada birden fazla MRKA hesaplanmasının hızlandırılması amacıyla bir sıkı¸stırma algoritması geli¸stirilmi¸stir. Bu y¨ontemde, ilk seviyede yakla¸sık adaptif ¸capraz algoritması (YAC¸ A) kullanılır. Bir son-raki adımda YAC¸ A’dan ¸cıkan matrisler alınır ve QR algoritmasını uygulanarak az i¸slemle tekil de˘ger ayrı¸sımı hesaplanır. Belli bir seviyeden d¨u¸s¨uk olan tekil de˘gerler atıldı˘gında, daha d¨u¸s¨uk bir etkin rank elde edilir. Bu tezde sıkı¸stırma algortimasının geli¸stirilmesi ve farklı problemlere uygulanması anlatılacaktır.

Anahtar s¨ozc¨ukler : Yakla¸sık Adaptif C¸ apraz Algoritması (YAC¸ A), Radar Kesit Alanı (RKA), Tekrar Sıkı¸stırma.

(5)

Acknowledgement

I would like to express my sincere gratitude to Prof. Dr. Levent G¨urel for his supervision, guidance and the suggestions throughout the development of this thesis. It was an honor for me to work with him and benefit from his deep knowledge and experience in electromagnetics.

I also wish to thank Prof. Dr. Tuˇgrul Dayar and Assoc. Prof. Dr. Sinan Gezici for evaluating and commenting on my thesis as a jury member.

I also like to thank the members of the Bilkent university computational electro-magnetics research center (BiLCEM) for their support, collaboration, friendship, and invaluable suggestions in the preparation of this thesis.

(6)
(7)

Contents

1 Introduction 1

1.1 Motivation and Related Works . . . 1

1.2 Notation . . . 2

2 Background 3 2.1 Surface Integral Equations . . . 3

2.2 T-EFIE Formulation . . . 4

2.2.1 N-MFIE Formulation . . . 4

2.2.2 T-N-CFIE Formulation . . . 5

2.3 Discretization of Surface Integral Equations . . . 5

2.3.1 Method of Moments . . . 6

2.3.2 Discretization of T-EFIE . . . 7

2.3.3 Discretization of N-MFIE . . . 8

2.3.4 Discretization of T-N-CFIE . . . 8

(8)

CONTENTS viii

2.3.6 RWG Functions . . . 9

2.4 Iterative Solvers . . . 10

2.5 Multilevel Fast Multipole Algorithm . . . 11

2.6 Surface Operators and Secondary Fields . . . 13

2.7 Radar Cross Section . . . 14

2.8 Angular Resolution . . . 14

3 Implementation 16 3.1 Adaptive Cross Approximation . . . 16

3.2 QR Decomposition . . . 19

3.3 Singular Value Decomposition . . . 20

3.4 Recompression Technique . . . 21

3.5 Application in Multiple Right-Hand Side Problems . . . 23

3.6 Review of Other Methods . . . 26

3.6.1 Algebraic Methods . . . 26

3.6.2 Rational Function Approximation . . . 29

4 Numerical Results 32 4.0.3 NASA Almond . . . 32

4.1 Flamme . . . 40

(9)

CONTENTS ix

(10)

List of Figures

2.1 The RWG function defined on triangular domains. . . 10

3.1 A schematic view of the ACA algorithm. . . 17

4.1 NASA almond geometry. The electrical size of the largest dimen-sion of the geometry is 5.6λ at 1 GHz. . . 33 4.2 φφ-polarized 2-D MRCS of the NASA almond. 2-D MRCS (a)

ob-tained from the BF runs and (b) computed using the RACA. (c) The percentage of the relative error. . . 34 4.3 φφ-polarized 2-D MRCS values of the NASA almond obtained

us-ing different interpolation methods. The plots have a resolution of 1◦ and the sampling interval is 2◦. (a) 2-D MRCS values ob-tained from the CS interpolation method. (b) The relative error in percents for the CS interpolation method. (c) 2-D MRCS values resulting from linear interpolation method. (d) The relative er-ror in percents corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor in-terpolation technique. (f) The relative error in percents for the nearest-neighbor interpolation method. . . 36

(11)

LIST OF FIGURES xi

4.4 φφ-polarized 2-D MRCS values of the NASA almond obtained us-ing different interpolation methods. The plots have a resolution of 1◦ and the sampling interval is 3◦. (a) 2-D MRCS values ob-tained from the CS interpolation method. (b) The relative error in percents for the CS interpolation method. (c) 2-D MRCS values resulting from linear interpolation method. (d) The relative er-ror in percents corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor in-terpolation technique. (f) The relative error in percents for the nearest-neighbor interpolation method. . . 37 4.5 φφ-polarized 2-D MRCS values of the NASA almond obtained

us-ing different interpolation methods. The plots have a resolution of 1◦ and the sampling interval is 5◦. 2-D MRCS values obtained from (a) the CS interpolation method, (c) the linear interpolation method, (e) the nearest-neighbor interpolation technique and cor-responding relative percentage of errors (b), (d) and (f), respectively. 38 4.6 φφ-polarized 2-D MRCS values of the NASA almond obtained

us-ing different interpolation methods. The plots have a resolution of 1◦ and the sampling interval is 10◦. 2-D MRCS values obtained from (a) the CS interpolation method and (b) corresponding rela-tive percentage of error, (c) the linear interpolation method and (d) corresponding relative percentage of error, (e) the nearest-neighbor interpolation technique and (f) corresponding relative percentage of error. . . 39 4.7 The Flamme geometry. The electrical size of the largest dimension

of the geometry is 8λ at 4 GHz. . . 40 4.8 θθ-polarized 2-D MRCS values of the Flamme. (a) 2-D MRCS

obtained from the BF runs. (b) 2-D MRCS computed using the RACA. (c) The percentage of the relative error. . . 41

(12)

LIST OF FIGURES xii

4.9 θθ-polarized 2-D MRCS values of the Flamme obtained using dif-ferent interpolation techniques. The plots have a resolution of 0.5◦ and the sampling interval is 1◦. (a) 2-D MRCS values ob-tained from the CS interpolation method. (b) The relative error in percents for the CS interpolation method. (c) 2-D MRCS values resulting from linear interpolation method. (d) The relative er-ror in percents corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor in-terpolation technique. (f) The relative error in percents for the nearest-neighbor interpolation method. . . 43 4.10 θθ-polarized 2-D MRCS values of the Flamme obtained using

dif-ferent interpolation techniques. The plots have a resolution of 0.5◦ and the sampling interval is 2◦. (a) 2-D MRCS values ob-tained from the CS interpolation method. (b) The relative error in percents for the CS interpolation method. (c) 2-D MRCS values resulting from linear interpolation method. (d) The relative er-ror in percents corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor in-terpolation technique. (f) The relative error in percents for the nearest-neighbor interpolation method. . . 44 4.11 θθ-polarized 2-D MRCS values of the Flamme obtained using

dif-ferent interpolation techniques. The plots have a resolution of 0.5◦ and the sampling interval is 2.5◦. 2-D MRCS values obtained from (a) the CS interpolation method, (c) the linear interpolation method, (e) the nearest-neighbor interpolation technique and cor-responding relative percentage of errors (b), (d) and (f), respectively. 45

(13)

LIST OF FIGURES xiii

4.12 θθ-polarized 2-D MRCS values of the Flamme obtained using dif-ferent interpolation techniques. The plots have a resolution of 0.5◦ and the sampling interval is 5◦. 2-D MRCS values obtained from (a) the CS interpolation method and (b) corresponding relative percentage of error, (c) the linear interpolation method and (d) corresponding relative percentage of error, (e) the nearest-neighbor interpolation technique and (f) corresponding relative percentage of error. . . 46 4.13 Generic helicopter geometry. The electrical size of the largest

di-mension of the geometry is 3.4λ at 125 MHz. . . 47 4.14 φφ-polarized MRCS values of the generic helicopter in the angular

sector defined 0 < φ < π and θ = π/2 with an angular resolu-tion of 1◦. The solid red curve indicates the MRCS values that are computed via BF runs. The dashed blue curve represents the MRCS values obtained from RACA. The green curve illustrates the percentage of the relative error. . . 48 4.15 φφ-polarized MRCS values of the generic helicopter in the

angu-lar sector defined by 0 < φ < π and θ = π/2 with an anguangu-lar resolution of 1◦. The solid blue curve represents the MRCS val-ues obtained from BF runs. The solid red curve indicates the MRCS values that are computed via RACA. The cyan, green, and magenta curves illustrate the MRCS values achieved by the cubic-spline interpolation technique and correspond to 2◦, 5◦, and 10◦ sampling intervals, respectively. The relative errors are provided in the second plot with the same legend. . . 50

(14)

LIST OF FIGURES xiv

4.16 φφ-polarized MRCS values of the generic helicopter in the angular sector defined by 0 < φ < π and θ = π/2 with an angular res-olution of 1◦. The solid blue curve represents the MRCS values obtained from BF runs. The solid red curve indicates the MRCS values that are computed via RACA. The cyan, green, and ma-genta curves illustrate the MRCS values achieved by the linear interpolation technique and correspond to 2◦, 5◦, and 10◦ sam-pling intervals, respectively. The relative errors are provided in the second plot with the same legend. . . 51 4.17 φφ-polarized MRCS values of the generic helicopter in the angular

sector defined by 0 < φ < π and θ = π/2 with an angular res-olution of 1◦. The solid blue curve represents the MRCS values obtained from BF runs. The solid red curve indicates the MRCS values that are computed via RACA. The cyan, green, and ma-genta curves illustrate the MRCS values achieved by the spline interpolation technique and correspond to 2◦, 5◦, and 10◦ sam-pling intervals, respectively. The relative errors are provided in the second plot with the same legend. . . 52 4.18 φφ-polarized MRCS values of the generic helicopter in the angular

sector defined by 0 < φ < π and θ = π/2 with an angular res-olution of 1◦. The solid blue curve represents the MRCS values obtained from BF runs. The solid red curve indicates the MRCS values that are computed via RACA. The cyan, green, and ma-genta curves illustrate the MRCS values achieved by the nearest neighbor interpolation technique and correspond to 2◦, 5◦, and 10◦ sampling intervals, respectively. The relative errors are provided in the second plot with the same legend. . . 53

(15)

LIST OF FIGURES xv

4.19 φφ-polarized MRCS values of the generic helicopter in the angular sector defined by 0 < φ < π and θ = π/2 with an angular res-olution of 1◦. The solid blue curve represents the MRCS values obtained from BF runs. The solid red curve indicates the MRCS values that are computed via RACA. The cyan, green, and ma-genta curves illustrate the MRCS values achieved by the piecewise cubic hermite interpolation technique and correspond to 2◦, 5◦, and 10◦ sampling intervals, respectively. The relative errors are provided in the second plot with the same legend. . . 54

(16)

List of Tables

4.1 Statistics of the NASA Almond Simulations . . . 35 4.2 Statistics of the Flamme Simulations . . . 42 4.3 Statistics of the Helicopter Simulations . . . 48

(17)

Chapter 1

Introduction

1.1

Motivation and Related Works

There are many circumstances where the knowledge of the radar cross section (RCS) is crucially important [1]. This information can be used in many ways, such as designing stealth targets [2] or in determining suitable electronic counter measures against a certain target [3].

In some cases, direct measurement of RCSs of a target may not be possible or practical, leaving computation as the viable option. Consequently, accurate and fast RCS simulations are important for predicting actual RCS values [4]. The monostatic RCS (MRCS) values, where the incidence and observation directions are the same, are used more often than the bistatic RCS because most radars depend on the back-scattering from a target. Each MRCS computation requires a separate solution that can be costly depending on the size of the problem. Fast solvers can be utilized to accelerate the solution of large electromagnetic scattering problems by reducing the solution time and the memory consumption. Nevertheless, computing MRCS values for problems that are excited with various incident plane waves from different angles become time-consuming. The task becomes considerably harder when the goal is to compute multiple MRCS values with high angular resolution.

(18)

Several methods are developed and used to speed up the computation of RCS problems with multiple right-hand sides (RHSs). By exploiting an adaptive cubic-spline interpolation [5], the desired MRCS values for a given resolution can be computed. Removing redundancies in the excitation matrix by using QR decom-position [6], singular value decomdecom-position (SVD) [7], interpolative decomdecom-position (ID) [8], and adaptive cross approximation (ACA) [13] leads to fast RCS com-putations. An approach to the solution of RCS problems over a wide range of incident plane waves is the rational function approximation technique. There are three main approaches for creating approximated rational functions: asymptotic waveform evaluation (AWE) [9, 10], the Pad´e approximation, and model-based parameter estimation (MBPE) [11, 12]. One advantage of ACA is that it requires only partial knowledge of the matrix elements. It also has a low computational cost [14]. However, for a given threshold, the most efficient low-rank approx-imation a matrix can be obtained by SVD [15]. In this paper, we propose a compression technique that incorporates SVD on the low-rank matrices obtained from ACA to reach a better low-rank decomposition of the RHS matrix. In other words, the main bottleneck of SVD, i.e., its high computational complexity [13], is solved by applying a recompressed ACA (RACA) algorithm. Due to the al-gebraic nature of the proposed method, it can be used in combination with any fast solver [16, 17, 18, 19] without any major change in the existing code. In this work, the multilevel fast multipole algorithm (MLFMA) is used to accelerate the solution of each excitation [16].

1.2

Notation

In this thesis, Greek or Roman letters with an italic font are used for scalars. Bold-face, italicized, capital letters with an overbar, such as A, represent matrices. Vectors are denoted by bold-face, italicized, small letters without a bar, such as a. For unit vectors, we use a hat over the vector, such as ˆn. Bold-face, italicized, capital letters with a tilde, such as ˜A, denote approximate matrices. A dot product is used for matrix-matrix and matrix-vector multiplications (MVM).

(19)

Chapter 2

Background

2.1

Surface Integral Equations

Surface integral equations (SIEs) are widely used for the solutions of scattering and radiation problems in electromagnetics [20]. Equivalent electric and/or mag-netic currents are defined on the surface of a three-dimensional (3-D) complicated structure. The problem can be formulated by applying boundary conditions on the surface of the structure by using the equivalent currents. There are two major testing schemes used in SIEs, namely tangential (T) and normal (N) equa-tions [21]. In tangential equaequa-tions, boundary condiequa-tions are tested directly by sampling the tangential components of fields on the surface. However, in normal equations, fields are tested after a projection on the surface via cross-product operations with the outward normal vector. Applying boundary conditions on the electric field and the magnetic field leads to the electric-field integral equation (EFIE) and the magnetic-field integral equation (MFIE), respectively.

For a perfect electric conductor (PEC) object, any of the T-EFIE, T-MFIE, N-EFIE, and N-MFIE formulations can be used to obtain the induced currents on the surface of the object [22]. The most common formulations in the literature are T-EFIE and N-MFIE, but for closed geometries those formulations suffer from internal resonances. A solution to this issue is to use the combined-field integral

(20)

equation (CFIE), which is an appropriate convex combination of T-EFIE and N-MFIE [23].

2.2

T-EFIE Formulation

The total tangential electric field vanishes on a conducting surface, and thus, based on this physical boundary condition, we can declare T-EFIE as

ˆ t · Z S0 dr0G(r, r0) · J (r0) = i kηˆt · E inc(r) r ∈ S0 , (2.1)

where Einc(r) represents the incident electric field and S0 is the surface of the geometry. The intrinsic impedance of the medium is η = pµ/ and k is the wavenumber (k = 2π/λ, where λ is the wavelength). In (2.1), ˆt is any tangential unit vector on S0 and J (r0) is the unknown induced current residing on the surface. The dyadic Green’s function is defined as

G(r, r0) =  I + ∇∇ k2  g(r, r0), (2.2) where g(r, r0) = e ik|r−r0| 4π|r − r0| (2.3)

is the scalar Green’s function for the 3-D scalar Helmholtz equation. This function represents the response at r due to a point source located at r0. If the incident electric field changes, only the RHS of the equation in (2.1) will alter. This point becomes important when we are dealing with problems that are excited with multiple sources.

2.2.1

N-MFIE Formulation

The boundary condition for the tangential magnetic field states that the tangen-tial components of the magnetic field across an interface, along which there exists a surface electric current density J , are discontinuous by an amount equal to the electric current density. The N-MFIE formulation can be derived by testing the

(21)

magnetic field after a projection on the surface via cross-product operations with the outward normal vector and imposing the physical boundary condition. The expression for N-MFIE can be written as

J (r) − ˆn × Z S0 dr0J (r0) × ∇0g(r, r0) = ˆn × Hinc(r), (2.4) or alternatively: Ωo 4πJ (r) − ˆn × Z S0(PV) dr0J (r0) × ∇0g(r, r0) = ˆn × Hinc(r), (2.5)

where ˆn is any outward unit normal on S0 and Hinc(r) is the incident magnetic field. In (2.5), Ωo is the solid angle and the integral on the left-hand side is the

principal value. Unlike T-EFIE, which is valid for open and closed geometries, N-MFIE is acceptable only for closed surfaces.

2.2.2

T-N-CFIE Formulation

Unlike T-EFIE, N-MFIE is a second-kind integral equation that leads to well-conditioned matrices after discretization. However, because of the singularity of the N-MFIE kernel and the identity term associated with J term in (2.4), the accuracy of N-MFIE is lower than T-EFIE. T-N-CFIE is a more-accurate integral-equation formulation than N-MFIE [24]. It can be obtained by an appropriate linear combination of T-EFIE and N-MFIE formulations, i.e.,

T − N − CFIE = αT − EFIE + (1 − α)N − MFIE, (2.6) where α is a parameter between 0 and 1. It is shown that 0.2 ≤ α ≤ 0.3 yields minimum iteration counts [25]. Along with the noted advantages, one of the most important features of T-N-CFIE is that it is free of internal resonances. T-N-CFIE contains N-MFIE and hence cannot be applied to open geometries.

2.3

Discretization of Surface Integral Equations

Despite the fact that electromagnetics problems can be formulated rigorously with Maxwell’s equations, SIEs can only be solved analytically for a few canonical

(22)

objects, such as spheres. To numerically solve problems involving complicated structures, the geometry and the SIEs must be discretized simultaneously. In this way, equivalent surface currents are expanded in a series of basis functions. A dense N × N matrix equation can be obtained using the method of moments (MOM) and testing the boundary conditions for the electric and the magnetic fields on the surface of the geometry, N being the number of unknown coefficients of the basis functions. Each element of the matrix obtained by discretizing the SIEs presents an electromagnetics interaction between a pair of discretized current elements, i.e., basis and testing functions.

2.3.1

Method of Moments

We can convert the SIEs described in Section 2.1 to a matrix equation by utilizing MOM. The equivalent surface currents are expanded as a summation of basis functions. The coefficients of these functions are calculated by solving the matrix equation obtained by MOM. The linear operator L is applied on the equivalent surface current and the integral equation can be shown as

L{J (r)} = g(r), (2.7)

where g is one of the RHS vectors of T-EFIE and/or N-MFIE. Considering J is unknown, expanding J in a series of known basis functions and unknown coefficients, we obtain, J (r) ≈ N X n=1 anbn(r). (2.8)

Projecting (2.8) onto the N -dimensional space span{t1, t2, · · · , tN} yields

htm, L{J (r)}i = htm, g(r)i, m = 1, 2, · · · , N, (2.9)

where

hf , gi = Z

drf (r) · g(r) (2.10)

denotes the inner product of the two vector functions, f and g. If the testing in (2.10) has been done using the same basis functions as in (2.8) (Galerkin scheme), (2.9) will represent a matrix equation.

(23)

In other words, (2.9) can be written as Z drtm(r) · N X n=1 anL{bn(r)} = Z drtm(r) · g(r). (2.11)

Changing the order of summation and integration, the equation becomes

N X n=1 an Z drtm(r) · L{bn(r)} = Z drtm(r) · g(r), (2.12)

which yields an N × N matrix equation. Hence, the coefficient vector a becomes the solution of Z(N ×N )· a(N ×1) = v(N ×1), (2.13) where Zmn = htm, L{bn}i, m, n = 1, 2, · · · , N, (2.14) and vm = htm, gi. (2.15)

The interpretation of each matrix element, i.e., Zmn, is the electromagnetic

in-teraction between the mth testing and the nth basis function.

2.3.2

Discretization of T-EFIE

By substituting L with the expression that we derived for T-EFIE, the matrix elements can be obtained as

ZT-EFIEmn = Z Sm drtm(r) · Z Sn dr0G(r, r0) · bn(r0), (2.16)

where tm is a testing function and bnis a basis function. The expression in (2.16)

can be expanded as [26] ZT-EFIEmn = Z Sm drtm(r) · Z Sn dr0bn(r0)g(r, r0) − i k2 Z Sm drtm(r) · Z Sn dr0bn(r0) · [∇∇0g(r, r0)]. (2.17)

(24)

2.3.3

Discretization of N-MFIE

Using MOM for this discretization, the matrix elements can be achieved via ZN-MFIEmn = Z Sm drtm(r) · bn(r) − Z Sm drtm(r) · ˆn × Z Sn dr0bn(r0) × ∇0g(r, r0). (2.18)

After performing the singularity extraction technique for the outer integral, (2.18) becomes [27] ZN-MFIEmn = −1 2 Z Sm drtm(r) · bn(r) − Z Sm,PV drtm(r) · ˆn × Z Sn dr0bn(r0) × ∇0g(r, r0), (2.19)

where PV stands for the principal value of the integral.

2.3.4

Discretization of T-N-CFIE

T-N-CFIE is the linear combination of T-EFIE and N-MFIE. Based on this fact, both formulations must be used to form T-N-CFIE. The elements of T-N-CFIE can be derived as

ZT-N-CFIEmn = (α)ZT-EFIEmn + (1 − α)ZN-MFIEmn . (2.20)

2.3.5

Computation of the RHS Vectors

Each element of the RHS vector for T-EFIE can be calculated by testing the incident electric field as

vT-EFIEm = −i kη

Z

Sm

drtm(r) · Einc(r). (2.21)

Similarly, the RHS vector for N-MFIE can be obtained using vN-MFIEm = −

Z

Sm

(25)

Finally, the linear combination of (2.21) and (2.22) yields the RHS vector for T-N-CFIE, i.e.,

vT-N-CFIEm = α(vT-EFIEm ) + (1 − α)(vN-MFIEm ). (2.23) In the multiple excitation case, the RHS vector will become a matrix and each column of the matrix can be calculated using (2.23).

2.3.6

RWG Functions

The expansion of the unknown surface current density can be performed using the oriented Rao-Wilton-Glisson (RWG) [26] functions on small planar triangles. It is important to mention that although triangles are planar, two triangles are not necessarily co-planar. To implement a Galerkin approach for discretization, we use RWG functions for basis and testing functions. Surfaces of geometries are meshed with planar triangles in accordance with the RWG functions. Each basis function is associated with an edge, hence the number of unknowns, N , is equal to the total number of edges in the triangulation. For high accuracy, triangle sizes must be small, but this leads to problems with large numbers of unknowns. As a rule of thumb, we choose the average size of the mesh to be about one-tenth of the wavelength (λ/10).

The RWG set of basis and testing functions exhibits constant normal and linear tangential variations along the edge. The normal (perpendicular) com-ponent of the RWG function on the edge is maximum and equal to unity. The normal component of the RWG basis function is continuous on the common edge. These properties will lead to a piecewise constant expansion of the surface current density [26].

The triangular basis functions are defined as

bn(r) =              ln 2A+ n (r − r+n), r ∈ Sn+ ln 2A− n (r−n − r), r ∈ Sn− 0, otherwise. (2.24)

(26)

Equation (2.24) can be rewritten as bik(r) = ±

lik

2Ai

(r − rik)δi(r). (2.25)

In the equation above, δi(r) is defined as

δi(r) =    1, r ∈ Ai 0, otherwise. (2.26) An important property of RWG functions is that their divergence is finite every-where: ∇ · bn(r) =              ln A+ n , r ∈ Sn+ − ln A− n , r ∈ Sn− 0, otherwise. (2.27)

Figure 2.1: The RWG function defined on triangular domains.

2.4

Iterative Solvers

In this section, we give a brief introduction to iterative solvers. Direct solution of a dense N × N matrix equations using conventional methods, such as Gaussian

(27)

elimination, requires O(N3) operations and is very costly [15]. The most

impor-tant reason for using iterative solutions is to reduce the computational complexity of the matrix-equation solution. In the ideal case, we would like to have a com-plexity of O(N ) or O(N log N ) both in memory and execution time.

The iterative solver, requires at least one MVM at each iteration. Hence, reducing the complexity of the MVM will lead to a low-complexity solution. The iterations continue until the approximate solution, which is obtained in each iteration, becomes accurate enough to satisfy the practical criteria.

One of the methods that can reduce the complexity of MVM is the multilevel fast multipole algorithm (MLFMA) [16]. This algorithm has both the memory and the CPU time complexities of O(N log N ).

2.5

Multilevel Fast Multipole Algorithm

Discretization of an integral-equation formulation with MOM leads to a dense N × N matrix equation. The matrix equation can be solved using direct or iterative solvers. As stated in the previous section, ordinary direct solution methods are too expensive due to their high computational complexity of O(N3). On the other hand, typical iterative solvers have the complexity of O(N2) for memory requirements and execution time, because they require MVM at each iteration. Many methods have been proposed to accelerate the solutions of matrix equations. Most of these so-called “fast” methods take advantage of redundancies in the MOM impedance matrix.

One of the most powerful methods is the MLFMA. Due to its reduced com-putational complexity of O(N log N ), this method has been widely used in recent years to solve extremely large electromagnetics problems. In MLFMA, an oct-tree structure of L levels is constructed by placing the object in a cubic box and divid-ing the geometry into clusters (sub-boxes), recursively. The recursion continues until the the minimum size of the lowest-level clusters becomes approximately

(28)

one-fourth of a wavelength (λ/4). Then, MLFMA computes the interaction be-tween testing and basis functions in a group-by-group manner. The required MVM is decomposed into two parts as

Z(N ×N )· a = ZNF(N ×N ) · a + Z FF

(N ×N )· a. (2.28)

In (2.28), ZNF

(N ×N ) denotes the near-field interactions (NFI) matrix, whereas

ZFF(N ×N ) stands for the far-field interactions (FFI) matrix. The NFI matrix is calculated directly using the ordinary MOM and stored in memory in a data-sparse format. MLFMA efficiently performs the multiplication of the coefficient vector and the FFI matrix in three stages: aggregation, translation, and disaggre-gation. In MVM, these stages are performed on the tree structure in a multilevel scheme [16].

In the aggregation stage, the radiated fields of clusters are calculated from the bottom of the tree structure to the highest level. At the lowest level, the radiated field of a cluster is achieved by adding the radiation patterns of the basis functions inside each cluster. These radiation patterns are computed and stored in memory in the setup phase of the program. At higher levels, the radiated field of each cluster originates from the fields of its sub-clusters. These radiated fields are translated into incoming fields in the translation stage. Finally, in the disaggregation stage, the total incoming fields at the cluster centers are calculated from the top of the tree to the lowest level. In the lowest level, the incoming fields are received by testing functions [28].

In MLFMA, all fields are sampled on the unit sphere as a function of θ and φ. The number of samples required for each cluster is proportional to the size of the cluster and depends on the desired number of accurate digits. Because we set the minimum cluster size at the lowest level (as 0.25λ), the cluster size at level ` is a` = 2`−3λ. The total number of sampling points is of O(T`2). In the worst-case

scenario, T` can be computed via

T` ≈ 1.73ka`+ 2.16(d0)2/3(ka`)1/3, (2.29)

(29)

2.6

Surface Operators and Secondary Fields

To formulate scattering and radiation problems, three different operators are defined: T {X}(r) = ik Z S dr0X(r0) + 1 k2∇ 0· X(r0)∇g(r, r0 ), (2.30) K{X}(r) = Z S dr0X(r0) × ∇0g(r, r0) and (2.31) I{X}(r) = X(r), (2.32)

where S is the surface of the 3-D object with an arbitrary shape. In (2.30)-(2.32), X is either the equivalent electric current (J ) or the equivalent magnetic current (M ) on the surface, k = ω√µ is the wavenumber, and g(r, r0) denotes the scalar Green’s function. Decomposing the operator K in (2.31) into its principal value and limit parts yields

K{X}(r) = KPV{X}(r) −

Ωi

4πI

×n{X}(r),

(2.33) where Ωi denotes the internal solid angle, and

I×n{X}(r) = ˆn × X(r). (2.34)

In (2.34), ˆn is the outward unit vector. By imposing the boundary condition on the electric field (E), the radiated electric field can be calculated as

E(r) = ηT {J }(r) − KPV{M }(r) +

Ωi

4πI

×n{M }(r),

(2.35) and a dual expression results by doing the same on the magnetic field (H):

H(r) = 1 ηT {M }(r) + KPV{J }(r) − Ωi 4πI ×n{J }(r), (2.36) where η is the wave impedance. In (2.35) and (2.36), J and M are the equivalent surface currents, defined as

J (r) = I×n{H}(r) = ˆn × H(r) and (2.37)

M (r) = −I×n{E}(r) = −ˆn × E(r). (2.38) The fields in (2.35) and (2.36) can be expressed in terms of the spherical compo-nents or can be referred to a Cartesian system [22].

(30)

2.7

Radar Cross Section

Once we solve the unknown coefficients using MLFMA, the induced surface elec-tric currents are determined. We can compute the secondary elecelec-tric field (the scattered field) on the surface of a PEC scatterer using (2.35):

Es(r) = ηT {J }(r). (2.39)

Then, the copolar and cross-polar components of the RCS are computed as   σφφ σφθ σθφ σθθ  = lim r→∞4πr 2    Esφ2(r) Ei φ2(r) Esφ2(r) Ei θ2(r) Es θ2(r) Eiφ2(r) Es θ2(r) Eiθ2(r)   , (2.40)

where Ei(r) is the incident field, and the scattered field Es(r) is computed in the far zone of the target [4].

The unit of the RCS is area, and one the most common references is one meter squared. Therefore, a designation is dB/m2 (or dBsm) for a 3-D RCS. When the transmitter and receiver are at the same location, the RCS is usually referred to as monostatic (or back-scattered), and as bistatic when the two are at different locations. A plot of the RCS as a function of the space coordinates is usually referred to as the RCS pattern [29].

2.8

Angular Resolution

The minimum number of monostatic observation angles to obtain a full represen-tation of back-scattered fields can be determined by the Nyquist-Shannon sam-pling theorem. The maximum angular resolution just depends on the geometry of object. For an object with a maximum diameter of d, the maximum angular resolution can be calculated via

∆θmax= ∆φmax=

λ

2d, (2.41)

where λ is the corresponding wavelength of the incident plane wave. In some applications, we require a specific angular response resolution, which means a

(31)

higher sampling rate of incident angles is needed. Hence, we enforce high linear dependencies between columns of the RHS matrix [30].

(32)

Chapter 3

Implementation

3.1

Adaptive Cross Approximation

The ACA algorithm [14] takes advantage of the rank-deficient nature of matrices that contain linear dependencies and computes a low-rank decomposition succes-sively with a finite number of rank-1 approximations. Hence, a matrix of this type can be approximated by a low-rank representation [19]. A schematic view of the ACA algorithm is illustrated in Fig. 3.1.

SVD has a computational complexity of O(N3), where N is the number of

unknowns. The considered full matrix must also be computed and stored explic-itly [15].

One main advantage of the ACA algorithm is its purely algebraic nature. As opposed to SVD, development and implementation of ACA do not depend on the full knowledge of the matrix [32]. Moreover, due to its algebraic nature, ACA can be modular and very easily integrated into existing codes. The complexity of ACA is O(k2(m + n)) and needs only k(n + m) elements to be calculated.

(33)

Figure 3.1: A schematic view of the ACA algorithm.

into

A(m×n)= U(m×k)· V(k×n) + R(m×n), (3.1)

where U ∈ Cm×k, V ∈ Ck×n, and R ∈ Cm×n is the error matrix derived from the

approximation. The decomposition provided in (3.1) approximates A with ˜A by multiplying two matrices of rank at most k. Details of the ACA algorithm are as follows:

Initialize the first row index, I1, and set ˜A = 0. Then, initialize the first row of

the approximate error matrix as ˜

R(I1, :) = A(I1, :). (3.2)

The first column index, J1, should be chosen as the column index of the maximum

element of ˜R(I1, :). Normalizing ˜R(I1, :) with ˜R(I1, J1) results in the first row of

V , i.e.,

V (1, :) = ˜R(I1, :)/ ˜R(I1, J1). (3.3)

Fill the first column of the approximate error matrix using the J1th column of

(34)

Algorithm 1 Partially Pivoted Adaptive Cross Approximation Let k := 1; F := 0 repeat find ik ˜ vk:= A(m×n)(ik, 1 : n) for ` := 1, ..., k − 1 do ˜vk := ˜vk− (u`(ik))v` F := F ∪ {ik}

if ˜vk does not vanish then

jk:= argmaxj=1,...,n|˜vk(j)| vk := (˜vk)(jk) −1 ˜ vk uk:= A(m×n)(1 : m, jk) for ` := 1...k − 1 do ˜vk := uk− (v`(jk))u` end for k := k + 1 end if end for

until the stoping criterion (3.6) is fulfilled or F = {1, ..., m}.

the maximum element in the first column of the approximate error matrix. We should calculate the norm of the two rank-1 approximants of A and check the stopping criterion. If the criterion is not satisfied, the iterations will start. At the rth iteration, we update the Irth row of the approximate error matrix as

˜ R(Ir, :) = A(Ir, :) − r−1 X `=1 U (Ir, `) · V (`, :). (3.4)

The index of the maximum element of ˜R(Ir, :) is Jr. Similar to the initialization

step, the rth row of V is the Irth row of the approximate matrix that is normalized

by ˜R(Ir, :)/ ˜R(Ir, Jr). In the next step, the Jrth column of U is updated and we

obtain the rth column of U , i.e.,

U (:, Jr) = ˜R(:, Jr) = A(:, Jr) − r−1

X

`=1

V (`, Jr) · U (:, `). (3.5)

The stopping criterion controls wheter the condition

||U (:, Jr) · V (Ir, :)|| ≤ || ˜A(r)|| (3.6)

(35)

Frobenius norm and || ˜A|| is computed as || ˜A(r)||2 = || ˜A(r−1)||2 (3.7) + 2 r−1 X j=1 |UH(:, j) · U (:, r)| · |VH(j, :) · V (r, :)| + ||U (:, r)||2· ||V (r, :)||2.

The iterations continue until the criterion in (3.6) is fulfilled. In each iteration, we choose rows and columns of A that are different from previous iterations.

The ACA algorithm terminates when the error matrix (not the approximate error matrix) at the kth iteration has a norm less than the product of the norm of A and a prescribed threshold , i.e.,

||R|| ≤ ||A||. (3.8)

The criterion defined in (3.8) requires complete knowledge of A, which is why we replace this criterion with the one in (3.6). In this way, a low-rank approximation of the original matrix is obtained by requiring only partial knowledge of A. In other words, the memory and the CPU time scale as O(k(m + n)) and O(k2(m +

n)), respectively. The efficient rank k will be further reduced by means of the recompression technique [32].

3.2

QR Decomposition

The QR factorization of a matrix A ∈ Cm×n, with m ≥ n, is given by

A = Q · R = Q · " R1 0 # , (3.9)

where Q ∈ Cm×m is a unitary matrix, R ∈ Cm×n, and R

1 ∈ Cn×n is an upper

triangular matrix. If A is non-singular, then R1is of full rank n. The factorization

in (3.9) can be represented as A =hQ1 Q2 i · " R1 0 # = Q1· R1, (3.10)

(36)

where Q1 ∈ Cm×n consists of the first n columns of Q.

The matrices Q and R can be constructed by means of Householder transfor-mations. It is known that the Householder method is numerically stable because of the orthogonalization process. The unitary matrix Q can be written as a product of the elementary Householder matrices:

Q = H1· H2· · · Hs, (3.11)

where s = min (n − 1, m − 1). In (3.11), Hi (i ∈ {1, 2, · · · , s}) is an unitary

matrix of the form

Hi = In−

2 ||v||2vv

H, (3.12)

where v is a column vector (called the Householder vector). For a rank-deficient matrix A, QR factorization will not provide a basis for the range of A. To overcome this problem, a column-permuted version of A can be factorized [15].

3.3

Singular Value Decomposition

The SVD of the matrix A ∈ Cm×n is

A(m×n)= UΣ(m×m)· Σ(m×n)· VHΣ(n×n), (3.13)

where UΣ ∈ Cm×m and VΣ ∈ Cn×n are unitary matrices. In (3.13), Σ ∈ Cm×n is

Σ = diag(σ1, · · · , σp), (3.14)

where p = min{m, n} and σ1 ≥ σ2 ≥ · · · ≥ σp ≥ 0. Defining q as the rank of A

by

σ1 ≥ · · · ≥ σq > σq+1 = · · · = σp = 0, (3.15)

the SVD of A can be represented as the summation of the outer product of q rank-1 matrices. The summation can be truncated by k0 < q = rank(A) as

˜ A = k0 X i=1 σiuivHi , (3.16)

(37)

where ˜A is the low-rank approximation of A with an effective rank k0.

The matrix ˜A is called a truncation of A to rank k0. We will refer to this truncation operation as a reduced SVD (RSVD). The SVD method is not a practical approach for computing low-rank approximations because it has a high computational cost and requires complete knowledge of the all matrix elements. However, we will take advantage of SVD to recompress the matrices obtained from ACA [15].

3.4

Recompression Technique

In this section, we present a recompression technique that reduces the required amount of storage and hence accelerates the solution of the problem. Although ACA generates high-quality approximations, the amount of required storage can still be reduced [33, 34].

Using RSVD, we present an efficient algorithm (Algorithm 2) for the re-compression of a low-rank matrix A(m×n) ≈ U(m×k) · V(k×n), such that

˜

A(m×n) ≈ U˜(m×k0) · V˜(k0×n) with rank k0 is less than k. In other words,

rank k0 will be replaced by rank k, obtained from ACA, by comparing the singu-lar values to the given threshold SVD. The method uses the QR decomposition

and calculates the SVD in a very efficient manner [33]. Assume we have computed the QR decompositions

U = QU · RU (3.17)

and

V = RHV · QH

V (3.18)

of U ∈ Cn×k and V ∈ Ck×m, respectively. According to [15], this can be done

with 4k2(m + n) − 8 3k

3 operations. The outer product R

U· RHV of the two upper

triangular matrices RU and RV needs k3(2k2+112 k −1) operations [33]. The result

of the outer product is then decomposed using SVD into

(38)

Algorithm 2 Recompression Algorithm procedure RECOMPRESS(U , V , )

Compute a truncated QR factorization of U and V : U = QU · RU; VH = Q V · RV; ˆ R = RU· RHV; Compute an SVD of ˆR: ˆ R = ˆU · ˆΣ · ˆVH; k0 = 1; while σk0 > σ1 do k0 = k0+ 1 end while URACA (m×k0) = QU · ˆU ; VRACA,H(n×k0) = QV · ˆV · ˆΣ; end procedure

where the cost of the above operation is 22k3. Note that Q

U· ˆU and QV · ˆV both

have orthonormal columns, and hence the resulting decomposition

A ≈ U · V = (QU · ˆU ) · ˆΣ · (QV · ˆV )H (3.20)

is an SVD of A. Together with the multiplications of QU · ˆU and QV · ˆV ,

which require k(2k − 1)(m + n) operations, the number of operations sum up to approximately 6k2(m + n) + 20k3 operations. As a conclusion, the recompressed

low-rank approximation of the matrix A(m×n) can be computed within O(k2(m +

n) + k3) operations; this is much lower than the cost of computing the SVD of

A(m×n) with 14mn2+ 8n3 complex operations [15]. After all these computations,

an RSVD will be applied to replace rank k with a smaller rank k0. After this recompression the final low-rank approximation will be in the form of

(39)

3.5

Application in Multiple Right-Hand Side

Problems

As discussed earlier, SIEs can be converted into a matrix equation for multiple excitation problems using MOM. The matrix equation can be represented as

Z(N ×N ) · X(N ×M )= V(N ×M ), (3.22)

where Z ∈ CN ×N is the matrix of interactions between N testing and basis functions, X ∈ CN ×M contains unknown coefficient vectors corresponding to

the excitation vectors in the RHS matrix V ∈ CN ×M, and M is the number of

excitations. For the case of multiple 2-D MRCSs, the geometry is illuminated by M plane waves from different spherical directions (θi, φi), where θi ∈ [θmin, θmax]

and φi ∈ [φmin, φmax]. The solution of (3.22) for one RHS can be accelerated

by MLFMA. Calculating multiple MRCSs requires multiple solutions. Despite the reduced complexity of MLFMA, the solutions for multiple RHSs is a time-consuming process [16].

As discussed in Section 2.8, in some applications we require a specific angu-lar response resolution, which means a higher sampling rate of incident angles. Hence, we enforce high linear dependencies between columns of the RHS matrix in (3.22). Using this fact, we can reduce the number of solutions from M to k0 that are defined as the effective rank of V .

Various techniques can be incorporated to obtain a low-rank approximation of the RHS matrix because of its rank-deficient nature. One method is to use SVD [7]. For a given threshold, SVD results in a low-rank approximation of a matrix with the minimum rank [15]. Another technique is to use rank-revealing QR decomposition [6]. Both methods have two main disadvantages: they have a high computational cost and require full knowledge of the RHS matrix elements. An alternative technique for finding the low-rank approximation of a matrix is ACA [13], which only requires partial knowledge of the matrix. Furthermore, the computational cost of ACA is much less than the SV and QR decompositions. For the RHS matrix, V(N ×M ), the memory usage complexities of QR factorization

(40)

and SVD scales as O(N M ) because these methods require complete knowledge of V . However, ACA has a memory complexity of O(k(M + N )), where k is the effective rank of V obtained by ACA and k < min (N, M ). The computational complexities of SV and QR decompositions are O(max(N, M )M2). The required

operations for ACA scales as O(k2(M + N )).

Despite the fact that SVD leads to a minimum-rank approximation [15], it can be applied only on relatively small problems due to its high computational cost. Compressing the RHS matrix using ACA will take much less time but there will be some redundant columns in the resulting decomposed matrices. We can apply a recompression technique to reach a decomposition of a lower rank k0 compared to k, obtained from ACA.

For a given threshold , we use ACA to compress and factorize the RHS matrix. Applying ACA on V leads to

V(N ×M )≈ A(N ×k)· B(k×M ), (3.23)

where A(N ×k) ∈ CN ×k, B(M ×k)∈ CM ×k, and k denotes the effective rank of V . To

obtain a decomposition of V , with an effective rank k0 less than k, we employ the recompression technique in Algorithm 2. The method uses the QR decomposition and calculates SVD in an efficient manner. We apply a QR factorization for A to get

A(N ×k) = QA(N ×k)· RA(k×k), (3.24)

and for B to obtain

BH(M ×k) = QB (M ×k)· RB (k×k), (3.25)

where matrices QA and QB are unitary matrices. Substituting (3.24) and (3.25)

in (3.23) leads to

V(N ×M ) ≈ QA(N ×k)· RA(k×k)· RBH(k×k)· QBH(M ×k). (3.26)

Then, we perform a truncated SVD with a given threshold  on the product of RA and RHB so that (3.23) becomes

V ≈ A · B ≈ QA· ˆA · QB· ˆB · ˆΣH

(41)

and we have already accomplished the recompression process. By defining ARACA(N ×k0)= QA· ˆA (3.28)

and

BRACA,H(k0×M ) = QB· ˆB · ˆΣH, (3.29)

the desired decomposition is achieved. After this recompression, the final low-rank approximation will be in the form of

V(N ×M )≈ ARACA(N ×k0)· BRACA(k0×M ), (3.30)

where k0 is the effective rank of the RHS matrix and is less than the rank k achieved by ACA.

The proposed recompression technique can be computed within O(k2(N + M ) + k3) operations. This way, we have computed the low-rank approximation of

V with a lower complexity compared to SVD. Furthermore, the achieved effective rank, k0, is very close to the rank that we could obtain from ordinary SVD.

We can utilize the low-rank approximation of the RHS matrix, which we obtained in (3.30), to accelerate the solution of the matrix equation in (3.22). An approximate solution to (3.22) can be achieved by substituting (3.30) in (3.22) and rewriting it as

X ≈ Z−1· A · B, (3.31)

where Z−1 denotes a standard solution, i.e., MLFMA in this work, but other fast methods can also be used to accelerate the solution [16, 17, 18, 19]. The main advantage of the formulation in (3.31) is that it reduces the solution time immensely, because the number of solutions decreases from M to k0. In other words, after solving

ˆ

X = Z−1· A, (3.32)

where ˆX ∈ CN ×k0, we can construct X from

(42)

3.6

Review of Other Methods

In this section, we will review other methods for the fast solution of multiple radar cross section problems. In a major classification, we can divide the solu-tions into three types: algebraic methods, function approximation methods, and interpolation techniques. The provided method in this thesis, i.e., RACA, is a purely algebraic method.

3.6.1

Algebraic Methods

Algebraic methods benefit from redundancies in the RHS matrix. These methods are essentially based on the compression of the excitation matrix assembled over all plane wave incident angles using algebraic techniques. Removing the redun-dancies in the excitation matrix using QR decomposition [6], SVD [7], ID [8], and ACA [13] or its recompressed version, RACA, leads to fast RCS computations. As stated in (3.22), for problems with multiple excitations, the matrix equation can be written as

Z(N ×N ) · X(N ×M )= V(N ×M ). (3.34)

Because all algebraic methods compress the RHS matrix, we will provide a new formulation for (3.34) using low-rank approximations of V . Hence, an approxi-mate solution to (3.34) can be achieved via

X ≈ Z−1· A · BH, (3.35)

where Z−1 denotes a standard solution, i.e., MLFMA in this work. In (3.35), matrices A and B are obtained from applying a compression method.

3.6.1.1 QR Decomposition

As discussed above, the vectors that span the column space of V are much lower than the columns in V due to redundancies in V . Using this fact, the number of required matrix solutions reduces from M to r. The effective rank r depends on

(43)

the column rank of V . Consider a rank revealing QR factorization (RRQR) [35] of V with a predefined tolerance, which can be written as

V ≈ Q · R. (3.36)

The matrix equation in (3.34) reduces to (3.35), where A = Q and BH = R.

The columns of Q are orthonormal and approximately span the column space of V . The modified Gram-Schmit algorithm can be used to obtain the low-rank approximation in (3.36) [6]. Although the method reduces the number of required solutions, it has a high computational cost compared to RACA. Furthermore, all elements of V must be available, which increases memory consumption.

3.6.1.2 Singular Value Decomposition

The approximate form in (3.35) can be achieved by applying SVD to the RHS matrix V . After employing SVD, the RHS matrix will be decomposed into

V = U · (S · F ), (3.37)

where U is an N ×N unitary matrix, the matrix S is an N ×M orthogonal matrix, and F is an M × M unitary matrix. By using the Eckart-Young theorem, V can be approximated by

V ≈ A · BH = Uk· (Sk· Fk), (3.38)

where the matrix S contains first k singular values [15]. Hence, the efficient rank of V , or the rank of matrices A and B, is equal to k, which means only k solutions are required, instead of M solutions (k < M ). The benefits of solving (3.35) instead of (3.34) depends mainly on the ratio of the effective rank k to the number of plane wave excitations M . For a given threshold , RSVD results in a very good low-rank approximation but at the cost of high computational complexity. Like RRQR, this method suffers from dependence on all the matrix elements, which increases the memory complexity [7].

(44)

3.6.1.3 Adaptive Cross Approximation

To reduce the computational effort of solving (3.34), we have to find the low-rank approximation of V . As discussed in this paper, one efficient method to obtain a low-rank decomposition of a matrix is ACA. We can bring ACA into play to achieve the following approximation for a given threshold :

V(N ×M )≈ A(N ×k)· BH(k×M ), (3.39)

where A(N ×k) ∈ CN ×k, B(M ×k) ∈ CM ×k, and k denotes the effective rank of V .

The complexity of this algorithm is O(k2(N + M )). The relation k < M holds for

rank-deficient matrices and makes the ACA algorithm efficient compared to the SVD of RRQR. Nevertheless, the effective rank obtained from ACA is less than that of SVD because ACA generates low-rank approximations using a limited number of rank-1 approximations and requires only partial knowledge of the elements of V [13]. In this work, we present a recompression technique that leads to a low-rank approximation with an effective rank, which is comparable to that of SVD. The complexity of RACA and ACA are almost the same.

3.6.1.4 Interpolative Decomposition

A low-rank approximation of V can be achieved by utilizing the ID and skeleton concepts. The accuracy of approximation obtained from the ID algorithm can be controlled by a threshold. In the ID algorithm, a matrix A consists of a subset of columns of V . There also exists a complex matrix BH with the following properties:

• Some subset of the columns of BH creates the identity matrix.

• All elements of BH have an absolute value less than or equal to 1.

• The kth singular value of BH is at least one.

• ||V − A · BH||

2 ≤ σk+1pk(n − k) + 1, where σk+1 is the (k + 1)st singular

(45)

The last statement provides a low-rank approximation of V [36]. Once we com-pute the low-rank decomposition, the matrix equation in (3.34) will reduce to (3.35). The complexity of the ID algorithm is less than the complexity of SVD, but the former has a higher computational complexity compared to the ACA or RACA algorithms. ID skeletonization is a sub-optimal algorithm [8] and the resulting effective rank is higher than the ranks that can be obtained using the RSVD or RACA algorithms.

3.6.2

Rational Function Approximation

One efficient approach to the solution of RCS problems over a wide range of incident plane waves is the rational function approximation technique. There are three main approaches for creating approximated rational functions: asymptotic waveform evaluation (AWE) [9, 10], the Pad´e approximation, and model-based parameter estimation (MBPE) [11, 12]. We will briefly describe each method in the following sections.

3.6.2.1 Asymptotic Waveform Evaluation

This method was first developed for time-domain analysis. It has been applied to MOM for the fast solution of electromagnetics problems. The matrix equation for any frequency can be written as

Z(k) · a(k) = v(k), (3.40)

where k is the wavenumber. For a given frequency f0 and its corresponding

wavenumber k0, the solution of (3.40), i.e., a(k0), is obtained. The essential idea

behind AWE is expanding a(k) into a Taylor series, such as a(k) = a(k0) + a(1)(k0)(k − k0) +

a(2)(k

0)(k − k0)2

2! + · · · , (3.41)

where a(n)(k

0) is the nth derivative of a(k), evaluated at k0. (3.41) can be written

as a(k) = ∞ X n=0 qn(k − k0)n. (3.42)

(46)

In (3.42), qn is the nth moment and can be computed via qn= a (n)(k 0) n! , n = 0, 1, 2, · · · , (3.43) with a(0)(k

0) = a(k0). Combining (3.40), (3.42), and (3.43), a recursive

relation-ship to compute the nth moment, qn, can be written as

qn= Z−1(k0) " v(n)(k0) n! − n X m=0 (1 − δm)Z(m)(k0)qn−m m! # , (3.44)

where the Kronecker delta is defined as

δm =    1, m = 0 0, m 6= 0. (3.45)

In (3.44), $Z^{(n)}(k_0)$ and $v^{(n)}(k_0)$ are the $n$th derivatives of $Z$ and $v$ at the frequency $f_0$. To find the solution at any desired frequency within an acceptable range, we can use the moments defined in (3.44) to construct the solution vector [9].
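As an illustration, once the derivatives of Z and v at $k_0$ are available, the recursion in (3.44) can be evaluated with a single factorization of $Z(k_0)$. The following sketch assumes the derivative arrays are given and uses our own function name.

```python
import numpy as np
from math import factorial
from scipy.linalg import lu_factor, lu_solve

def awe_moments(Z_derivs, v_derivs, n_moments):
    """Sketch of the moment recursion in (3.44).

    Z_derivs[m] and v_derivs[m] hold the m-th derivatives of Z(k) and v(k)
    at k0, with index 0 giving Z(k0) and v(k0) themselves.
    """
    lu = lu_factor(Z_derivs[0])          # factor Z(k0) once, reuse for all moments
    q = []
    for n in range(n_moments):
        rhs = v_derivs[n] / factorial(n)
        # The (1 - delta_m) factor in (3.44) excludes the m = 0 term of the sum.
        for m in range(1, n + 1):
            rhs = rhs - (Z_derivs[m] @ q[n - m]) / factorial(m)
        q.append(lu_solve(lu, rhs))
    return q
```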

3.6.2.2 Padé Approximation

In most cases, the Taylor series expansion provides good results. However, the accuracy of this expansion is limited by its radius of convergence. To improve the accuracy of the solution, we replace the Taylor series expansion with the Padé approximation by matching it with a rational function, which can be written as

$$ a(k) = \sum_{n=0}^{L+M} q_n (k - k_0)^n = \frac{\sum_{i=0}^{L} x_i (k - k_0)^i}{1 + \sum_{j=1}^{M} b_j (k - k_0)^j}. \quad (3.46) $$

The details of computing the coefficients can be found in [10]. The rational function has a smaller error in comparison to polynomial approximation [10].
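To sketch the idea for a single entry of the solution vector (in practice the same solve is applied entry-wise to the moment vectors), the denominator coefficients follow from forcing the coefficients of orders L+1 to L+M to vanish, and the numerator follows by convolution. The function below is a hedged illustration assuming $L \geq M - 1$ and the $b_0 = 1$ normalization of (3.46); the function name is ours.

```python
import numpy as np

def pade_from_moments(q, L, M):
    """Sketch: [L/M] Pade coefficients of (3.46) from scalar moments q[0..L+M]."""
    q = np.asarray(q)
    # Denominator: coefficients of orders L+1 .. L+M of q(k) * denominator vanish.
    C = np.array([[q[L + n - j] for j in range(1, M + 1)]
                  for n in range(1, M + 1)])
    b = np.linalg.solve(C, -q[L + 1:L + M + 1])
    b = np.concatenate(([1.0], b))       # prepend the normalization b0 = 1
    # Numerator: convolution of the moments with the denominator coefficients.
    x = np.array([sum(b[j] * q[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return x, b
```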

3.6.2.3 Model-Based Parameter Estimation

Alternatively, matching the derivatives of $a(k)$ at the frequency $f_0$ results in the MBPE rational function; the details can be found in [11]. It can be shown that the Padé approximation and MBPE are equivalent: the Padé approximation is obtained using the Taylor series expansion, whereas MBPE is achieved by matching the derivatives of the function directly [12].

3.6.2.4 Cubic-Spline Interpolation

The cubic-spline (CS) interpolation is a numerical approximation method. It can be used to approximate the MRCS curve on a set of control points. Unlike AWE, derivatives of the impedance matrix are not required. The cubic spline is constructed from third-order polynomials, which pass through a set of nonuniform sampling nodes. Commonly, the second derivative is set to zero at the endpoints; in this way, we generate a "natural" cubic spline. The main idea of this method is to interpolate the solution matrix [5], which is obtained from the solution of the matrix equation for selected excitations. One main drawback of this method is its sampling of the RHS matrix, which makes the error of the method uncontrollable.
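A minimal sketch of this idea with SciPy is shown below; the node locations and the placeholder solution data are hypothetical, standing in for solution vectors computed by MLFMA at the selected excitations.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical coarse samples: each column is a solution vector at one angle.
theta_nodes = np.array([30.0, 35.0, 42.0, 50.0, 60.0])  # nonuniform nodes (deg)
samples = np.random.randn(8673, theta_nodes.size)       # placeholder solutions

# "Natural" spline: the second derivative is forced to zero at both endpoints.
spline = CubicSpline(theta_nodes, samples, axis=1, bc_type='natural')

theta_fine = np.arange(30.0, 60.0 + 1e-9, 1.0)          # 1-degree resolution
a_interp = spline(theta_fine)                           # interpolated solutions
```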


Chapter 4

Numerical Results

We performed three numerical experiments to exhibit the efficiency and the accuracy of the proposed method. All simulations are performed on a server with two quad-core Intel Xeon X5355 processors with a clock rate of 2.66 GHz and 16 GB of memory. The threshold for the biconjugate gradient stabilized (BiCGSTAB) solver [37] is set to $10^{-4}$. The MVMs required in each iteration of the BiCGSTAB iterative solver are accelerated by MLFMA. Brute-force (BF) multiple 2-D MRCS values are obtained from the solution of the matrix equation in (3.22), which is accelerated by MLFMA. The thresholds for the ACA and truncated SVD algorithms are set to $10^{-4}$.

4.0.3 NASA Almond

The first experiment is performed on the NASA almond geometry [38] at a frequency of 1 GHz. The corresponding wavelength is 30 cm. The NASA almond has a length of 1.7 m; hence, the electrical size of the largest dimension of the geometry is 5.6λ, as evident in Fig. 4.1. The surface of the NASA almond geometry is discretized with planar triangles with an average mesh size of 0.1λ, which leads to 8,673 unknowns. Based on (2.41), the maximum angular resolution must be


$$ \Delta\theta_{\max} = \Delta\phi_{\max} = \frac{\lambda}{2d} = \frac{0.3}{2 \times 1.7} = 0.088~\mathrm{rad} = 5.05^{\circ} \quad (4.1) $$

to obtain a full representation of the backscattered fields. In this example, we illuminate the NASA almond with φ-polarized plane waves incident from a sector defined by [θ_min, θ_max] = [30°, 60°] and [φ_min, φ_max] = [45°, 75°]. The angular resolution for the computation of a 2-D spatial response is set to 1°. As a consequence, obtaining the BF results requires 961 runs.

Figure 4.1: NASA almond geometry. The electrical size of the largest dimension of the geometry is 5.6λ at 1 GHz.

Figure 4.2(a) presents the 2-D spatial MRCS values corresponding to the BF runs, and the results obtained from RACA are illustrated in Fig. 4.2(b). The third plot in Fig. 4.2 presents the percentage of the relative error between the proposed method and the BF runs. The relative error is calculated via

$$ \text{Relative Error} = 100 \times \frac{\left|\sigma_{\mathrm{BF}} - \sigma_{\mathrm{RACA}}\right|}{\left|\sigma_{\mathrm{BF}}\right|_{\max}}, \quad (4.2) $$

where $\sigma_{\mathrm{BF}}$ denotes the MRCS values obtained from the direct calculation and $\sigma_{\mathrm{RACA}}$ denotes the MRCS values computed via RACA. Based on (4.1), a 1° resolution of the spherical incidence angles causes linear dependencies in the RHS matrix. A major portion of the existing linear dependencies between the columns of V is removed by ACA in the first step. Table 4.1 reports the statistics of this case, where one can see that ACA reduces the number of required solutions from 961 to 51.
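For reference, (4.2) amounts to a single array operation; the arrays below are placeholders standing in for the computed MRCS maps.

```python
import numpy as np

# Placeholder 31 x 31 MRCS maps over the (theta, phi) sector, linear scale.
sigma_bf = np.abs(np.random.randn(31, 31))
sigma_raca = sigma_bf * (1.0 + 1e-3 * np.random.randn(31, 31))

# Relative error of (4.2): normalized by the peak BF response, in percent.
rel_err = 100.0 * np.abs(sigma_bf - sigma_raca) / np.abs(sigma_bf).max()
```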

As per the earlier discussion, the low-rank approximation obtained from ACA also contains linear dependencies. The proposed recompression technique removes the dependencies in the matrices obtained from the ordinary ACA and results in a better low-rank approximation of rank 40, which is less than 51. In other words, the truncated SVD of the RHS matrix is obtained at a low computational cost.

Figure 4.2: φφ-polarized 2-D MRCS of the NASA almond. 2-D MRCS (a) obtained from the BF runs and (b) computed using the RACA. (c) The percentage of the relative error.

To analyze the efficiency of RACA, we use different interpolation techniques to obtain 1° resolution from BF runs performed with various angular intervals. The first set of interpolations is performed using the samples with a resolution of 2°. The required number of BF runs in this case is 16 × 16 = 256. The result of the CS interpolation is provided in Fig. 4.3(a) and the percentage of the relative error is illustrated in Fig. 4.3(b). We present the 2-D MRCS values obtained from the linear and nearest-neighbor interpolation techniques in Fig. 4.3(c) and Fig. 4.3(e), respectively. The percentages of the relative error of these two interpolation methods are provided in Fig. 4.3(d) and Fig. 4.3(f), respectively. Despite the large number of BF runs required by interpolation, its error rate is much higher than that of RACA.

Table 4.1: Statistics of the NASA Almond Simulations

    Frequency                          1 GHz
    Number of Unknowns                 8,673
    Number of Incident Plane Waves     961
    ACA Threshold                      $10^{-4}$
    SVD Threshold                      $10^{-4}$
    $k$ in ACA                         51
    Compression (%)                    94.6
    $k'$ of RACA                       40
    Compression (%)                    95.6
    Time for BF Run (s)                5,679.6
    Time for RACA Run (s)              376.2


Figure 4.3: φφ-polarized 2-D MRCS values of the NASA almond obtained using different interpolation methods. The plots have a resolution of 1° and the sampling interval is 2°. (a) 2-D MRCS values obtained from the CS interpolation method. (b) The relative error in percent for the CS interpolation method. (c) 2-D MRCS values resulting from the linear interpolation method. (d) The relative error in percent corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor interpolation technique. (f) The relative error in percent for the nearest-neighbor interpolation method.

Another set of simulations is performed using the samples with a 3° angular resolution, and the results are provided in Fig. 4.4. Since we use fewer samples compared to the data with 2° resolution, the interpolations lead to higher error rates. The number of required BF runs for this case is 121.


Figure 4.4: φφ-polarized 2-D MRCS values of the NASA almond obtained using different interpolation methods. The plots have a resolution of 1° and the sampling interval is 3°. (a) 2-D MRCS values obtained from the CS interpolation method. (b) The relative error in percent for the CS interpolation method. (c) 2-D MRCS values resulting from the linear interpolation method. (d) The relative error in percent corresponding to the linear interpolation method. (e) 2-D MRCS values computed using the nearest-neighbor interpolation technique. (f) The relative error in percent for the nearest-neighbor interpolation method.

We also carry out a set of interpolation simulations with a sampling interval of 5°. As evident in Fig. 4.5, the 2-D MRCS patterns do not agree with the pattern obtained from the BF runs (Fig. 4.2). The number of required solutions for this interpolation is 25, which is comparable with that of RACA (i.e., 40). However, the error rate of the interpolation methods is not acceptable, which testifies to the accuracy and the efficiency of RACA.

Figure 4.5: φφ-polarized 2-D MRCS values of the NASA almond obtained using different interpolation methods. The plots have a resolution of 1° and the sampling interval is 5°. 2-D MRCS values obtained from (a) the CS interpolation method, (c) the linear interpolation method, (e) the nearest-neighbor interpolation technique and the corresponding relative percentages of error (b), (d), and (f), respectively.

All the aforementioned interpolations are performed with sampling intervals smaller than the maximum angular resolution provided in (4.1). Figure 4.6 illustrates the results of three different interpolations with a 10° sampling interval. As we would expect, the results do not agree with the 2-D MRCS values in Fig. 4.2, since the Nyquist rate is not satisfied for sampling.

Figure 4.6: φφ-polarized 2-D MRCS values of the NASA almond obtained using different interpolation methods. The plots have a resolution of 1° and the sampling interval is 10°. 2-D MRCS values obtained from (a) the CS interpolation method and (b) the corresponding relative percentage of error, (c) the linear interpolation method and (d) the corresponding relative percentage of error, (e) the nearest-neighbor interpolation technique and (f) the corresponding relative percentage of error.

