\input siamtex.sty
\overfullrule=0pt
\topmatter
\vol{8} \no{5, pp. 1--12} \date{January 1988}
%%%\copyyear{1999}
\code{000}
\title This is my title of this paper\\and this is the second line\endtitle
%%%\shorttitle{This is the short version of the title}
\recdate{*}{April 14, 1986; accepted for publication (in revised form) January 26, 1987.
This work was supported by the U.S. Department of Energy under contract X-000-XXX-00
and by the Air Force Office of Scientific Research under contract ABCDE-12-3456}
\author Ralph Youngen\fnmark{$\dag$} \and Janene Winter\fnmark{$\ddag$}\endauthor
%%%\shortauthor{R. Youngen \and J. Winter}
\address{$\dag$}{American Mathematical Society, 201 Charles St., Providence, RI 02904}
\address{$\ddag$}{American Mathematical Society, Box 6248, Providence, RI 02940}
\dedicate{Dedicated to someone.}
\abstract{Two multigrid methods for problems with discontinuous coefficients are
considered. In the first, as in previously considered methods, the coarse grid operators
are defined by Galerkin coarsening. In the second, such coarsening is also used, but with
auxiliary intermediate grids obtained by semicoarsening successively in each of the three
independent variables; this artifice simplifies coding considerably and results in
comparable convergence.}
\keywords circle, square\endkeywords
\subjclass xxx, yyy\endsubjclass
\endtopmatter
\heading{1}{Introduction}
The subject of this paper is a multigrid method for the numerical solution of
$$- \nabla \cdot (D(x,y,z) \nabla U(x,y,z))+\sigma(x,y,z)U(x,y,z) =F(x,y,z),\ (x,y,z)\in\Omega,$$
$$\nu (x,y,z)\cdot D(x,y,z)\nabla U(x,y,z)+\gamma (x,y,z)U(x,y,z)=0,\ (x,y,z)\in \partial \Omega,$$
where $\Omega$ is a bounded region in $R^3$ with boundary $\partial \Omega$,
$D=(D^1,D^2,D^3)$, $D^i$ is positive, $i=1,2,3$, $\sigma$ and $\gamma$ are nonnegative,
and $D^i$, $i=1,2,3$, $\sigma$, and $F$ are allowed to be discontinuous across internal
boundaries of $\Omega$. This subject was previously considered in [ABDP], [D1] for
two-dimensional problems and was extended to nonsymmetric problems and systems in [D2]
and [D3], respectively.
\thm{Theorem 1.1} The method in {\rm [D1]} was extended to three dimensions in
{\rm [BF]} and {\rm [S]}. For the standard multigrid coarsening (in which, for a given
grid, the next coarser grid has $1/8$ as many points), anisotropic problems require plane
relaxation to obtain a good smoothing factor. In {\rm [BF]} this plane relaxation was
performed by ICCG (an incomplete Cholesky conjugate gradient method), whereas in
{\rm [S]} only alternating line relaxation was used. The other relevant reference is
{\rm [T]}, in which plane relaxation is performed by a two-dimensional multigrid method;
that paper considers only constant coefficient problems but is nevertheless an
interesting and thorough study of the pathologies that can occur. \endthm
\prf{Proof} In this paper we consider two methods. The first method is basically the
method considered in [BF], with two differences: first, we perform plane relaxation by a
two-dimensional multigrid method, and second, we use a slightly different choice of
interpolation operator, which improves performance for nearly singular problems. In the
second method, coarsening is done by successively coarsening in each of the three
independent variables and then ignoring the intermediate grids; this artifice simplifies
coding considerably.
\dfn{Definition 1.1} We describe the two methods in \S\ 2. In \S\ 3 we discuss some
remaining details.
In \S\ 4 we present some numerical experiments, and in \S\ 5 we present some conclusions.
\heading{2}{The two methods}
In the multigrid method one attempts to solve a discrete approximation
$$ L^MU^M=F^M \leqno(2.1) $$
to a continuous equation
$$ LU=F.$$
To do this, one constructs a sequence of grids $G^1, \cdots ,G^M$ with corresponding mesh
sizes $h_1 > \cdots > h_M$ (usually $h_k=2h_{k+1}$). In its simplest mode of operation,
one does a fixed number, $IM$, of relaxation sweeps on Eq. $(2.1)$ and then drops down to
grid $G^{M-1}$ and the equation
$$ L^{M-1}V^{M-1}=f^{M-1}=I_{M}^{M-1}(F^M-L^Mv^M), \leqno(2.2) $$
where $V^{M-1}$ is to be the coarse grid approximation to $V^M \equiv U^M-u^M$, where
$v^M=u^M$ is the last iterate on grid $G^M$, and where $I_{M}^{M-1}$ is an interpolation
operator from $G^M$ to $G^{M-1}$. To solve Eq. $(2.2)$ approximately, one resorts to
recursion, taking $ID$ relaxation sweeps on grid $G^k$ before dropping down to grid
$G^{k-1}$, $M-1 \geq k \geq 2$, and the equation
$$ L^{k-1}V^{k-1}=f^{k-1}=I_{k}^{k-1}(f^k-L^{k}v^k). $$
When grid $G^1$ is reached, the equation $L^1V^1=f^1$ can be solved directly and
$v^2 \leftarrow v^2 +I_{1}^{2}V^1$ performed. Then one does $IU$ relaxation sweeps on
grid $G^{k-1}$ before forming $v^k \leftarrow v^k +I_{k-1}^{k}V^{k-1}$,
$3 \leq k \leq M$. (This description assumes $M \geq 3$, the cases $M=1$ or $2$ being
trivial.)
\lem{Lemma 2.2} We discuss first the choice for $I_{k-1}^k$ made in {\rm [BF]} and
{\rm [S]}, which is a generalization of the choice made in {\rm [ABDP]}, {\rm [D1]}. We
assume that $G^{k-1}$ is obtained from $G^k$ by standard coarsening; that is, if $G^k$ is
a tensor product grid $G_{x}^k \times G_{y}^k \times G_{z}^k$, then
$G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$, where $G_{x}^{k-1}$ is
obtained by deleting every other grid point of $G_x^k$ and similarly for $G_{y}^k$ and
$G_{z}^k$. We also assume that $L^k$ uses at most nearest and next-nearest neighbors (so
that the difference template is at most a twenty-seven-point one). For fine points
corresponding to coarse points, $I_{k-1}^{k}$ is just the identity. For a fine point $p$
on a coarse grid line between two coarse points, define ${(L^k)}^{C_1}$ at $p$ to be the
difference template obtained by summing the difference template of $L^k$ at $p$ in the
two directions perpendicular to the coarse grid line; ${(L^k)}^{C_1}$ is a
one-dimensional operator with a three-point template; thus, the equation \endlem
$${(L^{k})}^{C_1}v^k=0 \leqno(2.3\hbox{a})$$
gives $I^k_{k-1}$ at $p$ as a weighted average of data at the two coarse grid points.
For a fine point $p$ not on a coarse grid line but in the same plane as two coarse grid
lines, define ${(L^k)}^{C_2}$ at $p$ to be the difference template obtained by summing
the difference template of $L^k$ at $p$ in the direction perpendicular to the plane
containing the coarse grid lines; thus, the equation
$${(L^k)}^{C_2}v^k=0\leqno(2.3\hbox{b})$$
gives $I^k_{k-1}$ at $p$ as a weighted average of data at the eight nearest neighbors.
Four of these neighbors are fine grid points which are related via ${(L^k)}^{C_1}$ to the
four nearest neighbor coarse grid points; thus (2.3b) gives $I^k_{k-1}$ at $p$ as a
weighted average of data at the four nearest neighbor coarse grid points.
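To make $(2.3\hbox{a})$ concrete, here is a small illustration; the notation $c_W$,
$c_P$, $c_E$ is used only in this illustration. Suppose the collapsed three-point
template of ${(L^k)}^{C_1}$ at $p$ has entry $c_P$ at $p$ and entries $c_W$ and $c_E$ at
the two neighboring coarse grid points, so that $(2.3\hbox{a})$ reads
$c_Wv^k_W+c_Pv^k_p+c_Ev^k_E=0$; then
$$v^k_p=-{c_W\over c_P}\,v^k_W-{c_E\over c_P}\,v^k_E,$$
which is the weighted average of coarse grid data referred to above. For the interior
template of the constant coefficient seven-point Laplacian ($D\equiv 1$,
$\sigma\equiv 0$), summing the two perpendicular directions gives $c_P=2/h_k^2$ and
$c_W=c_E=-1/h_k^2$, so both weights equal $1/2$ and $I^k_{k-1}$ reduces to linear
interpolation along the coarse grid line.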
The remaining fine grid points have twenty-seven nearest and next-nearest neighbors; use
of the equation
$$L^kv^k=0\leqno(2.3\hbox{c})$$
together with ${(L^k)}^{C_1}$ and ${(L^k)}^{C_2}$ gives $I^k_{k-1}$ at such fine grid
points as a weighted average of data at the eight nearest neighbor coarse grid points.
The coarse grid operators are defined by
$$L^{k-1}={(I_{k-1}^k)}^{T}L^{k}I_{k-1}^k,\leqno(2.4)$$
and the restriction operator $I_k^{k-1}$ is ${(I_{k-1}^k)}^{T}$. Note that even if the
fine grid difference operator $L^M$ is a seven-point operator (using only nearest
neighbors), the coarse grid difference operators $L^k$, $k < M$, are twenty-seven-point
operators. Let $A^k$ be the matrix associated with $L^k$ and let $a^k_{ij}$ be the
entries of $A^k$, with $a^k_{ii}>0$. In testing the above method of [BF] for isotropic
problems, we noticed very poor performance if
$$0 < a^k_{ii}-\sum _{j\not= i}a^k_{ij}\ll a^k_{ii};\leqno(2.5)$$
this condition occurs if $\sigma$ is nearly zero. Such poor performance was also observed
in [BF]. In this case we also observed mild degradation for two-dimensional problems with
the method of [ABDP], [D1]. One cure is to define ${\tilde A}^k$ to be the matrix with
entries ${\tilde a}^k_{ij}$, where
$${\tilde a}^k_{ij}=a^k_{ij}\quad\hbox{if $i \not= j$};$$
$${}\leqno(2.6)$$
$${\tilde a}^k_{ii}=\cases{\sum\limits _{j\not= i}a^k_{ij}, &if $a^k_{ii}-\sum\limits_{j\not= i}a^k_{ij}\ll a^k_{ii}$,\cr a^k_{ii}, &otherwise.\cr}$$
Let ${\tilde L}^k$ be the associated difference operator, and define $I^k_{k-1}$ as above
using ${\tilde L}^k$ instead of $L^k$. For reference we define
$$ L^{k-1}={(I_{k-1}^k)}^{T}L^{k}I_{k-1}^k,\ I_{k-1}^k \hbox{ defined by $(2.3)$ using ${\tilde L}^k$ instead of $L^k$.} \leqno(2.7)$$
The computation of $(2.4)$ or $(2.7)$ is very involved. If $L^k$ is defined on a grid
with $n$ points, it requires $116n$ $[309n]$ floating-point operations if $L^k$ is a
seven-point [twenty-seven-point] operator.
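Although the computation of $(2.4)$ is involved in three dimensions, its effect is easy
to see in one dimension; the following is only an illustration, with notation local to
this paragraph, since the one-dimensional case does not arise in the method above. If
$L^k$ is the standard three-point operator $(L^kv)_i=(-v_{i-1}+2v_i-v_{i+1})/h^2$ on a
grid of mesh size $h$, then $(2.3\hbox{a})$ makes $I^k_{k-1}$ linear interpolation, and
$(2.4)$ yields, at interior coarse grid points $j$,
$$(L^{k-1}w)_j={-w_{j-1}+2w_j-w_{j+1}\over 2h^2}.$$
This is twice the standard three-point operator on the coarse grid of mesh size $2h$;
the factor $2$ reflects the choice $I^{k-1}_k={(I^k_{k-1})}^T$, whose row sums are $2$
rather than $1$, and is harmless because the right-hand side $f^{k-1}$ in $(2.2)$ is
restricted by the same operator.
\end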