Chaste can be used to set up solvers for more general (coupled) PDEs. To do this the
user just needs to code up the integrands of any finite element (FE) matrices or vectors,
without having to deal with set-up, looping over elements, numerical quadrature, assembly
or solving the linear system. If you have a set of coupled linear PDEs for which it is
appropriate to use linear basis functions for each unknown (for example, a reaction-diffusion
system), then it is relatively straightforward to set up a solver that will be parallel and
reliable, since all the base components are heavily tested.
Some solvers for general simple (uncoupled) linear PDEs are already provided in Chaste, such
as the SimpleLinearEllipticSolver. These are for PDEs that can be written in a generic
form (SimpleLinearEllipticPde, for example). However, coupled PDEs can’t easily be
written in a generic form, so the user has to write their own solver. In this tutorial
we explain how to do this.
For this tutorial the user needs to have read the solving-PDEs tutorials. It may also be
helpful to read the associated lecture notes,
in particular the slides on solving equations using finite elements if you are not familiar
with this (lecture 2), the slides on the general design of the Chaste finite element solvers
(lecture 3), and the first part of lecture 4.
Let us use the terminology “assembled in an FE manner” for any matrix or vector that is
defined via a volume/surface/line integral, and which is constructed by: looping over
elements (or surface elements, etc); computing the elemental contribution (i.e. a small
matrix/vector) using numerical quadrature; and adding to the full matrix/vector.
We only consider linear problems here. In these problems the discretised FE problem leads
to a linear system, Ax=b, to be solved once in static problems and at each time step in
time-dependent problems. There are two cases to be distinguished. The first case is where
BOTH A and b are assembled in an FE manner, b possibly being composed of a volume integral
plus a surface integral. The other case is where this is not true, for example where b = Mc+d,
where the vector d and matrix M are assembled in an FE manner, but not the vector c.
The Chaste PDE classes include ASSEMBLER classes, for setting up anything assembled in an
FE manner, and SOLVER classes, for setting up linear systems. In the general case, solvers
need to own assemblers for setting up each part of the linear system. However for the first
case described above (in which both A and b in Ax=b are assembled), we can use the design
where the solver IS AN assembler. We illustrate how to do this in the first example below.
Let us write a solver for the coupled 2-unknown problem
$$
\begin{align*}
\nabla^2 u + v &= f(x,y),\\
\nabla^2 v + u &= g(x,y),
\end{align*}
$$
where $f$ and $g$ are chosen so that,
with zero Dirichlet boundary conditions, the solution is given by $u = \sin(\pi x)\sin(\pi y)$,
$v = \sin(2\pi x)\sin(2\pi y)$.
(As a brief aside, note that the solver we write will in fact work with general Dirichlet-Neumann
boundary conditions, though the test will only provide all-Dirichlet boundary conditions. We
save a discussion on general Dirichlet-Neumann boundary conditions for the second example.)
Using linear basis functions, and a mesh with N nodes, the linear system that needs to be set up is
of size 2N by 2N, and in block form is:
$$
\begin{pmatrix} -K & M \\ M & -K \end{pmatrix}
\begin{pmatrix} U \\ V \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix},
$$
where $K$ is the stiffness matrix ($K_{ij} = \int \nabla\phi_i\cdot\nabla\phi_j \,dV$), $M$ the mass
matrix ($M_{ij} = \int \phi_i\phi_j \,dV$), $U$ the vector of nodal values of $u$, $V$ the vector of
nodal values of $v$, $b_1$ the vector with entries $\int f\phi_i \,dV$ ($i=1,\dots,N$), and $b_2$ the
vector with entries $\int g\phi_i \,dV$ (here $\phi_i$ are the linear basis functions).
This is the linear system which we now write a solver to set up.
Note
The main Chaste solvers assume a STRIPED data format, i.e. that the unknown vector is
$$
x = (U_1, V_1, U_2, V_2, \ldots, U_N, V_N)^T,
$$
with the unknowns for each node grouped together, rather than the block ordering
$(U_1,\ldots,U_N,V_1,\ldots,V_N)^T$.
We write down equations in block form as it makes things clearer, but have to remember that the code
deals with STRIPED data structures. Striping is used in the code for parallelisation reasons.
These are some basic includes as in the solving-PDEs tutorials
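For concreteness, a plausible set of includes is sketched below. These are all real Chaste headers, though the exact list depends on what the final test uses; PetscSetupAndFinalize.hpp must be included in any test that uses Chaste's parallel PDE machinery.

```cpp
#include <cxxtest/TestSuite.h>
#include <cmath>
#include "TetrahedralMesh.hpp"
#include "BoundaryConditionsContainer.hpp"
#include "ConstBoundaryCondition.hpp"
#include "ReplicatableVector.hpp"
#include "PetscTools.hpp"
#include "PdeSimulationTime.hpp"
#include "TrianglesMeshWriter.hpp"
// Sets up PETSc for the test suite (note: no main() is needed)
#include "PetscSetupAndFinalize.hpp"
```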
We need to include the following two classes if we are going to use a combination of
(element_dim, space_dim, problem_dim) that isn’t explicitly instantiated in
BoundaryConditionsContainer.cpp (see the bottom of that file); without these includes the test will
fail to link.
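Assuming the standard Chaste naming convention, these are:

```cpp
#include "AbstractBoundaryConditionsContainerImplementation.hpp"
#include "BoundaryConditionsContainerImplementation.hpp"
```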
These two classes will be used in writing the solver
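That is, the headers for the two parent classes named below:

```cpp
#include "AbstractAssemblerSolverHybrid.hpp"
#include "AbstractStaticLinearPdeSolver.hpp"
```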
We will solve a second problem, below, which will be time-dependent and will
require the following class
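```cpp
#include "AbstractDynamicLinearPdeSolver.hpp"
```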
The linear system is Ax=b where A and b are assembled in an FE manner, so we can use the
solver-is-an-assembler design. To construct our solver, we inherit from
AbstractAssemblerSolverHybrid, which links the solver classes to the assembler classes, and
AbstractStaticLinearPdeSolver. (For time-dependent problems, the second parent would be
AbstractDynamicLinearPdeSolver.) Note the template parameter PROBLEM_DIM below; in this case
it is 2, as there are two unknowns.
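A sketch of the declaration (the class name is ours; the template arguments of the hybrid parent are element dimension, space dimension, problem dimension and interpolation level):

```cpp
class MyTwoVariablePdeSolver
    : public AbstractAssemblerSolverHybrid<2/*elem dim*/, 2/*space dim*/, 2/*problem dim*/, NORMAL>,
      public AbstractStaticLinearPdeSolver<2,2,2>
{
private:
    // the methods sketched in the following blocks live here
};
```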
The function f
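Differentiating the exact solution quoted earlier gives the required source term; we sketch it as a private method of the solver class:

```cpp
double f(double x, double y)
{
    // chosen so that u = sin(pi x)sin(pi y) satisfies nabla^2 u + v = f
    return -2*M_PI*M_PI*sin(M_PI*x)*sin(M_PI*y) + sin(2*M_PI*x)*sin(2*M_PI*y);
}
```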
The function g
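And similarly:

```cpp
double g(double x, double y)
{
    // chosen so that v = sin(2 pi x)sin(2 pi y) satisfies nabla^2 v + u = g
    return -8*M_PI*M_PI*sin(2*M_PI*x)*sin(2*M_PI*y) + sin(M_PI*x)*sin(M_PI*y);
}
```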
The abstract assembler parent classes know how to assemble matrices and vectors, but the concrete
class needs to provide the integrand of the elemental contribution to A and b. This first
method returns the elemental contribution to the matrix A, given the provided basis functions
(rPhi, rGradPhi). The ‘3’s here represent the number of basis functions per element (i.e. the
number of nodes, as linear basis functions are being used). The returned matrix is 6 by 6
(problem_dim * num_bases_per_element = 2*3 = 6).
Set up the matrix, which corresponds to the elemental contribution to the block matrix
written above, taking into account the striped nature of the matrices and vectors.
(Note: the following can be done more efficiently using matrix slices and products,
see BidomainAssembler for example).
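A sketch of this method, with the signature as in Chaste's AbstractFeVolumeIntegralAssembler; the slow-but-clear nested loops mirror the block matrix above:

```cpp
c_matrix<double,2*3,2*3> ComputeMatrixTerm(
        c_vector<double,3>& rPhi,
        c_matrix<double,2,3>& rGradPhi,
        ChastePoint<2>& rX,
        c_vector<double,2>& rU,
        c_matrix<double,2,2>& rGradU,
        Element<2,2>* pElement)
{
    c_matrix<double,2*3,2*3> ret = zero_matrix<double>(2*3, 2*3);

    for (unsigned i=0; i<3; i++) // 3 basis functions per element
    {
        for (unsigned j=0; j<3; j++)
        {
            // -K in the two diagonal blocks (rows/columns are striped: u,v,u,v,...)
            for (unsigned k=0; k<2; k++) // 2 space dimensions
            {
                ret(2*i,   2*j)   -= rGradPhi(k,i)*rGradPhi(k,j);
                ret(2*i+1, 2*j+1) -= rGradPhi(k,i)*rGradPhi(k,j);
            }
            // M in the two off-diagonal blocks
            ret(2*i,   2*j+1) += rPhi(i)*rPhi(j);
            ret(2*i+1, 2*j)   += rPhi(i)*rPhi(j);
        }
    }
    return ret;
}
```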
Similarly compute the elemental contribution to the RHS vector
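Along the same lines (f and g are the private methods defined above):

```cpp
c_vector<double,2*3> ComputeVectorTerm(
        c_vector<double,3>& rPhi,
        c_matrix<double,2,3>& rGradPhi,
        ChastePoint<2>& rX,
        c_vector<double,2>& rU,
        c_matrix<double,2,2>& rGradU,
        Element<2,2>* pElement)
{
    c_vector<double,2*3> ret;
    for (unsigned i=0; i<3; i++)
    {
        ret(2*i)   = f(rX[0], rX[1])*rPhi(i); // integrand of b1 (u rows)
        ret(2*i+1) = g(rX[0], rX[1])*rPhi(i); // integrand of b2 (v rows)
    }
    return ret;
}
```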
Any class which inherits from both an assembler and a solver must
provide the following method, which links the two. Just copy and paste
the following.
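Namely (SetupGivenLinearSystem is provided by AbstractAssemblerSolverHybrid):

```cpp
void SetupLinearSystem(Vec currentSolution, bool computeMatrix)
{
    SetupGivenLinearSystem(currentSolution, computeMatrix, this->mpLinearSystem);
}
```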
The constructor takes in a mesh and boundary conditions container, and passes
them to the parent classes.
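For instance:

```cpp
public:
    MyTwoVariablePdeSolver(TetrahedralMesh<2,2>* pMesh,
                           BoundaryConditionsContainer<2,2,2>* pBoundaryConditions)
        : AbstractAssemblerSolverHybrid<2,2,2,NORMAL>(pMesh, pBoundaryConditions),
          AbstractStaticLinearPdeSolver<2,2,2>(pMesh)
    {
    }
```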
That is the solver written. Usage is the same as for the PDE solvers described in the
previous tutorials; have a look at the first test below.
Let us also write a solver for the following problem, which is composed of 3 parabolic PDEs
$$
\begin{align*}
u_t &= \nabla^2 u + v,\\
v_t &= \nabla^2 v + u + 2w,\\
w_t &= \nabla^2 w + g(t,x,y),
\end{align*}
$$
where $g(t,x,y) = t$ if $x>0.5$ and $0$ otherwise. This time we assume general
Dirichlet-Neumann boundary conditions will be specified.
The AbstractAssemblerSolverHybrid parent deals with the Dirichlet and Neumann boundary parts of the
implementation, so we, the writers of the solver, don’t have to worry about them. The user does have
to realise, though, that they are specifying NATURAL Neumann BCs, which are whatever appears naturally
in the weak form of the problem. In this case, the natural Neumann BCs are specifications of
$\partial u/\partial n = s_1, \partial v/\partial n = s_2, \partial w/ \partial n = s_3$,
which coincide with usual Neumann BCs. However, suppose the last equation was $w_t = \nabla^2 w + \nabla . (D \nabla u)$,
then the natural BCs would be:
$\partial u/\partial n = s_1, \partial v/\partial n = s_2, \partial w/\partial n + (D\nabla u).n = s_3$.
We need to choose a time discretisation. Let us choose an implicit discretisation, i.e.
$$
\begin{align*}
\frac{u^{n+1}-u^{n}}{\Delta t} &= \nabla^2 u^{n+1} + v^{n+1},\\
\frac{v^{n+1}-v^{n}}{\Delta t} &= \nabla^2 v^{n+1} + u^{n+1} + 2w^{n+1},\\
\frac{w^{n+1}-w^{n}}{\Delta t} &= \nabla^2 w^{n+1} + g(t^{n+1},x,y).
\end{align*}
$$
Using linear basis functions, and a mesh with N nodes, the linear system that needs to be set up is
of size 3N by 3N, and in block form is:
$$
\begin{pmatrix}
\frac{1}{\Delta t}M + K & -M & 0\\
-M & \frac{1}{\Delta t}M + K & -2M\\
0 & 0 & \frac{1}{\Delta t}M + K
\end{pmatrix}
\begin{pmatrix} U^{n+1}\\ V^{n+1}\\ W^{n+1} \end{pmatrix}
=
\begin{pmatrix} b_1\\ b_2\\ b_3 \end{pmatrix}
+
\begin{pmatrix} c_1\\ c_2\\ c_3 \end{pmatrix},
$$
where $K$ is the stiffness matrix, $M$ the mass matrix, $U^n$ the vector of nodal values of $u$ at
time $t_n$, etc. The vector $b_1$ has entries $\int \frac{u^n}{\Delta t}\phi_i \,dV$, and similarly
$b_2$ (with $v^n$); $b_3$ has entries $\int \left(\frac{w^n}{\Delta t} + g(t^{n+1},x,y)\right)\phi_i \,dV$.
Writing the Neumann boundary conditions for $u$ as $\partial u/\partial n = s_1(x,y)$ on $\Gamma$, a
subset of the boundary, $c_1$ has entries $\int_{\Gamma} s_1\phi_i \,dS$, and similarly for $c_2$ and $c_3$.
Let us create a solver for this linear system, which will be written in a way in which the RHS
vector is assembled in an FE manner, so that the solver-is-an-assembler design can be used.
Note that this solver inherits from AbstractDynamicLinearPdeSolver and PROBLEM_DIM is now equal
to 3. We don’t have to worry about setting up [c1 c2 c3] (we just need to take in a BoundaryConditionsContainer
and the parent will use it in assembling this vector). We do however have to tell it
how to assemble the volume integral part of the RHS vector, and the LHS matrix.
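A sketch of the declaration (again, the class name is ours; PROBLEM_DIM, the third template argument, is now 3):

```cpp
class ThreeParabolicPdesSolver
    : public AbstractAssemblerSolverHybrid<2,2,3,NORMAL>,
      public AbstractDynamicLinearPdeSolver<2,2,3>
{
private:
    // the methods sketched in the following blocks live here
};
```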
Define the function g(t,x,y)
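Again as a private method of the solver class, say:

```cpp
double g(double t, double x, double y)
{
    return (x > 0.5) ? t : 0.0;
}
```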
Provide the (elemental contribution to the) LHS matrix. The matrix is 9 by 9, where
9 = 3*3 = PROBLEM_DIM * NUM_NODES_PER_ELEMENT
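A sketch, matching the block matrix above. We assume the PdeSimulationTime::GetPdeTimeStepInverse() accessor for obtaining 1/dt (PdeSimulationTime is a real Chaste class; check its API if this has changed):

```cpp
c_matrix<double,3*3,3*3> ComputeMatrixTerm(
        c_vector<double,3>& rPhi,
        c_matrix<double,2,3>& rGradPhi,
        ChastePoint<2>& rX,
        c_vector<double,3>& rU,
        c_matrix<double,3,2>& rGradU,
        Element<2,2>* pElement)
{
    c_matrix<double,3*3,3*3> ret = zero_matrix<double>(3*3, 3*3);
    double inverse_dt = PdeSimulationTime::GetPdeTimeStepInverse();

    for (unsigned i=0; i<3; i++)
    {
        for (unsigned j=0; j<3; j++)
        {
            // M/dt + K on each of the three diagonal blocks
            for (unsigned var=0; var<3; var++)
            {
                ret(3*i+var, 3*j+var) += inverse_dt*rPhi(i)*rPhi(j);
                for (unsigned k=0; k<2; k++)
                {
                    ret(3*i+var, 3*j+var) += rGradPhi(k,i)*rGradPhi(k,j);
                }
            }
            ret(3*i,   3*j+1) -=   rPhi(i)*rPhi(j); // -M  (v term in the u equation)
            ret(3*i+1, 3*j)   -=   rPhi(i)*rPhi(j); // -M  (u term in the v equation)
            ret(3*i+1, 3*j+2) -= 2*rPhi(i)*rPhi(j); // -2M (w term in the v equation)
        }
    }
    return ret;
}
```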
Provide the volume elemental contribution to the RHS vector, ie the vector [b1 b2 b3] above
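Here rU holds the interpolated values of the current (previous-timestep) solution at the quadrature point, which is exactly what the $u^n/\Delta t$ terms require; a sketch:

```cpp
c_vector<double,3*3> ComputeVectorTerm(
        c_vector<double,3>& rPhi,
        c_matrix<double,2,3>& rGradPhi,
        ChastePoint<2>& rX,
        c_vector<double,3>& rU,      // interpolated (u^n, v^n, w^n) at rX
        c_matrix<double,3,2>& rGradU,
        Element<2,2>* pElement)
{
    c_vector<double,3*3> ret;
    double inverse_dt = PdeSimulationTime::GetPdeTimeStepInverse();
    double t = PdeSimulationTime::GetTime();

    for (unsigned i=0; i<3; i++)
    {
        ret(3*i)   = inverse_dt*rU(0)*rPhi(i);                         // b1 integrand
        ret(3*i+1) = inverse_dt*rU(1)*rPhi(i);                         // b2 integrand
        ret(3*i+2) = (inverse_dt*rU(2) + g(t, rX[0], rX[1]))*rPhi(i);  // b3 integrand
    }
    return ret;
}
```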
Define this method as before
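That is, exactly as in the first solver:

```cpp
void SetupLinearSystem(Vec currentSolution, bool computeMatrix)
{
    SetupGivenLinearSystem(currentSolution, computeMatrix, this->mpLinearSystem);
}
```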
The constructor is similar to before. One important point, however: by default the dynamic solvers
will reassemble the matrix each timestep. In this (and most other) problems the matrix is constant
and only needs to be assembled once. Make sure we tell the solver this, otherwise performance
will be destroyed.
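A sketch, assuming (as in some Chaste versions) a protected mMatrixIsConstant flag on the parent solver; check the AbstractDynamicLinearPdeSolver API for the exact mechanism:

```cpp
public:
    ThreeParabolicPdesSolver(TetrahedralMesh<2,2>* pMesh,
                             BoundaryConditionsContainer<2,2,3>* pBoundaryConditions)
        : AbstractAssemblerSolverHybrid<2,2,3,NORMAL>(pMesh, pBoundaryConditions),
          AbstractDynamicLinearPdeSolver<2,2,3>(pMesh)
    {
        // the matrix is constant in time, so tell the solver to assemble it only once
        this->mMatrixIsConstant = true;
    }
```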
Now the tests using the two solvers
Use the first solver to solve the static PDE. We apply zero Dirichlet boundary conditions
on the whole of the boundary for both variables.
The AbstractStaticLinearPdeSolver class, from which our solver
inherits, provides a Solve method.
Compare against the exact solution.
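Putting the pieces together, the test might look something like the following (inside a CxxTest suite; the mesh spacing and error tolerance are illustrative):

```cpp
void TestMyTwoVariablePdeSolver()
{
    TetrahedralMesh<2,2> mesh;
    mesh.ConstructRegularSlabMesh(0.01 /*h*/, 1.0 /*width*/, 1.0 /*height*/);

    // zero Dirichlet boundary conditions on the whole boundary, for both unknowns
    BoundaryConditionsContainer<2,2,2> bcc;
    bcc.DefineZeroDirichletOnMeshBoundary(&mesh, 0); // variable u
    bcc.DefineZeroDirichletOnMeshBoundary(&mesh, 1); // variable v

    MyTwoVariablePdeSolver solver(&mesh, &bcc);
    Vec result = solver.Solve();
    ReplicatableVector result_repl(result);

    // compare against the exact solution, remembering the STRIPED ordering u,v,u,v,...
    for (unsigned i=0; i<mesh.GetNumNodes(); i++)
    {
        double x = mesh.GetNode(i)->rGetLocation()[0];
        double y = mesh.GetNode(i)->rGetLocation()[1];
        double u_exact = sin(M_PI*x)*sin(M_PI*y);
        double v_exact = sin(2*M_PI*x)*sin(2*M_PI*y);
        TS_ASSERT_DELTA(result_repl[2*i],   u_exact, 0.01);
        TS_ASSERT_DELTA(result_repl[2*i+1], v_exact, 0.01);
    }
    PetscTools::Destroy(result);
}
```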
Now run a test solving the parabolic-parabolic-parabolic PDE system.
Set up the boundary conditions. v and w are zero on the entire boundary,
and $\partial u/\partial n=1$ on the left-hand edge of the domain (x=0), with zero Neumann
conditions on the rest of the boundary.
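A sketch of the set-up; the non-zero Neumann condition is added boundary element by boundary element, on elements whose centroid lies on x=0 (unspecified Neumann BCs default to zero). The test continues in the blocks below.

```cpp
void TestThreeParabolicPdesSolver()
{
    TetrahedralMesh<2,2> mesh;
    mesh.ConstructRegularSlabMesh(0.05, 1.0, 1.0);

    // Dirichlet: v = w = 0 on the whole boundary
    BoundaryConditionsContainer<2,2,3> bcc;
    bcc.DefineZeroDirichletOnMeshBoundary(&mesh, 1); // variable v
    bcc.DefineZeroDirichletOnMeshBoundary(&mesh, 2); // variable w

    // Neumann: du/dn = 1 on the left-hand edge, for variable u
    ConstBoundaryCondition<2>* p_neumann_bc = new ConstBoundaryCondition<2>(1.0);
    for (TetrahedralMesh<2,2>::BoundaryElementIterator iter
             = mesh.GetBoundaryElementIteratorBegin();
         iter != mesh.GetBoundaryElementIteratorEnd();
         ++iter)
    {
        if (fabs((*iter)->CalculateCentroid()[0]) < 1e-6)
        {
            bcc.AddNeumannBoundaryCondition(*iter, p_neumann_bc, 0); // variable u
        }
    }
```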
Use our solver
The interface is exactly the same as the SimpleLinearParabolicSolver.
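Continuing the test (timestep, end time and zero initial condition are illustrative choices):

```cpp
    ThreeParabolicPdesSolver solver(&mesh, &bcc);

    // as with SimpleLinearParabolicSolver: timestep, time range and initial condition
    solver.SetTimeStep(0.01);
    solver.SetTimes(0.0, 1.0);

    Vec initial_condition = PetscTools::CreateAndSetVec(3*mesh.GetNumNodes(), 0.0);
    solver.SetInitialCondition(initial_condition);
```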
For this test we show how to output results to file for multiple sampling times. We start by
specifying an output directory and filename prefix for our results file:
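For example (the directory name here is made up):

```cpp
    solver.SetOutputDirectoryAndPrefix("ThreeVarCoupledProblem", "results");
```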
When an output directory has been specified, the solver writes output in HDF5 format. To
convert this to another output format, we call the relevant method. Here, we convert
the output to plain txt files. We also say how often to write the data, telling the
solver to output results to file every tenth timestep. The solver will create
one file for each variable and for each printed time, so for example, the file
results_Variable_0_10 contains the results for $u$, over all nodes, at the 11th printed time.
Have a look in the output directory after running the test. (Note: the HDF5 data can also be
converted to VTK or cmgui formats).
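That is, something like:

```cpp
    solver.SetOutputToTxt(true);            // convert the HDF5 output to plain txt files
    solver.SetPrintingTimestepMultiple(10); // write results every 10th timestep
```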
We are now ready to solve the system.
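The solve itself is a single call:

```cpp
    Vec result = solver.Solve();
```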
The plain txt output can be loaded into matlab for easy visualisation. For this we
also need the mesh data (at the very least, the nodal locations), so we also write out
the mesh.
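For example, using Chaste's TrianglesMeshWriter (included above):

```cpp
    TrianglesMeshWriter<2,2> mesh_writer("ThreeVarCoupledProblem", "mesh", false /*keep directory*/);
    mesh_writer.WriteFilesUsingMesh(mesh);
```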
Note that we need to destroy the initial condition vector as well as the solution.
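That is, to finish the test:

```cpp
    PetscTools::Destroy(initial_condition);
    PetscTools::Destroy(result);
}
```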
Visualisation: To visualise in matlab/octave, you can load the node file,
and then the data files. However, the node file needs to be edited to remove any
comment lines (lines starting with #) and the header line (which says how
many nodes there are). (The little matlab script anim/matlab/read_chaste_node_file.m
can be used to do this, though it is not claimed to be very robust). As an
example matlab visualisation script, the following, if run from the output
directory, plots w: