In the previous tutorial we showed how a PDE solver could be written for the
‘simple’ case in which the FEM discretisation leads to a linear system Ax=b where
both A and b are ‘assembled’. In this tutorial, we consider the more general case,
and show how to write assembler classes which assemble one particular matrix or vector,
and how to write solver classes which use assemblers to create and solve the FEM
linear system.
We will take as the test problem the heat equation, $u_t = u_{xx}$, with mixed boundary
conditions: Dirichlet BCs $u = u^*$ on $\Gamma_1$ and Neumann BCs $\partial u/\partial n = g$ on $\Gamma_2$.
We write a solver which uses an explicit time-discretisation (as opposed to the implicit
discretisations used throughout the rest of the code). The FEM linear system that needs to be set up is
$$M U^{n+1} = (M + dt\,K)\, U^{n} + c,$$
where M is the mass matrix, K the stiffness matrix, and $U^{n}$ the vector of nodal
values of $u$ at timestep n. c is the surface integral term coming from the Neumann BCs,
i.e. $c_i = \int_{\Gamma_2} g \phi_i \,dS$. (This can be compared with an
implicit time-discretisation, for which we solve $(M - dt\,K)\, U^{n+1} = M U^{n} + c$.)
Let us call M + dt*K the ‘RHS matrix’. We will write a solver, inheriting from
AbstractDynamicLinearPdeSolver, which is going to use three assemblers:
(i) an assembler of the mass matrix (already written);
(ii) an assembler of the RHS matrix (we have to write this ourselves); and
(iii) an assembler of the surface term, c (already written).
Firstly, include AbstractFeVolumeIntegralAssembler, from which the assembler we write will inherit,
and AbstractDynamicLinearPdeSolver, from which the solver we write will inherit.
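In code, the includes might look like the following sketch (header names mirror the class names, as is the Chaste convention):

```cpp
#include "AbstractFeVolumeIntegralAssembler.hpp"
#include "AbstractDynamicLinearPdeSolver.hpp"
```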
We need to write an assembler for setting up the matrix M + dt K.
Any new assembler should inherit from AbstractFeVolumeIntegralAssembler, which deals with looping over
elements, looping over quadrature points, etc. Concrete classes need to provide the integrand for the matrix
or vector being assembled (exactly as in the previous tutorials). However, in general, the assembler
class can be used to assemble a matrix OR a vector OR both. The class we write here needs to assemble
a matrix but not a vector. Note that the parent class AbstractFeVolumeIntegralAssembler has two booleans
in the template list (as well as the dimension template parameters as normal) - these booleans say
whether this class will be assembling a vector or a matrix (or both).
Even when a class isn’t being written for fully general dimensions, it is sometimes a good idea
to define the following constants, and then use ELEMENT_DIM etc below, as it can make the code a
bit easier to understand.
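Putting the above together, the assembler declaration might look like the following sketch; the exact template-parameter list (in particular the trailing interpolation-level parameter) is an assumption and should be checked against your version of the library:

```cpp
template<unsigned DIM>
class RhsMatrixAssembler
    : public AbstractFeVolumeIntegralAssembler<DIM, DIM, 1, // element dim, space dim, problem dim
                                               false,       // won't assemble a vector
                                               true,        // will assemble a matrix
                                               NORMAL>      // interpolation level (assumed default)
{
private:
    // Convenience constants, as discussed above
    static const unsigned ELEMENT_DIM = DIM;
    static const unsigned SPACE_DIM = DIM;
    static const unsigned PROBLEM_DIM = 1;
    // ... ComputeMatrixTerm() and the constructor, given below ...
};
```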
Since we are assembling a matrix, we need to provide a ComputeMatrixTerm() method, which returns the
elemental contribution to the RHS matrix. Note that ELEMENT_DIM+1 is the number of
nodes in the element (= the number of basis functions).
(If we were (also) assembling a vector, we would also have to provide a ComputeVectorTerm() method, which is
very similar).
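A sketch of ComputeMatrixTerm(); the argument list follows the pattern of the previous tutorials (basis functions, their gradients, position, interpolated solution and its gradient, and the element), and the argument names are illustrative:

```cpp
c_matrix<double, PROBLEM_DIM*(ELEMENT_DIM+1), PROBLEM_DIM*(ELEMENT_DIM+1)> ComputeMatrixTerm(
        c_vector<double, ELEMENT_DIM+1>& rPhi,
        c_matrix<double, SPACE_DIM, ELEMENT_DIM+1>& rGradPhi,
        ChastePoint<SPACE_DIM>& rX,
        c_vector<double, PROBLEM_DIM>& rU,
        c_matrix<double, PROBLEM_DIM, SPACE_DIM>& rGradU,
        Element<ELEMENT_DIM, SPACE_DIM>* pElement)
{
    // Elemental contribution to M + dt*K: phi_i*phi_j for the mass term, and
    // -grad(phi_i).grad(phi_j) for the stiffness term (K is the discretised
    // Laplacian, hence the minus sign)
    return outer_prod(rPhi, rPhi)
           - PdeSimulationTime::GetPdeTimeStep()*prod(trans(rGradPhi), rGradPhi);
}
```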
Now write the constructor.
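The constructor just forwards the mesh to the parent class (a sketch):

```cpp
RhsMatrixAssembler(AbstractTetrahedralMesh<ELEMENT_DIM, SPACE_DIM>* pMesh)
    : AbstractFeVolumeIntegralAssembler<ELEMENT_DIM, SPACE_DIM, PROBLEM_DIM, false, true, NORMAL>(pMesh)
{
}
```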
That’s the assembler written. The following solver class will show how to use it.
The parent class here is AbstractDynamicLinearPdeSolver, which contains a linear system
(this->mpLinearSystem), and will deal with allocating memory and solving the linear system.
The concrete class needs to implement a SetupLinearSystem() method which completely sets
up the linear system. In this case, it needs to set the LHS matrix in the linear system to
be M, and set the RHS vector to be rhs_matrix * current_soln.
The constructor will take in a mesh and a BoundaryConditionsContainer (BCC); the latter will be stored as a member variable.
Declare a matrix for the RHS matrix
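A sketch of the resulting class outline; the solver name ExplicitHeatEquationSolver is simply the name used for illustration throughout these sketches:

```cpp
template<unsigned DIM>
class ExplicitHeatEquationSolver : public AbstractDynamicLinearPdeSolver<DIM, DIM, 1>
{
private:
    // The BCC passed into the constructor
    BoundaryConditionsContainer<DIM, DIM, 1>* mpBoundaryConditions;
    // The 'RHS matrix', M + dt*K
    Mat mRhsMatrix;
    // ... constructor, destructor and SetupLinearSystem(), given below ...
};
```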
This is the main method which needs to be implemented. It takes in the current solution, and a
boolean saying whether the matrix (ie $A$ in $Ax=b$) is being computed or not.
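In this sketch the method is declared as follows; its body is built up step by step below:

```cpp
void SetupLinearSystem(Vec currentSolution, bool computeMatrix);
```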
This is how to use assemblers to set up matrices. We declare a mass matrix assembler,
pass it the LHS matrix of the linear system, and tell it to assemble. We also declare
one of our purpose-built RhsMatrixAssemblers, pass it the matrix mRhsMatrix, and
tell it to assemble.
Important note: if any of the assemblers will require the current solution (ie solution
at the current timestep), this needs to be passed to the assembler, as in the commented
line below.
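A sketch of this part of SetupLinearSystem(), assuming the assembler interface described above (SetMatrixToAssemble() and Assemble()); the finalisation calls at the end are an assumption about the Chaste/PETSc utility API:

```cpp
// Both matrices are constant in time, so they only need assembling when
// computeMatrix is true
if (computeMatrix)
{
    MassMatrixAssembler<DIM, DIM> mass_matrix_assembler(this->mpMesh);
    mass_matrix_assembler.SetMatrixToAssemble(this->mpLinearSystem->rGetLhsMatrix());
    mass_matrix_assembler.Assemble();

    RhsMatrixAssembler<DIM> rhs_matrix_assembler(this->mpMesh);
    rhs_matrix_assembler.SetMatrixToAssemble(mRhsMatrix);
    //rhs_matrix_assembler.SetCurrentSolution(currentSolution); // only needed if the integrand used u
    rhs_matrix_assembler.Assemble();

    this->mpLinearSystem->FinaliseLhsMatrix(); // PETSc assembly of M
    PetscMatTools::Finalise(mRhsMatrix);       // PETSc assembly of M + dt*K
}
```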
Use the RHS matrix to set up the RHS vector, ie set $b = (M + dt\,K)\,U^n$.
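This is a single PETSc matrix-vector product:

```cpp
// b = (M + dt*K) U^n
MatMult(mRhsMatrix, currentSolution, this->mpLinearSystem->rGetRhsVector());
```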
The third assembler we use is the NaturalNeumannSurfaceTermAssembler, which assembles
the vector c defined above, using the Neumann BCs stored in the BoundaryConditionsContainer
that was passed into the constructor.
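A sketch; the boolean flag asks the assembler to add to the existing RHS vector rather than zeroing it first:

```cpp
NaturalNeumannSurfaceTermAssembler<DIM, DIM, 1> surface_term_assembler(this->mpMesh, mpBoundaryConditions);
// false => don't zero the vector first, so c is added to (M + dt*K) U^n
surface_term_assembler.SetVectorToAssemble(this->mpLinearSystem->rGetRhsVector(), false);
surface_term_assembler.Assemble();
```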
Some necessary PETSc communication before applying Dirichlet BCs.
Apply the Dirichlet BCs from the BCC to the linear system.
Some necessary PETSc communication to finish
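A sketch of these three steps; FinaliseRhsVector()/FinaliseLhsMatrix() wrap the usual PETSc assembly calls, and ApplyDirichletToLinearProblem() is the BCC method assumed here for applying the Dirichlet conditions:

```cpp
this->mpLinearSystem->FinaliseRhsVector();  // PETSc communication before applying Dirichlet BCs

// Apply the Dirichlet BCs; the second argument says whether to modify the matrix too
mpBoundaryConditions->ApplyDirichletToLinearProblem(*(this->mpLinearSystem), computeMatrix);

this->mpLinearSystem->FinaliseRhsVector();  // PETSc communication to finish
this->mpLinearSystem->FinaliseLhsMatrix();
```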
The constructor needs to call the parent constructor, save the BCC, ‘say that the (LHS) matrix is constant
in time’ (so it is only computed once), and allocate memory for the RHS matrix.
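A sketch of the constructor; PetscTools::SetupMat, its arguments, and the mesh connectivity method used for preallocation are assumptions about the utility API and should be checked against your Chaste version:

```cpp
ExplicitHeatEquationSolver(TetrahedralMesh<DIM, DIM>* pMesh,
                           BoundaryConditionsContainer<DIM, DIM, 1>* pBoundaryConditions)
    : AbstractDynamicLinearPdeSolver<DIM, DIM, 1>(pMesh),
      mpBoundaryConditions(pBoundaryConditions)
{
    this->mMatrixIsConstant = true; // the LHS matrix (M) is only computed once

    // Allocate the RHS matrix with the same size/distribution as the system matrix
    // (PetscTools::SetupMat and its arguments are an assumption)
    PetscTools::SetupMat(mRhsMatrix, pMesh->GetNumNodes(), pMesh->GetNumNodes(),
                         pMesh->CalculateMaximumNodeConnectivityPerProcess());
}
```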
Destructor
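And the matching destructor:

```cpp
~ExplicitHeatEquationSolver()
{
    PetscTools::Destroy(mRhsMatrix); // free the PETSc matrix allocated in the constructor
}
```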
That’s all that needs to be done to write your own solver using the solver hierarchy.
The following test uses the new solver. Since the interface is exactly the same as the
other solvers, except for not taking in a PDE (the fact that it solves a parameterless
heat equation is hardcoded into the solver), all of the below should be recognisable.
Note however the tiny timestep - this is needed for stability as this is an explicit scheme.
Also, to compare with the implicit solver, comment out the appropriate lines below. Note that
the implicit solver may seem quite slow in comparison - this is because its linear system is
much harder to solve (the linear system is Ax=b, where A=M for the explicit solver but A=M-dt*K
for the implicit solver) - but remember that the implicit solver can use much larger timesteps.
The interface is exactly the same as the SimpleLinearParabolicSolver.
We are now ready to solve the system. We check with TS_ASSERT_DELTA() that our new solver gets the same result as the old one.
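A sketch of such a test, with illustrative mesh sizes, times, and tolerances; the commented-out assertion shows the intent, but the expected values are placeholders, not real results:

```cpp
void TestExplicitSolver()
{
    // Coarse mesh of the unit square
    TetrahedralMesh<2,2> mesh;
    mesh.ConstructRegularSlabMesh(0.05, 1.0, 1.0);

    // u = 0 on the whole boundary (so the Neumann term c vanishes in this test)
    BoundaryConditionsContainer<2,2,1> bcc;
    bcc.DefineZeroDirichletOnMeshBoundary(&mesh);

    // Same interface as SimpleLinearParabolicSolver, minus the PDE argument
    ExplicitHeatEquationSolver<2> solver(&mesh, &bcc);
    solver.SetTimes(0.0, 0.1);
    solver.SetTimeStep(0.0001); // tiny timestep, needed for stability of the explicit scheme

    Vec initial_condition = PetscTools::CreateAndSetVec(mesh.GetNumNodes(), 1.0);
    solver.SetInitialCondition(initial_condition);

    Vec result = solver.Solve();
    ReplicatableVector result_repl(result);

    // Compare against the implicit solver's solution at the same nodes, e.g.
    //TS_ASSERT_DELTA(result_repl[i], implicit_solver_solution[i], 1e-3);

    PetscTools::Destroy(initial_condition);
    PetscTools::Destroy(result);
}
```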