Neural closure models
These features are experimental and require cloning IncompressibleNavierStokes from GitHub:

```sh
git clone https://github.com/agdestein/IncompressibleNavierStokes.jl
cd IncompressibleNavierStokes/lib/NeuralClosure
```
For large eddy simulation (LES), a closure model is required. With IncompressibleNavierStokes, a neural closure model can be trained on filtered direct numerical simulation (DNS) data. The discrete DNS equations are given by
\[\begin{split} M u & = 0, \\ \frac{\mathrm{d} u}{\mathrm{d} t} & = F(u) - G p. \end{split}\]
Applying a spatial filter $\Phi$, the extracted large scale components $\bar{u} = \Phi u$ are governed by the equation
\[\begin{split} M \bar{u} & = 0, \\ \frac{\mathrm{d} \bar{u}}{\mathrm{d} t} & = F(\bar{u}) + c - G \bar{p}, \end{split}\]
where the discretizations $M$, $F$, and $G$ are adapted to the size of their inputs and $c = \overline{F(u)} - F(\bar{u})$ is the commutator error. Here we assumed that $M$ and $\Phi$ commute, which is the case for face-averaging filters. Replacing $c$ with a parameterized closure model $m(\bar{u}, \theta) \approx c$ gives the LES equations for the approximate large scale velocity $\bar{v} \approx \bar{u}$:
\[\begin{split} M \bar{v} & = 0, \\ \frac{\mathrm{d} \bar{v}}{\mathrm{d} t} & = F(\bar{v}) + m(\bar{v}, \theta) - G \bar{q}. \end{split}\]
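To see where $c$ comes from, apply $\Phi$ to the DNS momentum equation. Assuming the filter is linear and time-independent (so it commutes with the time derivative) and that the filtered pressure gradient can be written as $G \bar{p}$, adding and subtracting $F(\bar{u})$ gives

\[\frac{\mathrm{d} \bar{u}}{\mathrm{d} t} = \overline{F(u)} - G \bar{p} = F(\bar{u}) + \underbrace{\overline{F(u)} - F(\bar{u})}_{= c} - G \bar{p}.\]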
NeuralClosure module
IncompressibleNavierStokes provides a NeuralClosure module.
NeuralClosure.NeuralClosure
— Module
Neural closure modelling tools.
The following filters are available:
NeuralClosure.FaceAverage
— Type
(::FaceAverage)(v, u, setup_les)
Average fine grid `u` over coarse volume face. Put result in `v`.
NeuralClosure.VolumeAverage
— Type
(::VolumeAverage)(v, u, setup_les, comp)
Average fine grid `u` over coarse volume. Put result in `v`.
NeuralClosure.reconstruct!
— Function
reconstruct!(u, v, setup_dns, setup_les)
Reconstruct fine grid field `u` from coarse grid field `v`. Put result in `u`.
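As a sketch of how the filter objects are called (variable names here are illustrative: `setup_dns` and `setup_les` would come from the package's setup utilities, `u` is a fine-grid velocity field, and `v` a preallocated coarse-grid field):

```julia
using NeuralClosure

# Apply the face-averaging filter Φ to the DNS field `u`, writing the
# filtered field ū into the preallocated coarse-grid array `v`.
Φ = FaceAverage()
Φ(v, u, setup_les)

# `reconstruct!` goes the other way, filling a fine-grid field from `v`.
reconstruct!(u, v, setup_dns, setup_les)
```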
The following functions create data:
NeuralClosure.filtersaver
— Function
filtersaver(dns, les, filters, compression, psolver_dns, psolver_les; nupdate = 1)
Save filtered DNS data.
NeuralClosure.create_les_data
— Function
create_les_data(;
    D = 2,
    Re = 2e3,
    lims = ntuple(α -> (typeof(Re)(0), typeof(Re)(1)), D),
    nles = [ntuple(α -> 64, D)],
    ndns = ntuple(α -> 256, D),
    filters = (FaceAverage(),),
    tburn = typeof(Re)(0.1),
    tsim = typeof(Re)(0.1),
    Δt = typeof(Re)(1e-4),
    create_psolver = psolver_spectral,
    savefreq = 1,
    ArrayType = Array,
    icfunc = (setup, psolver) -> random_field(setup, typeof(Re)(0); psolver),
    rng,
    kwargs...,
)
Create filtered DNS data.
NeuralClosure.create_io_arrays
— Function
create_io_arrays(data, setups)
Create $(\bar{u}, c)$ pairs for training.
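A typical data-generation call might look as follows (a sketch using the documented keywords; the `setups` passed to `create_io_arrays` are assumed to be the LES setups matching `nles`):

```julia
using NeuralClosure
using Random

rng = Xoshiro(123)

# 2D filtered DNS data: a 256² DNS grid filtered onto one 64² LES grid
# with face averaging (values mirror the documented defaults).
data = create_les_data(;
    D = 2,
    Re = 2e3,
    ndns = (256, 256),
    nles = [(64, 64)],
    filters = (FaceAverage(),),
    tsim = 0.1,
    rng,
)

# Flatten the trajectories into (ū, c) pairs, given the LES setups:
# io = create_io_arrays(data, setups)
```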
Training
To improve the model parameters, we exploit exact filtered DNS data $\bar{u}$ and exact commutator errors $c$ obtained through DNS. The model is trained by minimizing the a priori loss function
\[L^\text{prior}(\theta) = \| m(\bar{u}, \theta) - c \|^2,\]
or the a posteriori loss function
\[L^\text{post}(\theta) = \| \bar{v}_\theta - \bar{u} \|^2,\]
where $\bar{v}_\theta$ is the solution to the LES equation for the given parameters $\theta$. The prior loss is easy to evaluate and easy to differentiate, as it does not involve solving the ODE. However, minimizing $L^\text{prior}$ does not take into account the effect of the prediction error on the LES solution error. The posterior loss does, but it has a longer computational chain, since evaluating it requires solving the LES ODE.
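Schematically, the two losses differ only in what is compared; a minimal sketch with a generic closure `m` and a hypothetical `rollout(θ)` standing in for solving the LES ODE (which `create_loss_post` below constructs for you):

```julia
# A-priori: compare the closure prediction to the exact commutator error.
# Cheap to evaluate and differentiate; no ODE solve involved.
Lprior(θ, ū, c, m) = sum(abs2, m(ū, θ) .- c)

# A-posteriori: unroll the LES model and compare the resulting trajectory
# to the filtered DNS trajectory. More expensive, but measures the effect
# of the closure on the actual LES solution.
Lpost(θ, ū, rollout) = sum(abs2, rollout(θ) .- ū)
```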
NeuralClosure.create_dataloader_prior
— Function
create_dataloader_prior(data; batchsize = 50, device = identity, rng)
Create dataloader that uses a batch of `batchsize` random samples from `data` at each evaluation. The batch is moved to `device`.
NeuralClosure.create_dataloader_post
— Function
create_dataloader_post(trajectories; nunroll = 10, device = identity, rng)
Create trajectory dataloader.
NeuralClosure.train
— Function
train(
    dataloaders,
    loss,
    opt,
    θ;
    niter = 100,
    ncallback = 1,
    callback = (i, θ) -> println("Iteration $i of $niter"),
)
Update parameters `θ` to minimize `loss(dataloader(), θ)` using the optimiser `opt` for `niter` iterations.
Return a new named tuple `(; opt, θ, callbackstate)` with updated state and parameters.
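Putting the pieces together for a-priori training (a sketch; `io` is the output of `create_io_arrays`, `closure` and `θ₀` come from `cnn` or `fno` below, and the Optimisers.jl state for `opt` is an assumption about the expected optimiser interface):

```julia
using NeuralClosure
using Optimisers
using Random

rng = Xoshiro(0)

dataloader = create_dataloader_prior(io; batchsize = 50, rng)
loss = create_loss_prior(mean_squared_error, closure)
opt = Optimisers.setup(Adam(1.0f-3), θ₀)  # assumed optimiser interface

(; opt, θ) = train([dataloader], loss, opt, θ₀; niter = 1000)
```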
NeuralClosure.create_loss_prior
— Function
create_loss_prior(loss, f)
Wrap loss function `loss(batch, θ)`. The function `loss` should take inputs like `loss(f, x, y, θ)`.
NeuralClosure.create_relerr_prior
— Function
create_relerr_prior(f, x, y)
Create a-priori relative error.
NeuralClosure.mean_squared_error
— Function
mean_squared_error(f, x, y, θ; normalize = y -> sum(abs2, y), λ = sqrt(eps(eltype(x))))
Compute MSE between `f(x, θ)` and `y`. The MSE is further divided by `normalize(y)`.
NeuralClosure.create_loss_post
— Function
create_loss_post(;
    setup,
    method = RKMethods.RK44(; T = eltype(setup.grid.x[1])),
    psolver,
    closure,
    nupdate = 1,
    projectorder = :last,
)
Create a-posteriori loss function.
NeuralClosure.create_relerr_post
— Function
create_relerr_post(;
    data,
    setup,
    method = RKMethods.RK44(; T = eltype(setup.grid.x[1])),
    psolver,
    closure_model,
    nupdate = 1,
)
Create a-posteriori relative error.
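The a-posteriori analogue replaces the batch loss with an unrolled trajectory loss (a sketch; `trajectories`, `data`, `setup`, and `psolver` are assumed to come from the data-generation step, and `opt`, `θ`, `closure` from the a-priori example above):

```julia
# `trajectories` are filtered DNS trajectories (e.g. from create_les_data).
dataloader_post = create_dataloader_post(trajectories; nunroll = 10, rng)
loss_post = create_loss_post(; setup, psolver, closure, nupdate = 1)

(; opt, θ) = train([dataloader_post], loss_post, opt, θ; niter = 100)

# Relative a-posteriori error of the trained parameters (assuming the
# returned `relerr` is a function of θ).
relerr = create_relerr_post(; data, setup, psolver, closure_model = closure)
relerr(θ)
```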
NeuralClosure.create_callback
— Function
create_callback(
    f,
    x,
    y;
    state = Point2f[],
    display_each_iteration = false,
)
Create convergence plot for relative error between `f(x, θ)` and `y`. At each callback, the plot is updated and the current error is printed.
If `state` is nonempty, it also plots the previous convergence history.
If not using an interactive GLMakie window, set `display_each_iteration` to `true`.
Neural architectures
We provide two neural architectures: a convolutional neural network (CNN) and a Fourier neural operator (FNO).
NeuralClosure.wrappedclosure
— Function
wrappedclosure(m, setup)
Wrap closure model and parameters so that it can be used in the solver.
NeuralClosure.create_closure
— Function
create_closure(layers...; rng)
Create neural closure model from layers.
NeuralClosure.create_tensorclosure
— Function
create_tensorclosure(layers...; setup, rng)
Create tensor basis closure.
NeuralClosure.collocate
— Function
collocate(u)
Interpolate velocity components to volume centers.
NeuralClosure.decollocate
— Function
decollocate(u)
Interpolate closure force from volume centers to volume faces.
NeuralClosure.cnn
— Function
cnn(;
    setup,
    radii,
    channels,
    activations,
    use_bias,
    channel_augmenter = identity,
    rng = Random.default_rng(),
)
Create CNN closure model. Return a tuple `(closure, θ)` where `θ` are the initial parameters and `closure(u, θ)` predicts the commutator error.
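For example, a small CNN closure might be built like this (a sketch; layer widths and radii are illustrative, and `setup` is the LES setup for the grid the closure acts on):

```julia
using NeuralClosure
using Random

closure, θ₀ = cnn(;
    setup,
    radii = [2, 2, 2],                    # kernel radius per convolution layer
    channels = [20, 20, 2],               # last layer outputs D = 2 components
    activations = [tanh, tanh, identity], # linear output layer
    use_bias = [true, true, false],
    rng = Xoshiro(0),
)
```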
NeuralClosure.fno
— Function
fno(; setup, kmax, c, σ, ψ, rng = Random.default_rng(), kwargs...)
Create FNO closure model. Return a tuple `(closure, θ)` where `θ` are the initial parameters and `closure(V, θ)` predicts the commutator error.
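Similarly, an FNO closure (a sketch; based on the signature above we assume `kmax`, `c`, and `σ` are per-layer, with `ψ` the activation of the final pointwise layer):

```julia
using NeuralClosure
using Random

closure, θ₀ = fno(;
    setup,
    kmax = [8, 8, 8],        # cut-off wavenumber per Fourier layer
    c = [32, 32, 32],        # channels per Fourier layer
    σ = [tanh, tanh, tanh],  # activation per Fourier layer
    ψ = tanh,                # activation of the final pointwise layer
    rng = Xoshiro(0),
)
```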
NeuralClosure.FourierLayer
— Type
FourierLayer(dimension, kmax, cin => cout; σ = identity, init_weight = glorot_uniform)
Fourier layer operating on uniformly discretized functions.
Some important sizes:
- `dimension`: Spatial dimension, e.g. `Dimension(2)` or `Dimension(3)`.
- `(nx..., cin, nsample)`: Input size
- `(nx..., cout, nsample)`: Output size
- `nx = fill(n, dimension())`: Number of points in each spatial dimension
- `n ≥ kmax`: Same number of points in each spatial dimension, must be larger than the cut-off wavenumber
- `kmax`: Cut-off wavenumber
- `nsample`: Number of input samples (treated independently)