Neural closure models

`NeuralClosure`

These features are experimental, and require cloning IncompressibleNavierStokes from GitHub:

git clone https://github.com/agdestein/IncompressibleNavierStokes.jl
cd IncompressibleNavierStokes/lib/NeuralClosure
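
Once cloned, the subpackage can be used from its own project environment. The following is a minimal sketch of one possible workflow using only standard Pkg commands (the path assumes the cd above):

using Pkg

# Activate and instantiate the NeuralClosure environment (run from lib/NeuralClosure).
Pkg.activate(".")
Pkg.instantiate()

using NeuralClosure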

For large eddy simulation (LES), a closure model is required. With IncompressibleNavierStokes, a neural closure model can be trained on filtered DNS data. The discrete DNS equations are given by

\[\begin{split} M u & = 0, \\ \frac{\mathrm{d} u}{\mathrm{d} t} & = F(u) - G p. \end{split}\]

Applying a spatial filter $\Phi$, the extracted large scale components $\bar{u} = \Phi u$ are governed by the equation

\[\begin{split} M \bar{u} & = 0, \\ \frac{\mathrm{d} \bar{u}}{\mathrm{d} t} & = F(\bar{u}) + c - G \bar{p}, \end{split}\]

where the discretizations $M$, $F$, and $G$ are adapted to the size of their inputs and $c = \overline{F(u)} - F(\bar{u})$ is a commutator error. Here we assume that $M$ and $\Phi$ commute, which is the case for face-averaging filters. Replacing $c$ with a parameterized closure model $m(\bar{u}, \theta) \approx c$ gives the LES equations for the approximate large scale velocity $\bar{v} \approx \bar{u}$:

\[\begin{split} M \bar{v} & = 0, \\ \frac{\mathrm{d} \bar{v}}{\mathrm{d} t} & = F(\bar{v}) + m(\bar{v}, \theta) - G \bar{q}. \end{split}\]

NeuralClosure module

IncompressibleNavierStokes provides a NeuralClosure module.

The following filters are available:

  • FaceAverage(): average the DNS velocity over the faces of the coarse-grid finite volumes
  • VolumeAverage(): average the DNS velocity over the coarse-grid finite volumes

The following functions create data.

NeuralClosure.create_les_data (Function)
create_les_data(
    D = 2,
    Re = 2e3,
    lims = ntuple(α -> (typeof(Re)(0), typeof(Re)(1)), D),
    nles = [ntuple(α -> 64, D)],
    ndns = ntuple(α -> 256, D),
    filters = (FaceAverage(),),
    tburn = typeof(Re)(0.1),
    tsim = typeof(Re)(0.1),
    Δt = typeof(Re)(1e-4),
    create_psolver = psolver_spectral,
    savefreq = 1,
    ArrayType = Array,
    icfunc = (setup, psolver) -> random_field(setup, typeof(Re)(0); psolver),
    rng,
    kwargs...,
)

Create filtered DNS data.
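
A minimal usage sketch, treating the listed arguments as keyword arguments and keeping the remaining defaults (the seed and grid sizes are arbitrary choices, not recommendations):

using NeuralClosure, Random

rng = Xoshiro(123)

# Filter a 256^2 DNS onto a 64^2 grid with the face-averaging filter.
data = create_les_data(;
    D = 2,
    Re = 2e3,
    nles = [(64, 64)],
    ndns = (256, 256),
    filters = (FaceAverage(),),
    rng,
)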


Training

To improve the model parameters, we exploit exact filtered DNS data $\bar{u}$ and exact commutator errors $c$ obtained through DNS. The model is trained by minimizing the a priori loss function

\[L^\text{prior}(\theta) = \| m(\bar{u}, \theta) - c \|^2,\]

or the a posteriori loss function

\[L^\text{post}(\theta) = \| \bar{v}_\theta - \bar{u} \|^2,\]

where $\bar{v}_\theta$ is the solution to the LES equation for the given parameters $\theta$. The prior loss is easy to evaluate and easy to differentiate, as it does not involve solving the ODE. However, minimizing $L^\text{prior}$ does not take into account the effect of the prediction error on the LES solution error. The posterior loss does, but has a longer computational chain involving solving the LES ODE.
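
As a concrete illustration (a sketch, not the package API), the a priori loss for a single sample pair (ū, c) can be written as a normalized squared error, assuming the fields are stored as plain arrays; this is essentially what mean_squared_error below computes:

# m: closure model, ubar: filtered DNS velocity, c: exact commutator error, θ: parameters
loss_prior(m, ubar, c, θ) = sum(abs2, m(ubar, θ) .- c) / sum(abs2, c)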

NeuralClosure.create_dataloader_prior (Function)
create_dataloader_prior(data; nuse = 50, device = identity, rng)

Create a dataloader that uses a batch of nuse random samples from data at each evaluation. The batch is moved to device.

NeuralClosure.train (Function)
train(
    dataloaders,
    loss,
    opt,
    θ;
    niter = 100,
    ncallback = 1,
    callback = (i, θ) -> println("Iteration $i of $niter"),
)

Update parameters θ to minimize loss(dataloader(), θ) using the optimiser opt for niter iterations.

Return a new named tuple (; opt, θ, callbackstate) with the updated state and parameters.
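
Putting these pieces together, a hypothetical a priori training loop could look as follows. create_dataloader_prior and mean_squared_error are documented on this page; the optimiser setup via Optimisers.jl, the name io_data for the filtered DNS samples, and the batch layout are assumptions:

using Optimisers, Random

rng = Xoshiro(0)
dataloader = create_dataloader_prior(io_data; nuse = 50, rng)   # io_data: filtered DNS samples (assumption)
loss(batch, θ) = mean_squared_error(closure, batch..., θ)       # batch assumed to be a (ū, c) pair
opt = Optimisers.setup(Adam(1.0f-3), θ)
(; opt, θ, callbackstate) = train([dataloader], loss, opt, θ; niter = 1000, ncallback = 10)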

NeuralClosure.mean_squared_error (Function)
mean_squared_error(f, x, y, θ; normalize = y -> sum(abs2, y), λ = sqrt(eps(eltype(x))))

Compute MSE between f(x, θ) and y.

The MSE is further divided by normalize(y).

NeuralClosure.create_loss_post (Function)
create_loss_post(;
    setup,
    method = RKMethods.RK44(; T = eltype(setup.grid.x[1])),
    psolver,
    closure,
    nupdate = 1,
    projectorder = :last,
)

Create an a posteriori loss function.
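
A hypothetical construction of the a posteriori loss; setup and psolver are assumed to come from an IncompressibleNavierStokes setup on the coarse grid, and closure from one of the architectures below:

loss_post = create_loss_post(;
    setup,
    psolver,
    closure,
    nupdate = 2,
)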

NeuralClosure.create_relerr_post (Function)
create_relerr_post(;
    data,
    setup,
    method = RKMethods.RK44(; T = eltype(setup.grid.x[1])),
    psolver,
    closure_model,
    nupdate = 1,
)

Create an a posteriori relative error function.
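
Correspondingly, a sketch of evaluating the a posteriori relative error of trained parameters on a reference trajectory (the variable names and the call relerr(θ) are assumptions):

relerr = create_relerr_post(;
    data = sample_trajectory,
    setup,
    psolver,
    closure_model = closure,
    nupdate = 2,
)
e = relerr(θ)   # assumed to return the relative error for the given parameters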

NeuralClosure.create_callback (Function)
create_callback(
    f,
    x,
    y;
    state = Point2f[],
    display_each_iteration = false,
)

Create a convergence plot for the relative error between f(x, θ) and y. At each callback, the plot is updated and the current error is printed.

If state is nonempty, the previous convergence history is also plotted.

If you are not using an interactive GLMakie window, set display_each_iteration to true.
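
For example, monitoring the prediction error on a held-out sample during training could look like this (ubar_val and c_val are hypothetical validation arrays); the resulting callback is presumably the kind expected by train's callback keyword:

# Plot convergence of the closure prediction against the exact commutator error.
callback = create_callback(closure, ubar_val, c_val; display_each_iteration = true)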


Neural architectures

We provide two neural architectures: a convolutional neural network (CNN) and a Fourier neural operator (FNO).

NeuralClosure.cnn (Function)
cnn(;
    setup,
    radii,
    channels,
    activations,
    use_bias,
    channel_augmenter = identity,
    rng = Random.default_rng(),
)

Create CNN closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(u, θ) predicts the commutator error.
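
A small 2D CNN might be constructed as follows; the layer sizes and activations are illustrative choices, not recommendations (the last layer maps back to the D velocity components):

using Random

closure, θ = cnn(;
    setup,
    radii = [2, 2, 2, 2],
    channels = [24, 24, 24, 2],
    activations = [tanh, tanh, tanh, identity],
    use_bias = [true, true, true, false],
    rng = Xoshiro(123),
)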

NeuralClosure.fno (Function)
fno(; setup, kmax, c, σ, ψ, rng = Random.default_rng(), kwargs...)

Create FNO closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(V, θ) predicts the commutator error.
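
Similarly, a hypothetical FNO with four Fourier layers; the cut-off wavenumbers, channel counts, and activations are illustrative, and the per-layer vector layout of the keywords is an assumption:

closure, θ = fno(;
    setup,
    kmax = [16, 16, 16, 16],
    c = [24, 24, 24, 24],
    σ = [tanh, tanh, tanh, tanh],
    ψ = tanh,
)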

NeuralClosure.FourierLayer (Type)
FourierLayer(dimension, kmax, cin => cout; σ = identity, init_weight = glorot_uniform)

Fourier layer operating on uniformly discretized functions.

Some important sizes:

  • dimension: Spatial dimension, e.g. Dimension(2) or Dimension(3).
  • (nx..., cin, nsample): Input size
  • (nx..., cout, nsample): Output size
  • nx = fill(n, dimension()): Number of points in each spatial dimension
  • n ≥ kmax: Same number of points in each spatial dimension, must be at least the cut-off wavenumber
  • kmax: Cut-off wavenumber
  • nsample: Number of input samples (treated independently)
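
For instance, a single layer acting on 2D fields with 8 input and 8 output channels and cut-off wavenumber 16 could be declared as follows (a sketch; the activation choice is arbitrary):

# Dimension comes from IncompressibleNavierStokes.
layer = FourierLayer(Dimension(2), 16, 8 => 8; σ = tanh)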