
Neural closure models


These features are experimental and require cloning IncompressibleNavierStokes from GitHub:

```sh
git clone https://github.com/agdestein/IncompressibleNavierStokes.jl
cd IncompressibleNavierStokes/lib/NeuralClosure
```
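
The NeuralClosure package can then be used from its own project environment. A minimal sketch, assuming the standard Pkg workflow:

```julia
# Minimal sketch, assuming the standard Pkg workflow. Run from the
# IncompressibleNavierStokes/lib/NeuralClosure directory cloned above.
using Pkg
Pkg.activate(".")   # activate the NeuralClosure project
Pkg.instantiate()   # install its dependencies
using NeuralClosure
```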

For large eddy simulation (LES), a closure model is required. With IncompressibleNavierStokes, a neural closure model can be trained on filtered DNS data. The discrete DNS equations are given by

$$Mu = 0, \qquad \frac{\mathrm{d}u}{\mathrm{d}t} = F(u) - Gp.$$

Applying a spatial filter $\Phi$, the extracted large-scale components $\bar{u} = \Phi u$ are governed by the equation

$$M\bar{u} = 0, \qquad \frac{\mathrm{d}\bar{u}}{\mathrm{d}t} = F(\bar{u}) + c - G\bar{p},$$

where the discretizations $M$, $F$, and $G$ are adapted to the size of their inputs and $c = \overline{F(u)} - F(\bar{u})$ is a commutator error. We here assumed that $M$ and $\Phi$ commute, which is the case for face-averaging filters. Replacing $c$ with a parameterized closure model $m(\bar{u}, \theta) \approx c$ gives the LES equations for the approximate large-scale velocity $\bar{v} \approx \bar{u}$:

$$M\bar{v} = 0, \qquad \frac{\mathrm{d}\bar{v}}{\mathrm{d}t} = F(\bar{v}) + m(\bar{v}, \theta) - G\bar{q}.$$
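
Conceptually, the closure enters the discrete right-hand side as an additive correction. A toy sketch (the operators below are illustrative stand-ins, not the package API; the pressure term is handled by projection in the actual solver):

```julia
# Toy sketch: the closure m adds a parameterized correction to the
# coarse-grid operator F. Both are hypothetical stand-ins here.
F(v) = -v .* abs.(v)          # stand-in coarse-grid operator
m(v, θ) = θ .* v              # stand-in parameterized closure
rhs(v, θ) = F(v) + m(v, θ)    # dv̄/dt = F(v̄) + m(v̄, θ) - Gq̄
v = randn(Float32, 8)
θ = fill(0.1f0, 8)
rhs(v, θ)
```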

NeuralClosure module

IncompressibleNavierStokes provides a NeuralClosure module.

# NeuralClosure.NeuralClosure (Module)

Neural closure modelling tools.

source


# NeuralClosure.collocate (Method)

```julia
collocate(u) -> Any
```

Interpolate velocity components to volume centers.

source


# NeuralClosure.create_closure (Method)

```julia
create_closure(layers...; rng)
```

Create neural closure model from layers.

source
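
A hedged usage sketch, assuming create_closure accepts Lux layers and returns a (closure, θ) pair like the cnn constructor below:

```julia
# Hedged sketch: compose a closure from Lux layers. The layer choice
# is illustrative; assumes a (closure, θ) return value.
using Lux, Random
rng = Xoshiro(0)
closure, θ = create_closure(
    Conv((3, 3), 2 => 8, tanh; pad = SamePad()),
    Conv((3, 3), 8 => 2; pad = SamePad());
    rng,
)
```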


# NeuralClosure.create_tensorclosure (Method)

```julia
create_tensorclosure(layers...; setup, rng)
```

Create tensor basis closure.

source


# NeuralClosure.decollocate (Method)

```julia
decollocate(u) -> Any
```

Interpolate closure force from volume centers to volume faces.

source


# NeuralClosure.wrappedclosure (Method)

```julia
wrappedclosure(
    m,
    setup
) -> NeuralClosure.var"#neuralclosure#1"
```

Wrap closure model and parameters so that it can be used in the solver.

source
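
A hedged usage sketch, where setup is an existing IncompressibleNavierStokes setup and closure comes from one of the constructors below:

```julia
# Hedged sketch: wrap the closure so the solver can evaluate it.
# `closure` and `setup` are assumed to exist from earlier steps.
neuralclosure = wrappedclosure(closure, setup)
```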


Filters

The following filters are available:

# NeuralClosure.AbstractFilter (Type)

```julia
abstract type AbstractFilter
```

Discrete DNS filter.

Subtypes `ConcreteFilter` should implement the in-place method:

```julia
(::ConcreteFilter)(v, u, setup_les, compression)
```

which filters the DNS field `u` and puts the result in the LES field `v`. The out-of-place method:

```julia
(::ConcreteFilter)(u, setup_les, compression)
```

then automatically becomes available.

Fields

source
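
A minimal sketch of a custom filter following this contract (the type and its body are illustrative only):

```julia
# Sketch of a custom filter obeying the contract described above.
# A real implementation would average the DNS field u onto the
# coarse grid of setup_les, using the compression factor.
struct MyFilter <: NeuralClosure.AbstractFilter end
function (::MyFilter)(v, u, setup_les, compression)
    # fill the LES field v in place from the DNS field u
    v
end
```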


# NeuralClosure.FaceAverage (Type)

```julia
struct FaceAverage <: NeuralClosure.AbstractFilter
```

Average fine grid velocity field over coarse volume face.

Fields

source


# NeuralClosure.VolumeAverage (Type)

```julia
struct VolumeAverage <: NeuralClosure.AbstractFilter
```

Average fine grid velocity field over coarse volume.

Fields

source
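
A hedged usage sketch of the out-of-place method from the AbstractFilter contract above:

```julia
# Hedged sketch: filter a DNS field out of place. `u`, `setup_les`,
# and the compression factor `compression` are assumed to exist.
Φ = FaceAverage()
v = Φ(u, setup_les, compression)
```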


# NeuralClosure.reconstruct! (Method)

```julia
reconstruct!(u, v, setup_dns, setup_les, comp) -> Any
```

Reconstruct DNS velocity u from LES velocity v.

source


# NeuralClosure.reconstruct (Method)

```julia
reconstruct(v, setup_dns, setup_les, comp) -> Any
```

Reconstruct DNS velocity field. See also reconstruct!.

source


Training

To improve the model parameters, we exploit exact filtered DNS data $\bar{u}$ and exact commutator errors $c$ obtained through DNS. The model is trained by minimizing the a priori loss function

$$L^\text{prior}(\theta) = \|m(\bar{u}, \theta) - c\|^2,$$

or the a posteriori loss function

$$L^\text{post}(\theta) = \|\bar{v}_\theta - \bar{u}\|^2,$$

where $\bar{v}_\theta$ is the solution to the LES equation for the given parameters $\theta$. The a priori loss is easy to evaluate and easy to differentiate, as it does not involve solving the ODE. However, minimizing $L^\text{prior}$ does not take into account the effect of the prediction error on the LES solution error. The a posteriori loss does, but has a longer computational chain, which involves solving the LES ODE.
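
In code, the a priori loss is a plain mismatch between the closure prediction and the stored commutator error. A minimal sketch (the relative normalization by the target norm is an assumption; see mean_squared_error below):

```julia
# Minimal sketch of the a priori loss; the division by the target
# norm (relative error) is an assumption, not the package default.
Lprior(m, ū, c, θ) = sum(abs2, m(ū, θ) - c) / sum(abs2, c)
```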

# NeuralClosure.create_callback (Method)

```julia
create_callback(
    err;
    θ,
    callbackstate,
    displayref,
    display_each_iteration,
    displayfig,
    filename
)
```

Create a convergence plot for the relative error between `f(x, θ)` and `y`. At each callback, the plot is updated and the current error is printed.

If `state` is nonempty, it also plots the previous convergence.

If not using an interactive GLMakie window, set `display_each_iteration` to `true`.

source


# NeuralClosure.create_dataloader_post (Method)

```julia
create_dataloader_post(trajectories; nunroll, device, rng)
```

Create trajectory dataloader.

source


# NeuralClosure.create_dataloader_prior (Method)

```julia
create_dataloader_prior(data; batchsize, device, rng)
```

Create dataloader that uses a batch of `batchsize` random samples from `data` at each evaluation. The batch is moved to `device`.

source
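
A hedged usage sketch, assuming io_arrays comes from create_io_arrays (see the data generation section below):

```julia
# Hedged sketch: create a dataloader over (ū, c) samples and draw one
# batch. `io_arrays` is assumed; `identity` keeps the batch on the CPU.
using Random
dataloader = create_dataloader_prior(
    io_arrays;
    batchsize = 50,
    device = identity,
    rng = Xoshiro(0),
)
batch = dataloader()   # one random batch, moved to `device`
```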


# NeuralClosure.create_loss_post (Method)

```julia
create_loss_post(;
    setup,
    method,
    psolver,
    closure,
    nupdate,
    projectorder
)
```

Create a-posteriori loss function.

source


# NeuralClosure.create_loss_prior (Method)

```julia
create_loss_prior(loss, f) -> NeuralClosure.var"#56#57"
```

Wrap loss function `loss(batch, θ)`.

The function `loss` should take inputs like `loss(f, x, y, θ)`.

source


# NeuralClosure.create_relerr_post (Method)

```julia
create_relerr_post(;
    data,
    setup,
    method,
    psolver,
    closure_model,
    nupdate,
    projectorder
)
```

Create a-posteriori relative error.

source


# NeuralClosure.create_relerr_prior (Method)

```julia
create_relerr_prior(f, x, y) -> NeuralClosure.var"#58#59"
```

Create a-priori error.

source


# NeuralClosure.create_relerr_symmetry_post (Method)

```julia
create_relerr_symmetry_post(;
    u,
    setup,
    method,
    psolver,
    Δt,
    nstep,
    g
)
```

Create a-posteriori symmetry error.

source


# NeuralClosure.create_relerr_symmetry_prior (Method)

```julia
create_relerr_symmetry_prior(; u, setup, g)
```

Create a-priori equivariance error.

source


# NeuralClosure.mean_squared_error (Method)

```julia
mean_squared_error(f, x, y, θ; normalize, λ) -> Any
```

Compute the mean squared error between `f(x, θ)` and `y`.

The MSE is further divided by `normalize(y)`.

source


# NeuralClosure.train (Method)

```julia
train(
    dataloaders,
    loss,
    optstate,
    θ;
    niter,
    ncallback,
    callback,
    callbackstate
) -> NamedTuple{(:optstate, :θ, :callbackstate), <:Tuple{Any, Any, Nothing}}
```

Update parameters `θ` to minimize `loss(dataloader(), θ)` using the optimiser state `optstate` for `niter` iterations.

Return a new named tuple `(; optstate, θ, callbackstate)` with updated state and parameters.

source
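
A hedged end-to-end sketch, assuming Optimisers.jl for the optimiser state and the dataloader, loss, and callback created above:

```julia
# Hedged sketch: optimize θ. The Optimisers.jl state and the vector
# of dataloaders are assumptions based on the signature above.
using Optimisers
optstate = Optimisers.setup(Adam(1.0f-3), θ)
(; optstate, θ, callbackstate) = train(
    [dataloader], loss, optstate, θ;
    niter = 1000,
    ncallback = 10,
    callback,
    callbackstate,
)
```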


Neural architectures

We provide the following neural architectures: a convolutional neural network (CNN), a group-convolutional neural network (G-CNN), and a Fourier neural operator (FNO).

# NeuralClosure.cnn (Method)

```julia
cnn(;
    setup,
    radii,
    channels,
    activations,
    use_bias,
    channel_augmenter,
    rng
)
```

Create CNN closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(u, θ) predicts the commutator error.

source
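
A hedged usage sketch for a 2D setup (the hyperparameters are illustrative, with the final channel count matching the two velocity components):

```julia
# Hedged sketch: a small 2D CNN closure. Radii, channels, and
# activations are illustrative; the final layer outputs 2 components.
using Random
closure, θ = cnn(;
    setup,
    radii = [2, 2, 2],
    channels = [8, 8, 2],
    activations = [tanh, tanh, identity],
    use_bias = [true, true, false],
    rng = Xoshiro(0),
)
```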


# NeuralClosure.GroupConv2D (Type)

```julia
struct GroupConv2D{C} <: LuxCore.AbstractLuxLayer
```

Group-equivariant convolutional layer with respect to the p4 group. The layer is equivariant to rotations and translations of the input vector field.

The kwargs are passed to the `Conv` layer.

The layer has three variants:

  • If `islifting`, it lifts a vector input `(u1, u2)` into a rotation-state vector `(v1, v2, v3, v4)`.

  • If `isprojecting`, it projects a rotation-state vector `(u1, u2, u3, u4)` into a vector `(v1, v2)`.

  • Otherwise, it cyclically transforms the rotation-state vector `(u1, u2, u3, u4)` into a new rotation-state vector `(v1, v2, v3, v4)`.

Fields

  • islifting

  • isprojecting

  • cin

  • cout

  • conv

source


# NeuralClosure.gcnn (Method)

```julia
gcnn(; setup, radii, channels, activations, use_bias, rng)
```

Create G-CNN closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(u, θ) predicts the commutator error.

source


# NeuralClosure.rot2 (Function)

```julia
rot2(u, r)
```

Rotate the field u by 90 degrees counter-clockwise r - 1 times.

source


# NeuralClosure.rot2stag (Method)

```julia
rot2stag(u, g) -> Tuple{Any, Any}
```

Rotate staggered grid velocity field. See also rot2.

source


# NeuralClosure.FourierLayer (Type)

```julia
struct FourierLayer{D, A, F} <: LuxCore.AbstractLuxLayer
```

Fourier layer operating on uniformly discretized functions.

Some important sizes:

  • dimension: Spatial dimension, e.g. Dimension(2) or Dimension(3).

  • (nx..., cin, nsample): Input size

  • (nx..., cout, nsample): Output size

  • nx = fill(n, dimension()): Number of points in each spatial dimension

  • n ≥ kmax: Same number of points in each spatial dimension, must be larger than cut-off wavenumber

  • kmax: Cut-off wavenumber

  • nsample: Number of input samples (treated independently)

Fields

  • dimension

  • kmax

  • cin

  • cout

  • σ

  • init_weight

source


# NeuralClosure.fno (Method)

```julia
fno(; setup, kmax, c, σ, ψ, rng, kwargs...)
```

Create FNO closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(V, θ) predicts the commutator error.

source
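
A hedged usage sketch (hyperparameters are illustrative; σ is assumed to hold the per-layer activations and ψ a final activation):

```julia
# Hedged sketch: a small FNO closure. kmax are cut-off wavenumbers
# and c channel counts per layer; σ and ψ are assumed to be layer
# and final activations. All values are illustrative.
using Lux, Random
closure, θ = fno(;
    setup,
    kmax = [8, 8, 8],
    c = [16, 16, 16],
    σ = [gelu, gelu, gelu],
    ψ = gelu,
    rng = Xoshiro(0),
)
```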


Data generation

# NeuralClosure.create_io_arrays (Method)

```julia
create_io_arrays(data, setups) -> Any
```

Create $(\bar{u}, c)$ pairs for training.

source


# NeuralClosure.create_les_data (Method)

```julia
create_les_data(;
    D,
    Re,
    lims,
    nles,
    ndns,
    filters,
    tburn,
    tsim,
    Δt,
    create_psolver,
    savefreq,
    ArrayType,
    icfunc,
    rng,
    kwargs...
)
```

Create filtered DNS data.

source
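
A hedged usage sketch generating a small 2D dataset (all numerical values are illustrative):

```julia
# Hedged sketch: filtered 2D DNS data at one LES resolution. All
# values are illustrative; remaining keywords keep their defaults.
using Random
data = create_les_data(;
    D = 2,                        # spatial dimension
    Re = 1.0f3,                   # Reynolds number
    lims = (0.0f0, 1.0f0),        # domain limits
    nles = [64],                  # LES resolution(s)
    ndns = 256,                   # DNS resolution
    filters = (FaceAverage(),),   # filter(s) to apply
    tburn = 0.1f0,                # burn-in time before saving
    tsim = 1.0f0,                 # simulation time
    Δt = 1.0f-3,                  # time step
    savefreq = 10,                # save every 10 steps
    ArrayType = Array,            # CPU arrays
    rng = Xoshiro(0),
)
```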


# NeuralClosure.filtersaver (Method)

```julia
filtersaver(
    dns,
    les,
    filters,
    compression,
    psolver_dns,
    psolver_les;
    nupdate
) -> NamedTuple{(:initialize, :finalize), <:Tuple{NeuralClosure.var"#102#106"{Int64}, NeuralClosure.var"#105#109"}}
```

Save filtered DNS data.

source