Neural closure models
These features are experimental, and require cloning IncompressibleNavierStokes from GitHub:

```sh
git clone https://github.com/agdestein/IncompressibleNavierStokes.jl
cd IncompressibleNavierStokes/lib/NeuralClosure
```

For large eddy simulation (LES), a closure model is required. With IncompressibleNavierStokes, a neural closure model can be trained on filtered DNS data. The discrete DNS equations are given by

```math
M u = 0, \quad \frac{\mathrm{d} u}{\mathrm{d} t} = F(u) - G p.
```

Applying a spatial filter Φ, the extracted large-scale components ū = Φu are governed by

```math
M \bar{u} = 0, \quad \frac{\mathrm{d} \bar{u}}{\mathrm{d} t} = \bar{F}(\bar{u}) + c - G \bar{p},
```

where the discretizations M, F, and G are adapted to the size of their inputs, and c = Φ F(u) - F̄(ū) is a commutator error. A neural closure model m(ū, θ) with parameters θ is trained to approximate c.
NeuralClosure module
IncompressibleNavierStokes provides a NeuralClosure module.
collocate(u) -> Any
Interpolate velocity components to volume centers.

create_closure(layers...; rng)
Create neural closure model from layers.

create_tensorclosure(layers...; setup, rng)
Create tensor basis closure.

decollocate(u) -> Any
Interpolate closure force from volume centers to volume faces.

wrappedclosure(m, setup) -> NeuralClosure.var"#neuralclosure#1"
Wrap closure model and parameters so that it can be used in the solver.
Filters
The following filters are available:
abstract type AbstractFilter
Discrete DNS filter.

Subtypes ConcreteFilter should implement the in-place method

(::ConcreteFilter)(v, u, setup_les, compression)

which filters the DNS field u and puts the result in the LES field v. The out-of-place method

(::ConcreteFilter)(u, setup_les, compression)

then becomes available automatically.
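The calling convention above can be sketched with a toy 1D box filter. `BoxFilter` and its averaging rule are illustrative assumptions, not part of NeuralClosure; the point is the pairing of an in-place callable with a derived out-of-place one.

```julia
# Hypothetical sketch of the AbstractFilter calling convention: `BoxFilter`
# is an illustrative subtype averaging `compression` consecutive fine-grid
# points per coarse point (1D, for brevity); it is not part of NeuralClosure.
abstract type AbstractFilter end
struct BoxFilter <: AbstractFilter end

# In-place method: filter the DNS field `u` into the LES field `v`.
function (Φ::BoxFilter)(v, u, setup_les, compression)
    for i in eachindex(v)
        fine = (i - 1) * compression + 1 : i * compression
        v[i] = sum(@view u[fine]) / compression
    end
    v
end

# Out-of-place method, derived from the in-place one.
function (Φ::BoxFilter)(u, setup_les, compression)
    v = similar(u, length(u) ÷ compression)
    Φ(v, u, setup_les, compression)
end

Φ = BoxFilter()
v = Φ(collect(1.0:8.0), nothing, 2)   # [1.5, 3.5, 5.5, 7.5]
```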
struct FaceAverage <: NeuralClosure.AbstractFilter
Average fine grid velocity field over coarse volume face.

struct VolumeAverage <: NeuralClosure.AbstractFilter
Average fine grid velocity field over coarse volume.
reconstruct!(u, v, setup_dns, setup_les, comp) -> Any
Reconstruct DNS velocity u from LES velocity v.

reconstruct(v, setup_dns, setup_les, comp) -> Any
Reconstruct DNS velocity field. See also reconstruct!.
Training
To improve the model parameters, we exploit exact filtered DNS data ū and exact commutator errors c obtained from DNS. The model can be trained by minimizing the a priori loss function

```math
L^\text{prior}(θ) = \| m(\bar{u}, θ) - c \|^2,
```

or the a posteriori loss function

```math
L^\text{post}(θ) = \| \bar{u}_θ - \bar{u} \|^2,
```

where ū_θ is the solution to the LES equations with the closure model m(·, θ) included.
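The difference between the two losses can be sketched with a toy scalar closure. All names here (`m`, `loss_prior`, `loss_post`, the forward-Euler unrolling) are illustrative assumptions, not the package API.

```julia
# Illustrative sketch (not the package API) of the two training losses for
# a toy scalar closure m(ū, θ) = θ .* ū; all names here are hypothetical.
m(ū, θ) = θ .* ū

# A priori: compare the predicted commutator error to the exact one.
loss_prior(θ, ū, c) = sum(abs2, m(ū, θ) .- c)

# A posteriori: unroll `nstep` forward-Euler LES steps with the closure
# included, then compare the end state to the filtered DNS reference.
function loss_post(θ, ū0, ū_ref, f̄, Δt, nstep)
    ū = copy(ū0)
    for _ in 1:nstep
        ū .+= Δt .* (f̄.(ū) .+ m(ū, θ))
    end
    sum(abs2, ū .- ū_ref)
end

loss_prior(0.5, [1.0, 2.0], [0.5, 1.0])   # exact closure: 0.0
```

The a priori loss only needs precomputed (ū, c) pairs, while the a posteriori loss requires solving the LES equations inside the loss, which is costlier but measures the error that actually matters at simulation time.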
create_callback(
err;
θ,
callbackstate,
displayref,
display_each_iteration,
displayfig,
filename
)
Create convergence plot for the relative error between f(x, θ) and y. At each callback, the plot is updated and the current error is printed.
If callbackstate is nonempty, the previous convergence history is also plotted.
If not using an interactive GLMakie window, set display_each_iteration to true.
create_dataloader_post(trajectories; nunroll, device, rng)
Create trajectory dataloader.

create_dataloader_prior(data; batchsize, device, rng)
Create dataloader that uses a batch of batchsize random samples from data at each evaluation. The batch is moved to device.
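The "prior" dataloader pattern, a function that returns a fresh random batch on every call, can be sketched in plain Julia. The function name and the (x, y) column layout are assumptions for illustration.

```julia
using Random

# Minimal sketch of the "prior" dataloader pattern: each call draws
# `batchsize` random sample columns from the data arrays. The function
# name and the (x, y) layout are illustrative, not the package API.
function make_dataloader(x, y; batchsize, rng = Random.default_rng())
    n = size(x, 2)
    () -> begin
        is = rand(rng, 1:n, batchsize)
        (x = x[:, is], y = y[:, is])
    end
end

x, y = rand(4, 100), rand(2, 100)
dl = make_dataloader(x, y; batchsize = 8)
batch = dl()   # batch.x is 4×8, batch.y is 2×8
```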
create_loss_post(
;
setup,
method,
psolver,
closure,
nupdate,
projectorder
)
Create a-posteriori loss function.

create_loss_prior(loss, f) -> NeuralClosure.var"#56#57"
Wrap loss function loss(batch, θ).
The function loss should take inputs like loss(f, x, y, θ).
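The wrapping step can be sketched as a one-line higher-order function; `create_loss` and `mse` are illustrative names standing in for the real implementations.

```julia
# Sketch of the wrapping pattern used by create_loss_prior: turn a loss of
# the form loss(f, x, y, θ) into a loss of the form loss(batch, θ).
# `create_loss` and `mse` are illustrative names.
create_loss(loss, f) = ((x, y), θ) -> loss(f, x, y, θ)

mse(f, x, y, θ) = sum(abs2, f(x, θ) .- y) / length(y)
f(x, θ) = θ .* x

batchloss = create_loss(mse, f)
batchloss(([1.0, 2.0], [2.0, 4.0]), 2.0)   # exact fit: 0.0
```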
create_relerr_post(
;
data,
setup,
method,
psolver,
closure_model,
nupdate,
projectorder
)
Create a-posteriori relative error.

create_relerr_prior(f, x, y) -> NeuralClosure.var"#58#59"
Create a-priori error.
create_relerr_symmetry_post(
;
u,
setup,
method,
psolver,
Δt,
nstep,
g
)
Create a-posteriori symmetry error.

create_relerr_symmetry_prior(; u, setup, g)
Create a-priori equivariance error.
mean_squared_error(f, x, y, θ; normalize, λ) -> Any
Compute MSE between f(x, θ) and y.
The MSE is further divided by normalize(y).
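A minimal sketch of such a normalized MSE, assuming λ weights a parameter penalty (the penalty term and all names here are assumptions, not the package implementation):

```julia
# Sketch of a normalized MSE matching the description of mean_squared_error:
# the squared error is divided by normalize(y); λ is assumed to weight a
# parameter penalty (names and the penalty term are illustrative).
function mse_normalized(f, x, y, θ; normalize = y -> sum(abs2, y), λ = 0.0)
    sum(abs2, f(x, θ) .- y) / normalize(y) + λ * sum(abs2, θ)
end

f(x, θ) = θ .* x
mse_normalized(f, [1.0, 2.0], [2.0, 4.0], 2.0)   # exact fit: 0.0
```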
train(
dataloaders,
loss,
optstate,
θ;
niter,
ncallback,
callback,
callbackstate
) -> NamedTuple{(:optstate, :θ, :callbackstate), <:Tuple{Any, Any, Nothing}}
Update parameters θ to minimize loss(dataloader(), θ) using the optimiser state optstate for niter iterations.
Return a new named tuple (; optstate, θ, callbackstate) with the updated state and parameters.
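The loop structure behind such a training function can be sketched as follows. The gradient step here is plain gradient descent with a central-difference derivative on a scalar θ, purely for illustration; the package instead delegates to an optimiser state.

```julia
# Schematic version of the training loop: sample a batch, take an optimiser
# step, repeat for `niter` iterations. The optimiser here is plain gradient
# descent with a central-difference gradient on a scalar θ; all names are
# illustrative, not the package implementation.
function train_sketch(dataloader, loss, θ; niter, η = 0.1)
    for _ in 1:niter
        batch = dataloader()
        ε = 1e-6
        g = (loss(batch, θ + ε) - loss(batch, θ - ε)) / 2ε   # dloss/dθ
        θ -= η * g
    end
    θ
end

dataloader = () -> ([1.0, 2.0], [3.0, 6.0])           # fixed (x, y) batch
loss = ((x, y), θ) -> sum(abs2, θ .* x .- y)
θ = train_sketch(dataloader, loss, 0.0; niter = 100)  # converges to ≈ 3.0
```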
Neural architectures
We provide several neural architectures: a convolutional neural network (CNN), group convolutional neural networks (G-CNN), and a Fourier neural operator (FNO).
cnn(
;
setup,
radii,
channels,
activations,
use_bias,
channel_augmenter,
rng
)
Create CNN closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(u, θ) predicts the commutator error.

struct GroupConv2D{C} <: LuxCore.AbstractLuxLayer
Group-equivariant convolutional layer with respect to the p4 group. The layer is equivariant to rotations and translations of the input vector field.
The kwargs are passed to the Conv layer.
The layer has three variants:
- If islifting, it lifts a vector input (u1, u2) into a rotation-state vector (v1, v2, v3, v4).
- If isprojecting, it projects a rotation-state vector (u1, u2, u3, u4) into a vector (v1, v2).
- Otherwise, it cyclically transforms the rotation-state vector (u1, u2, u3, u4) into a new rotation-state vector (v1, v2, v3, v4).

Fields
islifting, isprojecting, cin, cout, conv
gcnn(; setup, radii, channels, activations, use_bias, rng)
Create G-CNN closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(u, θ) predicts the commutator error.

rot2(u, r)
Rotate the field u by 90 degrees counter-clockwise r - 1 times.

rot2stag(u, g) -> Tuple{Any, Any}
Rotate staggered grid velocity field. See also rot2.
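The rotation convention of rot2 can be illustrated on a plain scalar field with Base.rotl90. The real rot2 acts on staggered vector fields and also permutes the velocity components; `rot2_sketch` below is purely an illustrative analogue.

```julia
# Scalar-field analogue of rot2, using Base.rotl90: rotate a 2D field by
# 90 degrees counter-clockwise r - 1 times (r ∈ 1:4). The real rot2 acts
# on staggered vector fields and also permutes the velocity components;
# `rot2_sketch` is purely illustrative.
rot2_sketch(u::AbstractMatrix, r) = rotl90(u, r - 1)

u = [1 2; 3 4]
rot2_sketch(u, 1) == u            # r = 1: identity
rot2_sketch(u, 2) == [2 4; 1 3]   # r = 2: one quarter turn CCW
```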
struct FourierLayer{D, A, F} <: LuxCore.AbstractLuxLayer
Fourier layer operating on uniformly discretized functions.
Some important sizes:
- dimension: Spatial dimension, e.g. Dimension(2) or Dimension(3).
- (nx..., cin, nsample): Input size.
- (nx..., cout, nsample): Output size.
- nx = fill(n, dimension()): Number of points in each spatial dimension.
- n ≥ kmax: Same number of points in each spatial dimension, must be larger than the cut-off wavenumber.
- kmax: Cut-off wavenumber.
- nsample: Number of input samples (treated independently).

Fields
dimension, kmax, cin, cout, σ, init_weight

fno(; setup, kmax, c, σ, ψ, rng, kwargs...)
Create FNO closure model. Return a tuple (closure, θ) where θ are the initial parameters and closure(V, θ) predicts the commutator error.
Data generation
create_io_arrays(data, setups) -> Any
Create input/output arrays from the filtered DNS data for a-priori training.
create_les_data(
;
D,
Re,
lims,
nles,
ndns,
filters,
tburn,
tsim,
Δt,
create_psolver,
savefreq,
ArrayType,
icfunc,
rng,
kwargs...
)
Create filtered DNS data.
filtersaver(
dns,
les,
filters,
compression,
psolver_dns,
psolver_les;
nupdate
) -> NamedTuple{(:initialize, :finalize), <:Tuple{NeuralClosure.var"#102#106"{Int64}, NeuralClosure.var"#105#109"}}
Save filtered DNS data.