Floating point precision
IncompressibleNavierStokes generates efficient code for different floating point precisions, such as

- Double precision (Float64)
- Single precision (Float32)
- Half precision (Float16)
To use single or half precision, convert all user input floats to the desired type. Mixing different precisions causes unnecessary conversions and may break the code.
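As a minimal sketch of what this looks like in practice (the exact setup API varies between versions, so treat the names below as illustrative), all numeric inputs share one type parameter:

```julia
using IncompressibleNavierStokes

T = Float32                      # desired precision
ax = LinRange(T(0), T(1), 65)    # grid coordinates as Float32
Re = T(1_000)                    # Reynolds number in the same precision

# All subsequent user-supplied floats (time step, time limits, boundary
# values, ...) should also be of type T, e.g. Δt = T(1e-3).
```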
GPU precision
For GPUs, single precision is preferred. CUDA.jl's cu function converts arrays to single precision.
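For example (requires a CUDA-capable GPU):

```julia
using CUDA

x = rand(Float64, 4)
y = cu(x)    # CuArray with eltype Float32: cu converts Float64 to Float32
eltype(y)    # Float32
```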
Pressure solvers
SparseArrays.jl's sparse matrix factorizations only support double precision, so psolver_direct only works for Float64. Consider using an iterative solver such as psolver_cg when using single or half precision.
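A hedged sketch of selecting the pressure solver based on precision (the psolver_direct and psolver_cg constructor signatures shown here are assumptions; consult the pressure solver documentation for the exact API):

```julia
# Hypothetical usage: pick the solver matching the working precision T.
psolver = if T == Float64
    psolver_direct(setup)    # sparse direct factorization (Float64 only)
else
    psolver_cg(setup)        # conjugate gradient, works in any precision
end
```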