Interested in:
CFD
DEM
High Performance Computing
Programming - C/C++, Fortran, MPI, Python
github.com/nimbix/jarvi...
Looks like the installation size is ~130 GB.
Looks like the podman build was just scaring me and double counting the installation.
Looks like it will be ~230 GB in size.
No idea if you can get physical disks.
How do load balancing and ghost cells/atoms work?
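For the ghost-cell part of that question, here is a minimal conceptual sketch in plain Python (no MPI, all names hypothetical): a 1D domain is split into per-rank chunks, each padded with one ghost cell per side that mirrors the neighbor's boundary value. In a real CFD/DEM code those copies would be MPI halo-exchange messages (e.g. `MPI_Sendrecv`).

```python
# Conceptual sketch, not MPI code: ghost cells on a 1D decomposed domain.

def split_domain(data, nranks):
    """Split a global array into per-rank chunks padded with ghost cells."""
    n = len(data) // nranks
    chunks = []
    for r in range(nranks):
        interior = data[r * n:(r + 1) * n]
        # [left ghost] + interior + [right ghost]; ghosts start empty
        chunks.append([None] + interior + [None])
    return chunks

def exchange_ghosts(chunks):
    """Fill each chunk's ghost cells from its neighbors' boundary values."""
    for r, c in enumerate(chunks):
        if r > 0:                # copy left neighbor's rightmost interior cell
            c[0] = chunks[r - 1][-2]
        if r < len(chunks) - 1:  # copy right neighbor's leftmost interior cell
            c[-1] = chunks[r + 1][1]
    return chunks

chunks = split_domain(list(range(12)), nranks=3)
exchange_ghosts(chunks)
# The middle chunk now sees its neighbors' boundary values in its ghosts.
```

Load balancing is then the separate question of how the interior cells/particles are assigned to ranks so each does comparable work; the ghost layer only handles the data dependency at the cut.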
Also, PiHole FTW.
But the benchmark uses a Spack recipe to create the executable, and a bash/Python script is used to create the dat file.
I stick to making the parameter space as square as possible instead of just maxing the flops, as I care more about relative performance between clusters.
I do have an HPL benchmark that has to be portable across all of our clusters.
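The "as square as possible" choice can be sketched as picking the P x Q process grid for a given MPI rank count with P and Q as close as the rank count's factors allow (HPL is commonly tuned with a near-square grid, which also keeps runs comparable between clusters). This is an illustrative helper, not the actual dat-generation script:

```python
# Sketch: pick the squarest P x Q factorization of an MPI rank count.

def squarest_grid(nranks):
    """Return (P, Q) with P * Q == nranks and P <= Q, as square as possible."""
    best = (1, nranks)
    for p in range(1, int(nranks ** 0.5) + 1):
        if nranks % p == 0:
            best = (p, nranks // p)  # larger p => closer to square
    return best

# e.g. 64 ranks -> (8, 8); 48 ranks -> (6, 8); 12 ranks -> (3, 4)
```

A bash/Python wrapper like the one mentioned above could then substitute the chosen P and Q (plus a problem size N sized to the node memory) into an HPL.dat template per cluster.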