PERTURBO module

On the NERSC supercomputer, users can run PERTURBO by loading a module. We provide modules for Cori Haswell, Cori KNL, and Perlmutter CPU.

Load the module that corresponds to the machine on which you are going to run the calculation:

# Common for all the machines on NERSC
module use /global/cfs/cdirs/m2626/perturbo/

# For Perlmutter CPU
module load perturbo-2.0.2-perlmutter-cpu

# For Cori Haswell
module load perturbo-2.0.2-hsw

# For Cori KNL
module load perturbo-2.0.2-knl

These commands can be added to your $HOME/.bashrc file to simplify your submission scripts, e.g. as sketched below.
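For example, to have the Perlmutter CPU module available in every new shell, you could append the two commands to your .bashrc (a sketch; adjust the module name for Haswell or KNL):

echo 'module use /global/cfs/cdirs/m2626/perturbo/' >> $HOME/.bashrc
echo 'module load perturbo-2.0.2-perlmutter-cpu' >> $HOME/.bashrc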

The modules add the perturbo.x and qe2pert.x executables to your PATH. You can check that a module was loaded correctly by verifying the paths to the executables (e.g. with the which command). For example, for the Perlmutter CPU module, the perturbo.x path is the following:

which perturbo.x
>>> /global/cfs/cdirs/m2626/perturbo/bin/2.0.2-perlmutter-cpu/perturbo.x

PERTURBO Slurm scripts

Here, we provide the optimal MPI and OpenMP settings for PERTURBO on the NERSC supercomputer, as well as example submission scripts.

Perlmutter CPU

We recommend using 8 MPI tasks per node and 32 OpenMP threads per MPI task on Perlmutter CPU nodes (8 tasks × 32 threads = 256 logical CPUs, i.e. we are taking advantage of the two hyperthreads on each of the 128 physical cores). Here is a typical submission script (in this example, we use 2 Perlmutter CPU nodes):

#!/bin/bash
#SBATCH --account m1234
#SBATCH -N 2
#SBATCH -C cpu
#SBATCH -q regular
#SBATCH -J perturbo
#SBATCH -t 01:00:00

# Load Perturbo module
module use /global/cfs/cdirs/m2626/perturbo/
module load perturbo-2.0.2-perlmutter-cpu

#OpenMP settings:
export OMP_NUM_THREADS=32
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

# Run perturbo.x (for qe2pert, replace perturbo.x with qe2pert.x)
srun -n 16 -c 32 --cpu_bind=cores perturbo.x -npools 16 -i pert.in > pert.out
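The script above hard-codes the task count for 2 nodes. As a minimal sketch of how the counts generalize (assuming the recommended 8 tasks per node and 32 threads per task), the srun line can instead be derived from the node count that Slurm provides:

# SLURM_NNODES is set by Slurm inside the allocation;
# with 8 MPI tasks per node, the total task count scales with the node count.
NTASKS=$(( SLURM_NNODES * 8 ))
srun -n $NTASKS -c 32 --cpu_bind=cores perturbo.x -npools $NTASKS -i pert.in > pert.out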

Cori Haswell

We recommend using 8 MPI tasks per node and 8 OpenMP threads per MPI task on Cori Haswell (8 tasks × 8 threads = 64 logical CPUs, i.e. we are taking advantage of the two hyperthreads on each of the 32 physical cores). Here is a typical submission script (in this example, we use 4 Cori Haswell nodes):

#!/bin/bash
#SBATCH --account m1234
#SBATCH -N 4
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -J perturbo
#SBATCH -t 01:00:00

#OpenMP settings:
export OMP_NUM_THREADS=8
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

# Load Perturbo module
module use /global/cfs/cdirs/m2626/perturbo/
module load perturbo-2.0.2-hsw

# Run perturbo.x (for qe2pert, replace perturbo.x with qe2pert.x)
srun -n 32 -c 8 --cpu_bind=cores perturbo.x -npools 32 -i pert.in > pert.out

Cori KNL

We recommend using 4 MPI tasks per node and 64 OpenMP threads per MPI task on Cori KNL. Here is a typical submission script (in this example, we use 4 Cori KNL nodes):

#!/bin/bash
#SBATCH --account m1234
#SBATCH -N 4
#SBATCH -C knl
#SBATCH -q regular
#SBATCH -J perturbo
#SBATCH -t 01:00:00

#OpenMP settings:
export OMP_NUM_THREADS=64
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

# Load Perturbo module
module use /global/cfs/cdirs/m2626/perturbo/
module load perturbo-2.0.2-knl

# Run perturbo.x (for qe2pert, replace perturbo.x with qe2pert.x)
srun -n 16 -c 68 --cpu_bind=cores perturbo.x -npools 16 -i pert.in > pert.out

Using PERTURBO with Quantum Espresso and Wannier90 on NERSC

As stated in the tutorial, before running the main PERTURBO executable, perturbo.x, a user has to perform the SCF, nSCF, PHonon, Wannier90, and qe2pert.x calculations. All of these calculations can be done using the NERSC module files. Here, we go through the steps of the example01-silicon-qe2pert tutorial and provide the list of commands needed on NERSC to create a prefix_epwan.h5 file.

Connect to NERSC Cori and load the following modules:

# Perturbo
module use /global/cfs/cdirs/m2626/perturbo/
module load perturbo-2.0.2-hsw

# Quantum Espresso 7.0
module load espresso/7.0-libxc-5.2.2

# Wannier90
module load wannier90/3.1.0

Copy the example01-silicon-qe2pert folder (it can be downloaded here) to your $SCRATCH directory and navigate there.
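Assuming the folder was unpacked in your home directory (this location is an assumption; adjust the source path to wherever you downloaded the example), the copy can be done as follows:

cp -r $HOME/example01-silicon-qe2pert $SCRATCH/

Then navigate there: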

cd $SCRATCH/example01-silicon-qe2pert
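For reference, the paths used in the steps below imply roughly the following folder layout (reconstructed from the commands in this walkthrough):

example01-silicon-qe2pert/
  pw-ph-wann/
    scf/
    phonon/
    nscf/
    wann/
  qe2pert/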

Enter an interactive session on 1 node (replace m1234 with your NERSC allocation account):

salloc -N 1 -C haswell -q interactive -t 4:00:00 -A m1234 

1. SCF calculation

Set the number of OpenMP threads to 1 for the Quantum Espresso runs:

export OMP_NUM_THREADS=1

Go to the scf folder:

cd ./pw-ph-wann/scf

Run the pw.x executable of Quantum Espresso with 16 MPI tasks and 16 pools:

srun -n 16 pw.x -i scf.in -npools 16 | tee scf.out
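As a quick sanity check, the final total energy line in a converged Quantum Espresso SCF output starts with an exclamation mark:

grep '!' scf.out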

2. Phonon calculation

Go to the phonon folder:

cd ../phonon

Copy the tmp folder from the SCF calculation:

cp -r ../scf/tmp/ .

Run the ph.x Quantum Espresso executable with 32 MPI tasks and 32 MPI pools:

srun -n 32 ph.x -i ph.in -npools 32 | tee ph.out

Run the ph-collect.sh script to collect the output files from the phonon calculation into the save folder:

./ph-collect.sh

3. nSCF calculation

Go to the nscf folder:

cd ../nscf

Copy the tmp folder from the SCF calculation:

cp -r ../scf/tmp/ .

Run the pw.x Quantum Espresso executable with 32 MPI tasks and 32 MPI pools:

srun -n 32 pw.x -i nscf.in -npools 32 | tee nscf.out

4. Wannier90 calculation

Go to the wann folder:

cd ../wann

Create a tmp folder:

mkdir tmp

Link the si.save folder from the nSCF calculation into the tmp folder:

ln -sf ../../nscf/tmp/si.save tmp
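Note that the symlink target is resolved relative to the tmp folder, so ../../nscf/tmp/si.save points back to pw-ph-wann/nscf/tmp/si.save. A quick sanity check that the link resolves:

ls -l tmp/si.save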

This step consists of three runs:

1) Run wannier90.x -pp to generate a list of the required overlaps (the si.nnkp file):

srun -n 2 wannier90.x -pp si

2) Run pw2wannier90.x:

srun -n 16 pw2wannier90.x < pw2wan.in | tee pw2wan.out

3) Run wannier90.x:

srun -n 16 wannier90.x si
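After these three runs, the files needed by the qe2pert.x step below should be present in the wann folder; a quick check (si.wout is the standard Wannier90 output file for the si seedname):

ls si.wout si_u.mat si_u_dis.mat si_centres.xyz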

5. qe2pert step: generation of the epwan file

Go to the qe2pert folder:

cd ../../qe2pert

Create a tmp folder and link the si.save folder from the nSCF calculation:

mkdir tmp
ln -sf ../../pw-ph-wann/nscf/tmp/si.save tmp

Copy the output files from the Wannier90 calculation:

cp ../pw-ph-wann/wann/{si_u.mat,si_u_dis.mat,si_centres.xyz} .

Run qe2pert.x:

srun -n 8 -c 8 --cpu_bind=cores qe2pert.x -npools 8 -i qe2pert.in | tee qe2pert.out

At this point, the si_epwan.h5 file, the goal of these five steps, should have been generated.
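A quick check that the file is in place:

ls -lh si_epwan.h5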

Once a prefix_epwan.h5 file is obtained, the perturbo.x calculations on NERSC are very similar to the generic examples given in the PERTURBO tutorials; the main difference is that the mpirun command should be replaced with srun.

Assuming a pert.in input file exists in a directory, along with the other files required for a given PERTURBO calculation mode, one can run perturbo.x as follows:

srun -n 8 -c 8 --cpu_bind=cores perturbo.x -npools 8 -i pert.in > pert.out