The public version of PERTURBO has the following stable features:
- Phonon-limited carrier mobility, electrical conductivity and Seebeck coefficient
- Imaginary part of e-ph self-energy and e-ph scattering rates
- Phonon-limited carrier mean free path and relaxation times
- Magnetotransport calculations
- Ultrafast carrier dynamics with fixed phonon occupation
- Electron transport in the presence of high electric fields
- Calculations on magnetic systems with collinear spin
- Interpolated electronic band structure and phonon dispersion
- e-ph matrix elements for nonpolar and polar materials, and their Wannier interpolation
- Interface to TDEP for anharmonic phonons
All the calculations above can be done as a function of temperature and doping, for nonpolar and polar materials.
A brief summary of the PERTURBO calculation modes with the required input files can be found in the Interactive workflow section. For a detailed description of each calculation mode, please refer to the tutorial.
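Each calculation mode is selected through the `calc_mode` variable in PERTURBO's Fortran namelist input file (conventionally `pert.in`). As an illustration only, a minimal input for the `imsigma` mode might look like the sketch below; the file names, band indices, and exact parameter set are placeholders, so consult the tutorial for the required inputs of your PERTURBO version:

```fortran
&perturbo
  prefix    = 'si'          ! material prefix (placeholder), matching the preceding DFT/Wannier steps
  calc_mode = 'imsigma'     ! imaginary part of the e-ph self-energy
  fklist    = 'si_tet.kpt'  ! k-points at which the self-energy is evaluated (placeholder file)
  ftemper   = 'si.temper'   ! temperatures and carrier concentrations (placeholder file)
  band_min  = 5             ! lowest band included (placeholder)
  band_max  = 6             ! highest band included (placeholder)
/
```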
Code Performance, Scaling, and Parallelization
This section discusses the scaling performance of the publicly available version of PERTURBO. Since its inception, PERTURBO has used a hybrid MPI/OpenMP parallelization that allows for outstanding scaling on high-performance computing (HPC) platforms.

To showcase this performance, we present a calculation of the imaginary part of the electron-phonon self-energy (calculation mode imsigma) in silicon, using 72x72x72 electron k-point and phonon q-point grids. The scaling test was performed on Intel Xeon Phi 7250 processors at the National Energy Research Scientific Computing Center (NERSC). As the figure shows, PERTURBO scales almost linearly up to 500,000 cores, with a deviation from ideal linear scaling of less than 5% at 500,000 cores.

This result, together with our ongoing work on OpenACC GPU parallelization, shows that PERTURBO is ready for future HPC architectures and exascale computing. We also plan to make PERTURBO available as a module on NERSC in the near future.
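The near-linear scaling quoted above can be put in context with Amdahl's law: if a fraction `s` of the work is serial, the speedup on `N` cores is `S(N) = 1 / (s + (1 - s)/N)`, and the parallel efficiency is `E(N) = S(N)/N`. The sketch below is purely illustrative (it is not PERTURBO code, and the serial fractions are hypothetical); it shows how small `s` must be for the efficiency at 500,000 cores to stay within a few percent of ideal:

```python
def speedup(n_cores, serial_fraction):
    """Amdahl's-law speedup on n_cores for a given serial work fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

def efficiency(n_cores, serial_fraction):
    """Parallel efficiency: achieved speedup relative to ideal linear scaling."""
    return speedup(n_cores, serial_fraction) / n_cores

# Staying within ~5% of linear scaling at 500,000 cores requires a serial
# fraction on the order of 1e-7 or smaller (hypothetical values):
for s in (1e-9, 1e-8, 1e-7):
    print(f"serial fraction {s:.0e}: efficiency {efficiency(500_000, s):.4f}")
```

For example, a serial fraction of 1e-7 already costs roughly 5% efficiency at half a million cores, which is why a code must be almost entirely parallel to exhibit this kind of scaling.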