Hello,
I am trying to run a single-shot GW (G0W0) calculation on a monolayer with a 2x2x1 k-grid. The primitive cell for this monolayer has 75 atoms.
The calculation is running extremely slowly, about 2% per day, which doesn't seem right to me. I've tried changing the number of processors and some of the parallelization settings in the input, but nothing seems to help. Is there something I might be missing that could speed this up?
I'm also wondering if maybe I have some of the energy cutoffs set too high.
Here is the input file:
#
#                         Yambo
#
# Version 5.1.0 Revision 21422 Hash (prev commit) fde6e2a07
# Branch is
# MPI+SLK+HDF5_IO Build
# http://www.yambo-code.org
#
ppa # [R][Xp] Plasmon Pole Approximation for the Screened Interaction
dyson # [R] Dyson Equation solver
gw0 # [R] GW approximation
rim_cut # [R] Coulomb potential
HF_and_locXC # [R] Hartree-Fock
em1d # [R][X] Dynamically Screened Interaction
FFTGvecs= 40 Ry # [FFT] Plane-waves
RandQpts=0 # [RIM] Number of random q-points in the BZ
RandGvec= 1 RL # [RIM] Coulomb interaction RS components
#QpgFull # [F RIM] Coulomb interaction: Full matrix
% Em1Anys
0.000000  0.000000  0.000000  # [RIM] X Y Z Static Inverse dielectric matrix Anysotropy
%
IDEm1Ref=0 # [RIM] Dielectric matrix reference component 1(x)/2(y)/3(z)
CUTGeo= "none" # [CUT] Coulomb Cutoff geometry: box/cylinder/sphere/ws/slab X/Y/Z/XY..
% CUTBox
0.000000  0.000000  0.000000  # [CUT] [au] Box sides
%
CUTRadius= 0.000000 # [CUT] [au] Sphere/Cylinder radius
CUTCylLen= 0.000000 # [CUT] [au] Cylinder length
CUTwsGvec= 0.700000 # [CUT] WS cutoff: number of G to be modified
#CUTCol_test # [CUT] Perform a cutoff test in R-space
EXXRLvcs= 1897549 RL # [XX] Exchange RL components
VXCRLvcs= 1897549 RL # [XC] XCpotential RL components
Chimod= "HARTREE" # [X] IP/Hartree/ALDA/LRC/PF/BSfxc
% BndsRnXp
1  1000  # [Xp] Polarization function bands
%
NGsBlkXp= 6 Ry # [Xp] Response block size
% LongDrXp
1.000000  1.000000  1.000000  # [Xp] [cc] Electric Field
%
PPAPntXp= 27.21138 eV # [Xp] PPA imaginary energy
XTermKind= "none" # [X] X terminator ("none","BG" Bruneval-Gonze)
% GbndRnge
1  1000  # [GW] G[W] bands range
%
GTermKind= "BG" # [GW] GW terminator ("none","BG" Bruneval-Gonze,"BRS" Berger-Reining-Sottile)
DysSolver= "n" # [GW] Dyson Equation solver ("n","s","g")
%QPkrange # [GW] QP generalized Kpoint/Band indices
1|4|1|1000|
%
Thank you!
Eoghan
Monolayer calculations running horribly slow
Last edited by EoghanG on Mon Feb 26, 2024 8:05 pm, edited 1 time in total.
Re: Monolayer calculations running horribly slow
Dear Eoghan,
Perhaps the very large QPkrange is causing this issue; please check whether you really need such a large band-index range.
I don't think you need such a large upper band (i.e., 1000) in QPkrange.
In most cases, we only need to consider the frontier orbitals around the gap.
BndsRnXp and GbndRnge can also be reduced, based on convergence tests.
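As an illustrative sketch only: if quasiparticle corrections are needed just for a few bands around the gap, the relevant blocks might look like the following. The band indices are placeholders (I do not know where the VBM/CBM of your 75-atom cell sit), and any reduced BndsRnXp/GbndRnge values must come from your own convergence tests:

%QPkrange # [GW] QP generalized Kpoint/Band indices
1|4|148|152| # all 4 k-points, a few bands around the gap (indices hypothetical)
%
% BndsRnXp
1 | 400 | # reduced from 1000; validate with a convergence test
%
% GbndRnge
1 | 400 | # likewise, check convergence before trusting this value
%

Restricting QPkrange to a handful of bands reduces the number of self-energy matrix elements to compute, which is usually the dominant cost.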
Youzhao Lan
College of Chemistry and Materials Science,
Zhejiang Normal University,
Jinhua, Zhejiang, China.
HomePage: http://blog.sciencenet.cn/u/lyzhao