Restarting from CHGCAR with PAW in parallel gets wrong augmentation charge
Posted: Thu Jan 18, 2007 9:25 pm
I'm using VASP 4.6.28, running in parallel on an Opteron, and I've encountered what looks like a strange bug when restarting a job from a CHGCAR alone, with no WAVECAR file. It only happens in parallel runs, not in the serial version of the code. Specifically, the augmentation charge appears not to be read in correctly, or not to be transmitted to the nodes.
For example, on a single atom bcc Fe calculation with PAW, where I've restarted after getting a self-consistent charge density (non-spin polarized, but the same thing happens spin-polarized), when I grep magnetization on OUTCAR after a restarted run, I get:
number of electron 8.0000076 magnetization
number of electron 8.0000076 magnetization
augmentation part 8.0000076 magnetization
number of electron 8.0000076 magnetization
augmentation part 8.0000076 magnetization
number of electron 8.0000076 magnetization
augmentation part 8.0000076 magnetization
number of electron 8.0000076 magnetization
augmentation part 8.0000076 magnetization
number of electron 8.0000076 magnetization # here's where RMMS kicks in
augmentation part 4.1432529 magnetization
number of electron 8.0000076 magnetization
augmentation part 4.1432197 magnetization
number of electron 8.0000076 magnetization
augmentation part 4.1432197 magnetization
So, in the beginning, the augmentation part is exactly equal to the number-of-electrons part; only after the charge is unfrozen does it change. I originally encountered this problem with a spin-polarized system (there, the magnetization also erroneously matches). If I (a) restart with the WAVECAR, or (b) restart on a serial machine, the augmentation charge is read correctly. I don't think it's purely a reporting error at this stage in the code, since even after restarting with a self-consistent charge density it takes a few iterations after the charge density is unfrozen to reach self-consistency. If I do the same run (single atom bcc Fe) on a serial machine, it gets the wavefunction in one self-consistency iteration, and the augmentation charge density is right from the beginning.
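For completeness, the restart setup is the usual one (a sketch; ICHARG = 1 is the tag that makes VASP read the charge density from CHGCAR, the other tags are just typical values rather than my exact input):

```
# INCAR for the restart run (CHGCAR present, no WAVECAR in the directory)
ISTART = 0        ! start wavefunctions from scratch
ICHARG = 1        ! read initial charge density from CHGCAR
ISPIN  = 1        ! non-spin-polarized here; same behavior with ISPIN = 2
```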
I've looked around, and haven't seen anyone else post with this specific problem, but I don't know how many people are trying to restart calculations using the CHGCAR only with PAW in parallel. Thanks; --d