[Pw_forum] Efficient parallelization

Axel Kohlmeyer axel.kohlmeyer at theochem.ruhr-uni-bochum.de
Wed Mar 30 09:50:39 CEST 2005


On Tue, 29 Mar 2005 21:14:57 +0400 (MSD)  "Sergey Lisenkov" wrote:

> Dear PWscf authors and users,


Hi Sergey,

> I want to use the pwscf code efficiently during the run, e.g. to use both k-point and
> G-space parallelization (to reduce memory and speed up, if possible).
> 
> For example, I have 4k-points and the next FFT mesh:
> 
>     FFT grid: ( 60,264,264)
>  smooth grid: ( 44,192,192)
> 
> I see that for the G-level I can use only 24 processors. I can also use this number of cpus for k-points:
> 
> mpirun -np 24 ./pw.x -npool -in file.inp

I assume you have -npool 4 here, right?
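
For reference, assuming the pool size of 4 that your output below suggests,
the fully spelled-out command line would read:

    mpirun -np 24 ./pw.x -npool 4 -in file.inp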
 
> 
> I see at output:
> 
>      Number of processors in use:      24
>      K-points division:     npool  =    4
>      R & G space division:  nprocp =    6
> 
> I see for k-points parallelization everything is OK. But is everything OK for G-parallelization?

Looks perfect: 6 times 4 is 24. The parallelization is first across the
k-points, since those calculations are almost independent, and only then,
for each k-point, across G/R-space. The latter parallelization is less
efficient, so the choice of the -npool parameter is very important for
efficient use of your cpu resources (if you have multiple k-points); see
the sketch below. This is explained in more detail in the manual.
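
A quick sanity check of the division reported in your output (a
back-of-the-envelope sketch; that the FFT planes are distributed along the
third grid dimension within each pool is my assumption):

    24 processors / 4 pools = 6 processors per pool   (R & G space division)
    4 k-points    / 4 pools = 1 k-point per pool      (K-points division)
    264 planes    / 6 procs = 44 planes per processor (dense grid, no remainder)

So each pool computes one k-point, and within each pool the FFT grid splits
evenly, with no processor left idle by an uneven remainder.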

Regards,
	Axel.

> 
> Thanks,
>   Sergey

--

=======================================================================
Axel Kohlmeyer      e-mail:  axel.kohlmeyer at theochem.ruhr-uni-bochum.de
Lehrstuhl fuer Theoretische Chemie          Phone: ++49 (0)234/32-26673
Ruhr-Universitaet Bochum - NC 03/53         Fax:   ++49 (0)234/32-14045
D-44780 Bochum                   http://www.theochem.ruhr-uni-bochum.de
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.


