[Pw_forum] k point parallel

Giovanni Cantele Giovanni.Cantele at na.infn.it
Tue Oct 25 14:57:09 CEST 2005


Jian ZHOU wrote:

> Dear all
>
> The manual says that pwscf can be parallelized via "k-point 
> parallelization" or via "PW parallelization". Since the parallel 
> performance over our Gigabit ethernet is not so good, is it possible 
> for pwscf to parallelize only over the k-points, and not over the PWs? 
>
> Thank you.
>
> Best wishes,
>
> Jian
>
>
Try using pools (k-point) parallelization; see
http://www.pwscf.org/guide/2.1.4/html-single/manual.html#SECTION00040000000000000000
(the section of the manual on running on parallel machines).
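
For reference, the two runs compared below could be launched roughly as
follows. This is only a sketch: I am assuming an MPI launcher such as
mpirun, a placeholder input file name pw.scf.in, input read from standard
input, and that your version of pw.x accepts the -npool option (check the
manual linked above for the exact syntax of your version):

  # 16 CPUs, default single pool: only PW (R & G space) parallelization
  mpirun -np 16 pw.x < pw.scf.in > pw.scf.nopools.out

  # 16 CPUs, 16 pools: only k-point parallelization
  mpirun -np 16 pw.x -npool 16 < pw.scf.in > pw.scf.pools.out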

For example, running on 16 CPUs with no pools, the beginning of the
output looks as follows:

      Program PWSCF     v.3.0    starts ...
     Today is 25Oct2005 at 14:49:47

     Parallel version (MPI)

     Number of processors in use:      16
     R & G space division:  proc/pool =   16

     Ultrasoft (Vanderbilt) Pseudopotentials

     Current dimensions of program pwscf are:

     ntypx = 10   npk = 40000  lmax =  3
     nchix =  6  ndmx =  2000  nbrx = 14  nqfx =  8

     Planes per process (thick) : nr3 = 96 npp =   6 ncplane = 9216

 Proc/  planes cols    G   planes cols    G    columns  G
 Pool       (dense grid)      (smooth grid)   (wavefct grid)
  1      6    447  28493    6    447  28493  119   3935
  2      6    448  28494    6    448  28494  119   3935
  3      6    447  28491    6    447  28491  119   3935
  4      6    447  28491    6    447  28491  119   3935
  5      6    447  28491    6    447  28491  119   3935
  6      6    447  28491    6    447  28491  119   3933
  7      6    447  28491    6    447  28491  119   3933
  8      6    447  28491    6    447  28491  119   3933
  9      6    447  28493    6    447  28493  119   3935
 10      6    447  28491    6    447  28491  119   3935
 11      6    447  28491    6    447  28491  118   3930
 12      6    447  28491    6    447  28491  118   3930
 13      6    447  28491    6    447  28491  118   3930
 14      6    447  28491    6    447  28491  119   3933
 15      6    447  28491    6    447  28491  119   3933
 16      6    447  28491    6    447  28491  119   3933
  0     96   7153 455863   96   7153 455863 1901  62933 


....

in which only PW (R & G space) parallelization takes place.
If you run with pools, within each pool the R & G space division takes
place over a number of processors equal to the total number of processors
divided by the number of pools. So, if you use as many pools as the total
number of CPUs, no PW parallelization should occur.
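
To make the arithmetic explicit, here is a rough sketch (plain shell
arithmetic; the 16-CPU figures match the runs shown in this message, and
the 4-pool line is just an illustrative intermediate choice):

  # CPUs used for the R & G space division within each pool = total CPUs / npool
  echo $(( 16 / 1  ))   # no pools  -> 16 CPUs per pool: PW parallelization only
  echo $(( 16 / 4  ))   # 4 pools   ->  4 CPUs per pool: mixed k-point + PW
  echo $(( 16 / 16 ))   # 16 pools  ->  1 CPU per pool:  k-point parallelization only
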
For example, running on 16 CPUs with 16 pools, the beginning of the
output looks as follows:

     Program PWSCF     v.3.0    starts ...
     Today is 25Oct2005 at 14:47:28

     Parallel version (MPI)

     Number of processors in use:      16
     K-points division:     npool     =   16

     Ultrasoft (Vanderbilt) Pseudopotentials

     Current dimensions of program pwscf are:

     ntypx = 10   npk = 40000  lmax =  3
     nchix =  6  ndmx =  2000  nbrx = 14  nqfx =  8

     Planes per process (thick) : nr3 = 96 npp =  96 ncplane = 9216

 Proc/  planes cols    G   planes cols    G    columns  G
 Pool       (dense grid)      (smooth grid)   (wavefct grid)
  1     96   7153 455863   96   7153 455863 1901  62933
  0     96   7153 455863   96   7153 455863 1901  62933
...

and you can see that there is one CPU per pool, so no parallelization
over PWs can be done.
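
Intermediate choices are also possible: with a number of pools between 1
and the number of CPUs, you combine k-point and PW parallelization. Again
a sketch, under the same assumptions (mpirun, the -npool option, and
placeholder file names) as above:

  # 16 CPUs split into 4 pools: k-points distributed over the 4 pools,
  # and within each pool the R & G space divided over 16/4 = 4 CPUs
  mpirun -np 16 pw.x -npool 4 < pw.scf.in > pw.scf.out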


Giovanni

-- 




Dr. Giovanni Cantele
Coherentia CNR-INFM and Dipartimento di Scienze Fisiche
Universita' di Napoli "Federico II"
Complesso Universitario di Monte S. Angelo - Ed. G
Via Cintia, I-80126, Napoli, Italy
Phone: +39 081 676910
Fax:   +39 081 676346
E-mail: Giovanni.Cantele at na.infn.it
Web: http://people.na.infn.it/~cantele

****************
RESEARCH AND THE FUTURE WALK TOGETHER:
  FOR 41 MONTHS RESEARCH WORKERS HAVE HAD
  AN EXPIRED CONTRACT;
  THE FUTURE MUST NOT EXPIRE!
  for more info: http://www.ge.cnr.it/rsu/rsu.htm
****************



