[Pw_forum] I/O performance on BG/P systems

Nichols A. Romero naromero at gmail.com
Mon Apr 11 20:02:30 CEST 2011


Sorry for not replying earlier, but I missed this e-mail due to the
APS March Meeting.

The GPFS file system on BG/P does a poor job of handling writes to
more than one file per node. My guess is that Gabriele was running QE
in either dual or VN mode (2 and 4 MPI tasks per node, respectively).
So on BG/P, you basically want to write either one file per node
(which GPFS is designed to handle) or one big file using MPI-I/O.
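
As an illustration, here is a minimal C sketch of the one-file-per-node
pattern: the tasks on each node gather their data to a single writer
task, which is the only rank that touches the file system. The
tasks-per-node count, the contiguous-rank node grouping, and the file
names are assumptions made for the example, not how QE actually
distributes its data.

/* Sketch (assumed layout): funnel writes so that only one MPI task
 * per node opens a file -- one file per node.
 * Build with e.g.: mpicc -std=c99 -o node_writer node_writer.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int tasks_per_node = 4;   /* assumed: VN mode on BG/P */
    const int nlocal = 1024;        /* assumed per-task element count */
    int rank, noderank, nodesize;
    MPI_Comm nodecomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Group the ranks that share a node (here: contiguous rank blocks). */
    MPI_Comm_split(MPI_COMM_WORLD, rank / tasks_per_node, rank, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);
    MPI_Comm_size(nodecomm, &nodesize);

    double *buf = malloc(nlocal * sizeof(double));
    for (int i = 0; i < nlocal; i++)
        buf[i] = rank + 1e-3 * i;   /* dummy data */

    double *nodebuf = NULL;
    if (noderank == 0)
        nodebuf = malloc((size_t)nodesize * nlocal * sizeof(double));

    /* Funnel every task's data to the node's writer task... */
    MPI_Gather(buf, nlocal, MPI_DOUBLE,
               nodebuf, nlocal, MPI_DOUBLE, 0, nodecomm);

    /* ...which writes exactly one file per node. */
    if (noderank == 0) {
        char fname[64];
        snprintf(fname, sizeof fname, "data.node%04d", rank / tasks_per_node);
        FILE *fp = fopen(fname, "wb");
        fwrite(nodebuf, sizeof(double), (size_t)nodesize * nlocal, fp);
        fclose(fp);
        free(nodebuf);
    }

    free(buf);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}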

At ANL, we are thinking about rewriting some of the I/O to use a
parallel I/O library (e.g. HDF5 or Parallel NetCDF). The simplest
approach, though the resulting files are highly unportable, is to use
MPI-I/O directly.
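
For the MPI-I/O route, a rough sketch of what I mean is below: every
rank writes its block into one shared file with a collective call
(MPI_File_write_at_all), which gives the MPI library and GPFS a chance
to aggregate the requests. The file name, block size, and offsets are
made up for the example; a real implementation would compute them from
QE's own data distribution (or set a proper file view).

/* Sketch: one big shared file written collectively with MPI-I/O.
 * Build with e.g.: mpicc -std=c99 -o shared_write shared_write.c */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nlocal = 1024;        /* assumed per-rank element count */
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(nlocal * sizeof(double));
    for (int i = 0; i < nlocal; i++)
        buf[i] = rank + 1e-3 * i;   /* dummy data */

    /* Disjoint byte offset for this rank's contiguous block. */
    MPI_Offset offset = (MPI_Offset)rank * nlocal * sizeof(double);

    MPI_File_open(MPI_COMM_WORLD, "data.shared",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* Collective write: all ranks participate, so the MPI layer can
     * aggregate I/O into a few large, well-aligned requests. */
    MPI_File_write_at_all(fh, offset, buf, nlocal, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}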

Has anyone on this list worked on parallel I/O with QE? Or does anyone
have strong opinions on this issue?


On Wed, Mar 30, 2011 at 11:57 AM, Paolo Giannozzi
<giannozz at democritos.it> wrote:
>
> On Mar 30, 2011, at 11:20 , Gabriele Sclauzero wrote:
>
>> Do you think that having an additional optional level of I/O
>> (let's say that it might be called "medium")
>
> I propose 'rare', 'medium', 'well done'
>
>> would be too confusing for users?
>
> some users get confused no matter what
>
>> I could try to implement and test it.
>
> ok: just follow the "io_level" variable. Try first to understand
> what the actual behavior is (the documentation is not so
> clear on this point), and then think about what it should be, if you
> have some clear ideas.
>
> P.
> ---
> Paolo Giannozzi, Dept of Chemistry & Physics & Environment,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>



-- 
Nichols A. Romero, Ph.D.
Argonne Leadership Computing Facility
Argonne, IL 60490
(630) 447-9793

