On 2009-09-23 18:46:48, pw_forum-request@pwscf.org wrote:
>Send Pw_forum mailing list submissions to
>        pw_forum@pwscf.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
>        http://www.democritos.it/mailman/listinfo/pw_forum
>or, via email, send a message with subject or body 'help' to
>        pw_forum-request@pwscf.org
>
>You can reach the person managing the list at
>        pw_forum-owner@pwscf.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Pw_forum digest..."
>
>
>Today's Topics:
>
>   1. how to improve the calculation speed? (wangqj1)
>   2. Re: how to improve the calculation speed? (Giovanni Cantele)
>   3. Re: how to improve the calculation speed? (Lorenzo Paulatto)
>   4. write occupancy (ali kazempour)
>   5. Re: write occupancy (Prasenjit Ghosh)
>
>
>----------------------------------------------------------------------
>
>Message: 1
>Date: Wed, 23 Sep 2009 16:05:46 +0800 (CST)
>From: wangqj1 <wangqj1@126.com>
>Subject: [Pw_forum] how to improve the calculation speed?
>To: pw_forum <pw_forum@pwscf.org>
>Message-ID:
>        <21870763.369701253693146938.JavaMail.coremail@bj126app103.126.com>
>Content-Type: text/plain; charset="gbk"
>
>
>Dear PWSCF users,
>     When I use R and G parallelization to run a job, it behaves as if it is waiting for input. Following people's advice, I switched to k-point parallelization, which runs correctly but far too slowly. The information I can offer is as follows:
>(1) CPU usage of one node:
>Tasks: 143 total,  10 running, 133 sleeping,   0 stopped,   0 zombie
>Cpu0  : 99.7%us,  0.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu1  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu2  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu3  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu4  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu5  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu6  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Cpu7  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>Mem:   8044120k total,  6683720k used,  1360400k free,     1632k buffers
>Swap:  4192956k total,  2096476k used,  2096480k free,  1253712k cached
>
>(2) The PBS input file:
>#!/bin/sh
>#PBS -j oe
>#PBS -N pw
>#PBS -l nodes=1:ppn=8
>#PBS -q small
>cd $PBS_O_WORKDIR
>hostname
>/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out
>(3)
>wang@node22:~> netstat -s
>Ip:
>    1894215181 total packets received
>    0 forwarded
>    0 incoming packets discarded
>    1894215181 incoming packets delivered
>    979205769 requests sent out
>    30 fragments received ok
>    60 fragments created
>Icmp:
>    2 ICMP messages received
>    1 input ICMP message failed.
>    ICMP input histogram:
>        destination unreachable: 2
>    2 ICMP messages sent
>    0 ICMP messages failed
>    ICMP output histogram:
>        destination unreachable: 2
>IcmpMsg:
>        InType3: 2
>        OutType3: 2
>Tcp:
>    5662 active connections openings
>    9037 passive connection openings
>    68 failed connection attempts
>    1 connection resets received
>    18 connections established
>    1894049565 segments received
>    979043182 segments send out
>    284 segments retransmited
>    0 bad segments received.
>    55 resets sent
>Udp:
>    165614 packets received
>    0 packets to unknown port received.
>    0 packet receive errors
>    162301 packets sent
>    RcvbufErrors: 0
>    SndbufErrors: 0
>UdpLite:
>    InDatagrams: 0
>    NoPorts: 0
>    InErrors: 0
>    OutDatagrams: 0
>    RcvbufErrors: 0
>    SndbufErrors: 0
>TcpExt:
>    10 resets received for embryonic SYN_RECV sockets
>    ArpFilter: 0
>    5691 TCP sockets finished time wait in fast timer
>    25 time wait sockets recycled by time stamp
>    17369935 delayed acks sent
>    1700 delayed acks further delayed because of locked socket
>    18 packets directly queued to recvmsg prequeue.
>    8140 packets directly received from backlog
>    1422037027 packets header predicted
>    7 packets header predicted and directly queued to user
>    TCPPureAcks: 2794058
>    TCPHPAcks: 517887764
>    TCPRenoRecovery: 0
>    TCPSackRecovery: 56
>    TCPSACKReneging: 0
>    TCPFACKReorder: 0
>    TCPSACKReorder: 0
>    TCPRenoReorder: 0
>    TCPTSReorder: 0
>    TCPFullUndo: 0
>    TCPPartialUndo: 0
>    TCPDSACKUndo: 0
>    TCPLossUndo: 1
>    TCPLoss: 357
>    TCPLostRetransmit: 6
>    TCPRenoFailures: 0
>    TCPSackFailures: 0
>    TCPLossFailures: 0
>    TCPFastRetrans: 235
>    TCPForwardRetrans: 46
>    TCPSlowStartRetrans: 0
>    TCPTimeouts: 3
>    TCPRenoRecoveryFail: 0
>    TCPSackRecoveryFail: 0
>    TCPSchedulerFailed: 0
>    TCPRcvCollapsed: 0
>    TCPDSACKOldSent: 0
>    TCPDSACKOfoSent: 0
>    TCPDSACKRecv: 2
>    TCPDSACKOfoRecv: 0
>    TCPAbortOnSyn: 0
>    TCPAbortOnData: 0
>    TCPAbortOnClose: 0
>    TCPAbortOnMemory: 0
>    TCPAbortOnTimeout: 0
>    TCPAbortOnLinger: 0
>    TCPAbortFailed: 0
>    TCPMemoryPressures: 0
>    TCPSACKDiscard: 0
>    TCPDSACKIgnoredOld: 1
>    TCPDSACKIgnoredNoUndo: 0
>    TCPSpuriousRTOs: 0
>    TCPMD5NotFound: 0
>    TCPMD5Unexpected: 0
>IpExt:
>    InNoRoutes: 0
>    InTruncatedPkts: 0
>    InMcastPkts: 0
>    OutMcastPkts: 0
>    InBcastPkts: 0
>    OutBcastPkts: 0
>When I installed PWSCF, I only used the commands:
>./configure
>make all
>and the installation was successful.
>
>I don't know why it runs so slowly or how to solve this problem. Any advice will be appreciated!
>
>Best regards,
>Q. J. Wang
>XiangTan University
>
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: http://www.democritos.it/pipermail/pw_forum/attachments/20090923/f82797a3/attachment-0001.htm
>
>------------------------------
>
>Message: 2
>Date: Wed, 23 Sep 2009 10:45:51 +0200
>From: Giovanni Cantele <Giovanni.Cantele@na.infn.it>
>Subject: Re: [Pw_forum] how to improve the calculation speed?
>To: PWSCF Forum <pw_forum@pwscf.org>
>Message-ID: <4AB9E03F.7080600@na.infn.it>
>Content-Type: text/plain; charset=x-gbk; format=flowed
>
>wangqj1 wrote:
>>
>> Dear PWSCF users
>> When I use R and G parallelization to run a job, it behaves as if it is
>> waiting for input.
>
>What does it mean? Does it print the output header, or the output up to
>some point, or does nothing happen?

It only prints the output header; the iterations never start.
>
>> Following people's advice, I switched to k-point parallelization, which runs
>> correctly but far too slowly. The information I can offer is as follows:
>> (1) CPU usage of one node:
>> Tasks: 143 total, 10 running, 133 sleeping, 0 stopped, 0 zombie
>> Cpu0 : 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu1 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu2 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu3 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu4 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu5 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu6 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Cpu7 :100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>> Mem: 8044120k total, 6683720k used, 1360400k free, 1632k buffers
>> Swap: 4192956k total, 2096476k used, 2096480k free, 1253712k cached
>I'm not very expert at reading such information, but it seems that
>your node is swapping, maybe because the job requires too much
>memory with respect to what is available. This usually induces a huge
>performance degradation.
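
If the node really is swapping, that is easy to verify directly on the node while the job runs, using standard Linux tools (nothing QE-specific):

free -m     # memory and swap totals in MB; a large swap "used" figure is the warning sign
vmstat 2    # the si/so columns show ongoing swap-in/swap-out traffic, sampled every 2 s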
>
>In choosing the optimal number of nodes, processes per node, etc.,
>several factors should be taken into account: memory requirements,
>communication hardware, etc. You might want to have a look at this page
>from the user guide: http://www.quantum-espresso.org/user_guide/node33.html

We have 8 processes per node on our cluster:

model name      : Intel(R) Xeon(R) CPU    E5410  @ 2.33GHz
stepping        : 10
cpu MHz         : 2327.489
cache size      : 6144 KB

>Also, consider that, at least for CPU generations that are not very recent,
>using too many cores per CPU (e.g. if your cluster is configured with
>quad-core processors) might not improve, and might even worsen, the
>code's performance (this has also been reported in previous threads in this
>forum; you can search for them).
>
>This may also be of interest to you:
>http://www.quantum-espresso.org/wiki/index.php/Frequently_Asked_Questions#Why_is_my_parallel_job_running_in_such_a_lousy_way.3F

I am not the superuser, and I don't know how to set the environment variable OMP_NUM_THREADS to 1; I can't find where OMP_NUM_THREADS is defined.
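
Setting an environment variable needs no superuser rights: it is per-process, so it can simply be exported in the PBS script before pw.x starts. A minimal sketch, assuming a Bourne-style shell and that the variable meant by the FAQ is the standard OpenMP one, OMP_NUM_THREADS:

#!/bin/sh
#PBS -j oe
#PBS -N pw
#PBS -l nodes=1:ppn=8
#PBS -q small
cd $PBS_O_WORKDIR
# one OpenMP thread per MPI process, so 8 processes do not oversubscribe the 8 cores
export OMP_NUM_THREADS=1
/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 8 -in ZnO.pw.inp > ZnO.pw.out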
>
>> I don't know why it runs so slowly or how to solve this problem. Any
>> advice will be appreciated!
>
>Apart from better suggestions coming from more expert people, it would
>be important to see what kind of job you are trying to run. For example:
>did you start directly with a "production run" (many k-points and/or
>large unit cells and/or a large cut-off)? Did pw.x ever run on your
>cluster with simple jobs, like bulk silicon or any other (see the
>examples directory)?

I had already run the same input file on my single computer (4 CPUs); it runs well there.
>
>Another possibility would be starting with the serial executable
>(disabling parallelism at configure time) and then switching to parallel once
>you have checked that everything is working OK.
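
A minimal sketch of that route (assuming the --disable-parallel configure switch and the make targets documented in the QE user guide):

./configure --disable-parallel                # build a serial executable
make pw                                       # pw.x ends up in bin/
./bin/pw.x -in ZnO.pw.inp > ZnO.serial.out    # sanity-check the input serially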
>
>
>
>Unfortunately, in many cases the computation requires a lot of work to
>correctly set up and optimize the compilation, performance, etc. (not to
>speak of convergence issues in the results!).
>The only way is to try to isolate the problems and solve them one by one. Yet, I
>would say that in this respect quantum-espresso is one of the best
>choices, the code being made to work properly in as many cases as
>possible, rather than implementing all of human knowledge but only for
>those who wrote it!
>;-)
>
>Good luck,
>
>Giovanni
>
>
>--
>
>
>
>Dr. Giovanni Cantele
>Coherentia CNR-INFM and Dipartimento di Scienze Fisiche
>Universita' di Napoli "Federico II"
>Complesso Universitario di Monte S. Angelo - Ed. 6
>Via Cintia, I-80126, Napoli, Italy
>Phone: +39 081 676910
>Fax:   +39 081 676346
>E-mail: giovanni.cantele@cnr.it
>        giovanni.cantele@na.infn.it
>Web: http://people.na.infn.it/~cantele
>Research Group: http://www.nanomat.unina.it
>Skype contact: giocan74
>
>
>
>------------------------------
>
>Message: 3
>Date: Wed, 23 Sep 2009 10:50:48 +0200
>From: "Lorenzo Paulatto" <paulatto@sissa.it>
>Subject: Re: [Pw_forum] how to improve the calculation speed?
>To: Giovanni.Cantele@na.infn.it, "PWSCF Forum" <pw_forum@pwscf.org>
>Message-ID: <op.u0pb6yqfa8x26q@paulax>
>Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes
>
>On 23 September 2009 at 10:45:51, Giovanni Cantele
><Giovanni.Cantele@na.infn.it> wrote:
>> I'm not very expert at reading such information, but it seems that
>> your node is swapping, maybe because the job requires too much
>> memory with respect to what is available. This usually induces a huge
>> performance degradation.
>
>If this is the case, reducing the number of pools will reduce the amount
>of memory required per node.
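
For example, keeping the same 8 MPI processes but grouping them into 2 pools, so that within each pool the plane-wave data are distributed over 4 processes instead of being replicated (a sketch based on the mpirun line from the PBS script above):

/usr/local/bin/mpirun -np 8 -machinefile $PBS_NODEFILE /home/wang/bin/pw.x -npool 2 -in ZnO.pw.inp > ZnO.pw.out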
>
>cheers
>
>
>--
>Lorenzo Paulatto
>SISSA  &  DEMOCRITOS (Trieste)
>phone: +39 040 3787 511
>skype: paulatz
>www:   http://people.sissa.it/~paulatto/
>
>     *** save italian brains ***
>  http://saveitalianbrains.wordpress.com/
>
>
>------------------------------
>
>Message: 4
>Date: Wed, 23 Sep 2009 03:13:18 -0700 (PDT)
>From: ali kazempour <kazempoor2000@yahoo.com>
>Subject: [Pw_forum] write occupancy
>To: pw <pw_forum@pwscf.org>
>Message-ID: <432077.46189.qm@web112513.mail.gq1.yahoo.com>
>Content-Type: text/plain; charset="us-ascii"
>
>Hi,
>How can we force the code to print the occupancies in a simple scf run?
>I know about the partial DOS calculation, but I don't know whether another way also exists.
>Thanks a lot.
>
> Ali Kazempour
>Physics department, Isfahan University of Technology
>84156 Isfahan, Iran.            Tel-1:  +98 311 391 3733
>Fax: +98 311 391 2376      Tel-2:  +98 311 391 2375
>
>
>
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: http://www.democritos.it/pipermail/pw_forum/attachments/20090923/97a9ee58/attachment-0001.htm
>
>------------------------------
>
>Message: 5
>Date: Wed, 23 Sep 2009 12:46:45 +0200
>From: Prasenjit Ghosh <prasenjit.jnc@gmail.com>
>Subject: Re: [Pw_forum] write occupancy
>To: PWSCF Forum <pw_forum@pwscf.org>
>Message-ID:
>        <627e0ffa0909230346wbdf3399i1b2a48f4edfa9c65@mail.gmail.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>use verbosity='high'
>
>Prasenjit.
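
For reference, verbosity belongs in the &control namelist of the pw.x input; with verbosity='high' the eigenvalues and the corresponding occupations are written in the scf output. A minimal fragment to merge into an existing input, leaving everything else unchanged:

&control
    calculation = 'scf'
    verbosity   = 'high'   ! print eigenvalues and occupation numbers at each k-point
/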
>
>2009/9/23 ali kazempour <kazempoor2000@yahoo.com>
>
>> Hi,
>> How can we force the code to print the occupancies in a simple scf run?
>> I know about the partial DOS calculation, but I don't know whether another way
>> also exists.
>> Thanks a lot.
>>
>> Ali Kazempour
>> Physics department, Isfahan University of Technology
>> 84156 Isfahan, Iran. Tel-1: +98 311 391 3733
>> Fax: +98 311 391 2376 Tel-2: +98 311 391 2375
>>
>>
>> _______________________________________________
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://www.democritos.it/mailman/listinfo/pw_forum
>>
>>
>
>
>--
>PRASENJIT GHOSH,
>POST-DOC,
>ROOM NO: 265, MAIN BUILDING,
>CM SECTION, ICTP,
>STRADA COSTERIA 11,
>TRIESTE, 34104,
>ITALY
>PHONE: +39 040 2240 369 (O)
>       +39 3807528672 (M)
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: http://www.democritos.it/pipermail/pw_forum/attachments/20090923/bc129707/attachment.htm
>
>------------------------------
>
>_______________________________________________
>Pw_forum mailing list
>Pw_forum@pwscf.org
>http://www.democritos.it/mailman/listinfo/pw_forum
>
>
>End of Pw_forum Digest, Vol 27, Issue 74
>****************************************