Hi Bill,

I've thought about this and gotten as far as reading the section on the Phis in the Stampede user guide. I haven't actually tried it yet. I'm co-PI of an XSEDE allocation on Stampede, and if I can get them working that could mean more FLOPS per service unit and thus more science!

If I, or someone I know, gives the Phis a try with FLASH, I will let you know.

Sean

--------------------------------------------------------
Sean M. Couch, Ph.D.
Flash Center for Computational Science
Department of Astronomy & Astrophysics
The University of Chicago
5747 S Ellis Ave, Jo 315
Chicago, IL 60637
(773) 702-3899 - office
(512) 997-8025 - cell
www.flash.uchicago.edu/~smc

On Sep 18, 2014, at 8:51 AM, Bill Barth <bbarth@tacc.utexas.edu> wrote:

Thanks for this great description, Sean. Given that FLASH has hybrid OpenMP/MPI for most of the physics, have you tried running this in symmetric mode on the Phis on Stampede? I think it'd be a great study. I suspect you'd want to turn off I/O entirely on the MICs, but we'd be interested in talking about some of the options there as well.

Thanks,
Bill.
--
Bill Barth, Ph.D., Director, HPC
bbarth@tacc.utexas.edu | Phone: (512) 232-7069
Office: ROC 1.435 | Fax: (512) 475-9445


On 9/18/14, 8:43 AM, "Sean Couch" <smc@flash.uchicago.edu> wrote:

Sure thing.

This was for my custom core-collapse supernova application.* Functionally, it is nearly identical to the CCSN application (source/Simulation/SimulationMain/CCSN) packaged with the latest release of FLASH (v4.2.2), except that I'm using MHD rather than plain hydro. This setup uses the unsplit staggered mesh MHD solver, a detailed microphysical tabular EOS (source/physics/Eos/EosMain/Nuclear), the new multipole gravity solver (source/Grid/GridSolvers/Multipole_new; Couch, Graziani, & Flocke 2013, ApJ, 778, 181), approximate neutrino transport via a leakage scheme (source/physics/RadTrans/NeutrinoLeakage), and AMR via PARAMESH.

The scaling study was done on the BG/Q Mira at the Argonne Leadership Computing Facility. To control the number of AMR blocks per core, I use a custom version of Grid_markRefineDerefine.F90 that forces refinement up to the maximum level within a runtime-specified radius. This test employed hybrid parallelism, with AMR blocks distributed amongst the MPI ranks and OpenMP threading within block (i.e., the i,j,k loops are threaded). I used 24^3 zones per block (this reduces the fractional memory overhead of guardcells and the communication per rank per step). This application strong scales like a champ (Fig. 1 below), being fairly efficient down to ~4 AMR blocks per MPI rank.
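
The forced-refinement modification amounts to a few lines in the refinement-marking routine. A minimal sketch of the idea follows; it is not the actual custom file, it omits FLASH's standard error-estimator criteria and any leaf-block filtering, and the runtime parameter name "forced_refine_radius" is invented for illustration (the tree-module names lnblocks, lrefine, lrefine_max, refine, derefine, and coord follow the usual PARAMESH conventions, though they can differ between versions):

  !! Sketch only: force refinement to the maximum level inside a fixed radius.
  !! Not the actual production routine; "forced_refine_radius" is a made-up
  !! runtime parameter, and the standard refinement criteria are omitted.
  subroutine Grid_markRefineDerefine()

    use tree, ONLY : lnblocks, lrefine, lrefine_max, refine, derefine, coord
    use RuntimeParameters_interface, ONLY : RuntimeParameters_get

    implicit none

    real    :: rForce, r2
    integer :: blk

    call RuntimeParameters_get("forced_refine_radius", rForce)

    do blk = 1, lnblocks
       ! Squared distance of the block center from the origin (collapse center).
       r2 = coord(1,blk)**2 + coord(2,blk)**2 + coord(3,blk)**2

       if (r2 <= rForce**2) then
          ! Inside the forced region: keep refining until lrefine_max is
          ! reached and never allow derefinement.
          derefine(blk) = .false.
          if (lrefine(blk) < lrefine_max) refine(blk) = .true.
       end if
    end do

  end subroutine Grid_markRefineDerefine

In the real routine you would presumably combine this with the usual refinement tests and let PARAMESH enforce its level-jump constraints; the point is simply that the block count, and hence the number of blocks per rank, can be dialed in through the forced radius.
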
On hardware, Mira is a BG/Q with 16 cores per node, 1 GB of memory per core, and capable of 4 hardware threads per core. My application clocks in at a memory usage per rank of about 1200 MB (large EOS table, MHD has lots of extra face variables and scratch arrays, and my application defines a number of new grid variables). Thus, I have to run 8 MPI ranks per node in order to fit in memory. I therefore run with 8 OpenMP threads per MPI rank. This is not ideal; not every part of FLASH is threaded (I'm looking at you, Grid and I/O...). The heavy-lifting physics routines are threaded, though, and with 24^3 zones per block and within-block threading, the thread-to-thread speedup is acceptable even up to 8 threads.
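
To make "within-block threading" concrete, here is a generic toy version of the pattern rather than actual FLASH source (the array names and the per-zone update are placeholders): the outer k and j loops over a single 24^3 block are shared among the OpenMP threads, while the blocks themselves are distributed over the MPI ranks.

  !! Toy illustration of within-block OpenMP threading; not FLASH code.
  !! u and rhs are generic stand-ins for FLASH's solution arrays, and the
  !! per-zone "physics" is a placeholder for the real EOS/flux/source updates.
  program within_block_threading
    implicit none
    integer, parameter :: nxb = 24, nyb = 24, nzb = 24
    real :: u(nxb,nyb,nzb), rhs(nxb,nyb,nzb)
    integer :: i, j, k

    call random_number(u)

    !$omp parallel do collapse(2) default(shared) private(i,j,k)
    do k = 1, nzb
       do j = 1, nyb
          do i = 1, nxb
             ! Stand-in for the per-zone update done by the threaded solvers.
             rhs(i,j,k) = 0.5 * u(i,j,k)
          end do
       end do
    end do
    !$omp end parallel do

    print *, 'sum over block =', sum(rhs)
  end program within_block_threading

With 24^3 zones, each block exposes 24 x 24 = 576 (j,k) column sweeps to divide among the threads, which is part of why the per-block speedup holds up reasonably well out to 8 threads.
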
The big run (32,768 nodes, 524,288 cores) had 2,097,152 leaf blocks (~29 billion zones), 2,396,744 total blocks, and used 262,144 MPI ranks (thus 8 leaf blocks per MPI rank).

Note that Mira has an extremely fast communication fabric! YMMV on other systems. I have run a much smaller weak scaling study on TACC Stampede up to 4,096 cores and it is also essentially perfect, but I have yet to go to any significant core count on Stampede (see Fig. 2).

Hope this is helpful and informative!

Sean

Fig. 1 - Strong scaling of the core-collapse SN application on Mira

Fig. 2 - Weak scaling of the FLASH CCSN application on TACC Stampede

* - This particular simulation was started substantially "post-bounce" in the parlance of the CCSN community, so the shock was at a moderate radius and the neutrino leakage treatment was active. The initial progenitor model packaged with FLASH's CCSN application is at the pre-bounce, collapse phase. Therefore, if you want to run this scaling test yourself, you will have to generate post-bounce initial conditions by running the 1D CCSN application to an adequate post-bounce time, then converting those 1D results into the ASCII format used by the 1D initial-conditions reader.

--------------------------------------------------------
Sean M. Couch, Ph.D.
Flash Center for Computational Science
Department of Astronomy & Astrophysics
The University of Chicago
5747 S Ellis Ave, Jo 315
Chicago, IL 60637
(773) 702-3899 - office
(512) 997-8025 - cell
www.flash.uchicago.edu/~smc


On Sep 18, 2014, at 5:23 AM, Richard Bower <r.g.bower@durham.ac.uk> wrote:

I'm very keen to see this too (although I've not been running anything big with flash)... could you say something about the memory per core/node? This could be very useful for our next procurement... Richard

On 18 Sep 2014, at 07:46, Stefanie Walch wrote:

Hi Sean,

Could you tell me which setup you used for the nice scaling plot you sent around?

Cheers,
Stefanie
===================================
Prof. Dr. Stefanie Walch
Physikalisches Institut I
Universität zu Köln
Zülpicher Straße 77
50937 Köln
Germany
email: walch@ph1.uni-koeln.de
phone: +49 (0) 221 4703497

On 17 Sep 2014, at 20:41, Sean Couch <smc@flash.uchicago.edu> wrote:

For fun, let's play throwdown. Can anybody beat 525k cores (2 million threads of execution)? See attached (1 Mira node = 16 cores).

Sean

<wkScaling.pdf>

--------------------------------------------------------
Sean M. Couch
Flash Center for Computational Science
Department of Astronomy & Astrophysics
The University of Chicago
5747 S Ellis Ave, Jo 315
Chicago, IL 60637
(773) 702-3899 - office
www.flash.uchicago.edu/~smc

On Sep 17, 2014, at 1:30 PM, Rodrigo Fernandez <rafernan@berkeley.edu> wrote:

Dear FLASH Users/Developers,

Does anybody know the maximum number of cores that FLASH has ever been run successfully with? Any reference for this? I need the information for a computing proposal.

Thanks!

Rodrigo

------------------------------------------------------------------------
Prof. Richard Bower            Institute for Computational Cosmology
                               University of Durham
+44-191-3343526                r.g.bower@durham.ac.uk
------------------------------------------------------------------------