[FLASH-USERS] FLASH Scaling Data

Hix, William Raphael raph at ornl.gov
Mon May 7 12:29:31 EDT 2018


For the record, most centers have a mechanism in place for new users to generate such scaling tests.  For example, this is one of the reasons why director’s discretionary time is available at many centers.

Raph

________________________________

From: Sean M. Couch <couch at pa.msu.edu>
Date: May 7, 2018 at 11:34:30 AM EDT
To: Jason Galyardt <jason.galyardt at gmail.com>, flash-users <flash-users at flash.uchicago.edu>
Subject: Re: [FLASH-USERS] FLASH Scaling Data

Hi Jason et al.,

To throw more fuel on the fire, I agree that major allocation programs will want performance data for your specific application on their specific resource. And I think of FLASH as a simulation _framework_, not a single code; there is a near-infinite number of applications that can be built from it, given all the possible permutations of physics, solvers, grids, etc.

That said, as an example (and at the risk of revealing all of our top secrets, gasp!), I’ve made our most recent successful INCITE proposal using FLASH public: https://github.com/smcouch/INCITE_2018. Of particular interest to you will be the computational readiness section, where we show scaling on both Mira and Theta at ALCF out to the full size of each platform. The Mira weak scaling plot is attached. Now, these results will not be directly applicable to you, because they are for our specific neutrino transport + MHD application, and I wrote both of those solvers from scratch in FLASH and have not made them public...yet :)
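In case it helps anyone writing a similar computational readiness section: weak scaling just means holding the work per rank fixed while growing the rank count N, and reporting the efficiency E(N) = t(N0)/t(N) relative to the smallest run. A minimal Python sketch of that arithmetic follows; the rank counts and timings below are made-up placeholders, not our Mira numbers:

# Weak scaling: work per rank held fixed while the rank count grows.
# Efficiency relative to the smallest run: E(N) = t(N0) / t(N).
# The (ranks, seconds-per-step) pairs are illustrative placeholders,
# NOT measurements -- substitute your own timings.
runs = [(512, 10.0), (4096, 10.4), (32768, 11.1), (262144, 12.3)]

_, base_time = runs[0]
for ranks, step_time in runs:
    eff = base_time / step_time
    print(f"{ranks:>8d} ranks: {step_time:6.2f} s/step, efficiency {eff:6.1%}")

Reviewers generally want to see that efficiency stay near 1 out to the largest partition you propose to use.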

Hope this helps (you, and others)!

Sean



--------------------------------------------------------------------------------------------

Sean M. Couch

Assistant Professor

Department of Physics and Astronomy

Department of Computational Mathematics, Science, and Engineering

National Superconducting Cyclotron Laboratory/Facility for Rare Isotope Beams

Michigan State University

567 Wilson Rd, 3250 BPS

East Lansing, MI 48824

(517) 884-5035 | couch at pa.msu.edu | www.pa.msu.edu/~couch

On May 7, 2018, 10:34 AM -0400, Messer, Bronson <bronson at ornl.gov>, wrote:
Let me reinforce Bob’s comments: as an example, for the DOE SC INCITE program, any proposal that referred to previous scaling results without presenting scaling and performance data for the specific problem being proposed would be deemed “not computationally ready.” Such a proposal would be significantly disadvantaged relative to other proposals.

Bronson

On May 7, 2018, at 9:32 AM, Robert Fisher <rfisher1 at umassd.edu> wrote:

Dear Jason :

  The gold standard is to include your own metrics on as similar a platform as possible -- ideally the one you are proposing time on. This is because scaling depends not just upon the code you are using, but also, of course, upon the hardware and the software library stacks implemented on it. As a result, hard experience teaches that just because a code scales on one platform does not mean it will scale on another, even with an identical setup.

  That said, if the proposal is for a relatively small amount of time on a relatively small number of cores, and if the physics is primarily hydrodynamics, referring to the studies that Lynn mentions might suffice to convince your reviewers. However, even in those instances, I'd strongly recommend including your own statistics if at all possible, since it shows the review committee that you've done your homework. FLASH has built-in coarse-grained timing diagnostics output at the end of each complete run, so it's not that difficult to gather these.
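  As a rough sketch of what I mean, the snippet below scrapes the coarse timer totals out of a FLASH log file. The timer summary's column layout varies between FLASH versions, so the pattern here is an assumption you will need to adapt to your own log:

import re
import sys

# FLASH prints a coarse-grained timer summary near the end of the log
# file it writes for each run. The exact column layout differs between
# FLASH versions, so treat this pattern as a guess and adjust it.
# Assumed line shape:  <timer name>   <total seconds>   <num calls> ...
TIMER_LINE = re.compile(r"^\s*([A-Za-z][\w /+-]*?)\s{2,}(\d+\.\d+)\s+(\d+)")

def read_timers(log_path):
    """Return {timer name: total seconds} scraped from a FLASH log."""
    timers = {}
    with open(log_path) as log:
        for line in log:
            match = TIMER_LINE.match(line)
            if match:
                name, seconds, _ncalls = match.groups()
                timers[name.strip()] = float(seconds)
    return timers

if __name__ == "__main__":
    for name, secs in sorted(read_timers(sys.argv[1]).items(),
                             key=lambda kv: -kv[1]):
        print(f"{name:30s} {secs:10.2f} s")

  Running something like this over the logs from a handful of runs at different core counts gives you the raw numbers for a basic scaling plot.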

  Best wishes,

  Bob

On Sun, May 6, 2018 at 11:05 PM Jason Galyardt <jason.galyardt at gmail.com> wrote:
Hi all,

I'm working on a proposal for a compute time allocation at a major cluster facility. The proposal guidelines request quantitative evidence concerning the parallel performance, stability, and scalability of FLASH. Network and I/O bandwidth benchmarks are also requested. I realize that there is quite a bit of variability in these performance metrics depending on the particular simulation, the physics included, the solvers used, etc. However, I have seen some old (FLASH 2 era) scalability studies; how transferable are such studies? Is it necessary to profile one's own simulation for such proposals? If so, are there any recommended profiling tools / procedures, aside from those included in FLASH?

Thanks,

Jason

--------------
Jason Galyardt, PhD
University of Georgia
--
Dr. Robert Fisher
Associate Professor and Graduate Program Director
Physics Department
University of Massachusetts Dartmouth
285 Old Westport Road
North Dartmouth, Ma. 02740
http://www.novastella.org
robert.fisher at umassd.edu
