[FLASH-USERS] Typo in unsplit_rad Hydro_computeDT

Mark Richardson mark.richardson.work at gmail.com
Thu Mar 3 12:27:07 EST 2016


Hi Klaus, and others listening. Thanks for your help so far.

   I've investigated a bit further. Some confusing things are happening.

  I made a new build that should amount to the same initial conditions:
density set everywhere to 4 g/cc, tele=trad=100K, a box spanning
[1.5e7,-3.e8,-1.5e8]:[6.15e8,3.e8,1.5e8], lrefine_min=lrefine_max=4, and
NXB=NYB=16, NZB=8. To rule out a bad compile at too high an optimization
level, I used -O0 throughout for this new build (the build in my previous
message used -O3).
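
Roughly, in flash.par terms, the domain and refinement of the new build are
the following (illustrative excerpt; the simulation-specific density and
temperature parameters are omitted here):

  xmin = 1.5e7
  xmax = 6.15e8
  ymin = -3.e8
  ymax = 3.e8
  zmin = -1.5e8
  zmax = 1.5e8
  lrefine_min = 4
  lrefine_max = 4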

  Looking at the physical quantities in the 0000 checkpoint file,
everything was the same in this new build compared with the old run,
except for gpot. In the original build posted before (based on my existing
code, which overwrites every cell at init with rho=4, T=100) I was getting
gpot = -3.06777770730863e+27 in the first cell, and it doesn't vary much
(dgpot/gpot ~ 1.e-10).

  In my new build I was getting gpot = 2760347382.52056, and it varied
strongly from cell to cell.

  A quick calculation, replacing each cell with a point mass, suggests the
latter is close to what I'd expect.
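
(By "quick calculation" I just mean summing point-mass contributions over
the cells at one probe location, along the lines of the sketch below; the
grid size and cell spacing there are placeholders, not the exact values
from my setup.)

  ! Rough order-of-magnitude check: treat every cell as a point mass and
  ! sum -G*m/r at one probe location. Grid size and spacing below are
  ! illustrative placeholders only.
  program pointmass_estimate
    implicit none
    integer, parameter :: n   = 32          ! cells per side (placeholder)
    real(8), parameter :: G   = 6.674d-8    ! cgs
    real(8), parameter :: dx  = 1.0d7       ! cell size in cm (placeholder)
    real(8), parameter :: rho = 4.0d0       ! g/cc, as in the runs
    real(8) :: phi, m, r, x0, y0, z0
    integer :: i, j, k

    m   = rho*dx**3                               ! mass of one cell
    x0  = 0.5d0*dx ; y0 = 0.5d0*dx ; z0 = 0.5d0*dx ! probe: first cell centre
    phi = 0.0d0
    do k = 1, n
       do j = 1, n
          do i = 1, n
             r = sqrt( ((i-0.5d0)*dx - x0)**2 + &
                       ((j-0.5d0)*dx - y0)**2 + &
                       ((k-0.5d0)*dx - z0)**2 )
             if (r > 0.0d0) phi = phi - G*m/r     ! skip the self-term
          end do
       end do
    end do
    print *, 'estimated gpot at first cell:', phi
  end program pointmass_estimate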

In both of these cases the gravitational boundary condition was set to
isolated. However, I noticed that in my new build the x and y hydro
boundaries were periodic, while in my original run all boundaries were
diode. So I reran the new build with all boundaries diode, and now I get
+5.97937076239123e26. This seemed really odd: why would changing the fluid
boundaries, but not the gravitational boundaries, the density, or the cell
spacing, change the gravitational potential? Also, once the potential was
at this magnitude, HYPRE suffered from the non-convergence problem again.
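
For concreteness, the boundary settings involved are of this form (an
illustrative flash.par excerpt, parameter names as in FLASH 4.3; only the
hydro boundaries differed between the two runs):

  grav_boundary_type = "isolated"   # identical in both runs

  # hydro boundaries: "periodic" on x/y in the new build,
  # "diode" on all six faces in the original run
  xl_boundary_type = "diode"
  xr_boundary_type = "diode"
  yl_boundary_type = "diode"
  yr_boundary_type = "diode"
  zl_boundary_type = "diode"
  zr_boundary_type = "diode"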

I then reran my original run with -O0 (instead of -O3 which I had
originally) and now it also gets +5.97937076239123e26.

I then reran my new build with -O3 (instead of -O0) and now it gets
-3.06777770730863e+27.

So: once everything is set up the same, the two builds agree with each
other, whether compiled with -O0 or -O3. However:

  1) Why would changing only the hydro boundary conditions impact gpot
(and by ~20 orders of magnitude)?
  2) Has anyone else seen the optimization level change the gpot solution?
  3) Clearly these values are much larger than expected: as I said, a
simple N-body calculation gets me around -1e11 for the potential. Why is
this? And does it explain the previous issues I was seeing with the large
HYPRE RHS?

Cheers,
   -Mark

PS:  Other relevant information:

Number of MPI tasks:                 32
 MPI version:                          3
 MPI subversion:                       0
 Dimensionality:                       3
 Max Number of Blocks/Proc:          400
 Number x zones:                      16
 Number y zones:                      16
 Number z zones:                       8
System info:
Linux login3.stampede.tacc.utexas.edu 2.6.32-431.17.1.el6.x86_64 #1 SMP Wed May
 Version:
 FLASH 4.3_release


FLASH Units used:
   Driver/DriverMain/Unsplit
   Grid/GridAllocatableScratches
   Grid/GridBoundaryConditions
   Grid/GridMain/paramesh/interpolation/Paramesh4/prolong
   Grid/GridMain/paramesh/interpolation/prolong
   Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/headers
   Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/mpi_source
   Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/source
   Grid/GridMain/paramesh/paramesh4/Paramesh4dev/PM4_package/utilities/multigrid
   Grid/GridSolvers/HYPRE/paramesh
   Grid/GridSolvers/IsoBndMultipole
   Grid/GridSolvers/Multigrid/fft
   IO/IOMain/hdf5/serial/PM
   Multispecies/MultispeciesMain
   PhysicalConstants/PhysicalConstantsMain
   RuntimeParameters/RuntimeParametersMain
   Simulation/SimulationMain/ChondruleMappingMSMT_NoTill_ConstOpac
   flashUtilities/contiguousConversion
   flashUtilities/general
   flashUtilities/interpolation/oneDim
   flashUtilities/nameValueLL
   flashUtilities/sorting/quicksort
   flashUtilities/system/memoryUsage/legacy
   monitors/Logfile/LogfileMain
   monitors/Timers/TimersMain/MPINative
   physics/Diffuse/DiffuseMain/Unsplit
   physics/Eos/EosMain/multiTemp/MatRad3/Multigamma
   physics/Eos/EosMain/multiTemp/Multigamma
   physics/Gravity/GravityMain/Poisson/Multigrid
   physics/Hydro/HydroMain/unsplit_rad/Hydro_Unsplit
   physics/Hydro/HydroMain/unsplit_rad/multiTemp
   physics/RadTrans/RadTransMain/MGD
   physics/materialProperties/Conductivity/ConductivityMain/Constant
   physics/materialProperties/Opacity/OpacityMain/Constcm2g

Setup options: -auto -3d -nxb=16 -nyb=16 -nzb=8 -maxblocks=400
species=rock,watr +uhd3tr mgd_meshgroups=1


On 19 February 2016 at 00:15, Klaus Weide <klaus at flash.uchicago.edu> wrote:

> Hello Mark,
>
> Some further comments on your last message below.
>
>
> On Mon, 15 Feb 2016, Mark Richardson wrote:
>
>  .....
> >
> > When I was filling the guard cells, this was accomplished in
> > Simulation_initBlock, using the exact same method as filling the rest of
> > the cells. Thus, using *gcell = .true.*, the coordinates of all cells were
> > found with:
> >
> > ! get the integer index information for the current block
> >   call Grid_getBlkIndexLimits(blockId,blkLimits,blkLimitsGC)
> >
> >   ! Determine size of block, including guard cells
> >   sizeX = blkLimitsGC(HIGH,IAXIS) - blkLimitsGC(LOW,IAXIS) + 1
> >   sizeY = blkLimitsGC(HIGH,JAXIS) - blkLimitsGC(LOW,JAXIS) + 1
> >   sizeZ = blkLimitsGC(HIGH,KAXIS) - blkLimitsGC(LOW,KAXIS) + 1
> >
> >   ! allocate memory for iCen variables
> >   allocate(xCen(sizeX),stat=istat)
> >   allocate(yCen(sizeY),stat=istat)
> >   allocate(zCen(sizeZ),stat=istat)
> >
> >   ! Initialize
> >   xCen = 0.0d0
> >   yCen = 0.0d0
> >   zCen = 0.0d0
> >
> >   ! Get interior cell coordinates
> >   if (NDIM == 3) &
> >     call Grid_getCellCoords(KAXIS, blockId, CENTER ,gcell, zCen, sizeZ)
> >   if (NDIM >= 2) &
> >     call Grid_getCellCoords(JAXIS, blockId, CENTER,gcell, yCen, sizeY)
> >   call Grid_getCellCoords(IAXIS, blockId, CENTER,gcell, xCen, sizeX)
> >
> >   ! Get lower and upper coordinates
> >   blkLow(1)   = xCen(1)
> >   blkHigh(1) = xCen(sizeX)
> >
> >   blkLow(2)   = yCen(1)
> >   blkHigh(2) = yCen(sizeY)
> >
> >   blkLow(3)   = zCen(1)
> >   blkHigh(3) = zCen(sizeZ)
> >
> > ...
> >
> > Then later, I fill each cell in the loop led with:
> >
> >   do k = blkLimitsGC(LOW,KAXIS),blkLimitsGC(HIGH,KAXIS)
> >      do j = blkLimitsGC(LOW,JAXIS),blkLimitsGC(HIGH,JAXIS)
> >         do i = blkLimitsGC(LOW,IAXIS),blkLimitsGC(HIGH,IAXIS)
>
> Yes, that looks correct (if gcell==.true.). However, ...
>
> > Originally *gcell* was *.false.* and the *size{X,Y,Z}* variables were set
> > by the non-GC versions, and then the *k,j,i* loops were over the non GC
> > versions. By switching to the GC version, the small CFL went away.
>
> When you replace blkLimitsGC with blkLimits everywhere (after the
> Grid_getBlkIndexLimits call), and set gcell=.false., to restore your
> "original" code, this may become wrong.  It depends on whether you use
> the xCen / yCen / zCen arrays anywhere in your loop, indexed by i / j / k.
> That's because
>
>  * with blkLimitsGC (gcell=.true.) : xCen(1) corresponds to blkData(1,j,k), etc.
>  * with blkLimits   (gcell=.false.): xCen(1) corresponds to blkData(5,j,k), etc.
>    (assuming NGUARD=4).
>
> So you would have to replace any xCen(i) in the loop body with
> xCen(i-NGUARD), etc.
>
> (Alternatively, change the bounds of the allocated coordinate arrays
> so that the numbering agrees with that used by the loop indices i,j,k -
> i.e., replace
>
> >   allocate(xCen(sizeX),stat=istat)
>
> with
>
> >   allocate(xCen(blkLimits(LOW,IAXIS):blkLimits(HIGH,IAXIS)))
>
> etc., and make the corresponding changes where you set your blkLow and blkHigh.)
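>
> Putting those pieces together, the gcell = .false. variant might look
> roughly like this for the x direction (an untested sketch; y and z are
> analogous):
>
> >   call Grid_getBlkIndexLimits(blockId,blkLimits,blkLimitsGC)
> >
> >   ! Allocate so that the array index matches the loop index i
> >   allocate(xCen(blkLimits(LOW,IAXIS):blkLimits(HIGH,IAXIS)),stat=istat)
> >   xCen = 0.0d0
> >   call Grid_getCellCoords(IAXIS, blockId, CENTER, .false., xCen, &
> >        blkLimits(HIGH,IAXIS)-blkLimits(LOW,IAXIS)+1)
> >
> >   blkLow(1)  = xCen(blkLimits(LOW,IAXIS))
> >   blkHigh(1) = xCen(blkLimits(HIGH,IAXIS))
> >
> >   do i = blkLimits(LOW,IAXIS), blkLimits(HIGH,IAXIS)
> >      ! ... use xCen(i) here: it is now the coordinate of interior cell i
> >   end do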
>
> Maybe this is obvious to you, but I thought it might be related to why
> your original version was misbehaving.
>
>
>
> > I'm having one other problem, which is maybe related. Originally I was
> > running this with a self-made EOS (Tillotson for planetary bodies, which
> > includes radiative transfer). I was getting the following errors
> > immediately after initialisation:
> >
> >  [ 02-15-2016  09:57:48.810 ] [gr_hypreInit]: HYPRE_RELEASE_NAME=hypre, HYPRE_RELEASE_VERSION=2.10.0b, HYPRE_RELEASE_DATE=2015/01/22
>
> .....
>
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: ierr=            0
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: component=           0
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: converged= F
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2=  1.2520E+35
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: num_iterations=         500
> >  [ 02-15-2016  09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm=   88143026.2676523
>
> Not sure what is going on here.
>
> Hopefully it's not the HYPRE version (I think we still haven't tested with
> 2.10.0b).
>
> The high |initial RHS|^2 value may indicate that you are feeding HYPRE
> an unphysically large r.h.s. (the B in A X = B); whether 1.2520E+35
> should be considered unphysical depends on your problem (initial
> conditions, number of cells...).
>
>
> > Where hypre isn't converging. I'm not sure why this would be, as I
> > eventually tested it with a uniform density and temperature (with
> > temp_ele = temp_rad) for the MatRad3 case of no ions. Also, I'm surprised
> > Flash doesn't crash when non-convergence occurs in the hypre grid solver.
>
> Maybe FLASH should abort.  But sometimes we have seen failures in HYPRE
> be transitory, and disappear after a few time steps.  Severe and
> persistent problems usually cause a serious blowup in some variable,
> which will normally not go undetected for long.
>
> > I've now tested it with the MatRad3 Multigamma EOS, and get the same
> > error, so I'm unsure how this would be related to my EOS. Has anyone had
> > similar issues with the hypre grid solver?
> >
> > These may be the most relevant parts of my flash.par file:
> > # Flux control
> > hy_fPresInMomFlux = 1.0      # Percentage of pressure gradient for the
>
> You should set the above to 0.0 if you are using the unsplit_rad Hydro
> code.
> (If not, leave it at 1.0.)
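>
> I.e., for the unsplit_rad code the line quoted above would become simply:
>
> >   hy_fPresInMomFlux = 0.0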
>
> If you are using the MatRad3 EOS but with the "unsplit" (not _rad) Hydro
> code, you are testing a combination that is MEANT to work, but we haven't
> tested it much.
>
>
> > ppmEintFluxConstructionMeth     = 0
> > ppmEintCompFluxConstructionMeth = 0 #(or 1) #Advect like any old mass scalar
> > ppmEnerFluxConstructionMeth     = 0
>
> Those three don't matter.
>
> Klaus
>



-- 

Mark Richardson
Beecroft Postdoctoral Fellow, Department of Physics
Denys Wilkinson Building - 555F
University of Oxford
Mark.Richardson at physics.ox.ac.uk
+44 1865 280793

