[FLASH-USERS] Typo in unsplit_rad Hydro_computeDT
Mark Richardson
mark.richardson.work at gmail.com
Mon Feb 15 14:27:13 EST 2016
Hi Klaus, thank you for your help.
OK, so that explains why it was affecting us. We are using a
rectangular grid, with NZB > NYB = NXB.
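For concreteness, the logical block sizes are fixed at setup time with the
-nxb/-nyb/-nzb flags, so the configuration was along these lines (the
simulation name and the sizes shown here are placeholders, not our real
values):
./setup MySimulation -auto -3d -nxb=8 -nyb=8 -nzb=16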
Regarding the small CFL time: I was not using a local CFL factor, nor did I
change hy_cflStencil. Strange indeed!
When I was filling the guard cells, this was done in Simulation_initBlock,
using exactly the same method as for the rest of the cells. That is, with
gcell = .true., the coordinates of all cells were found with:
! get the integer index information for the current block
call Grid_getBlkIndexLimits(blockId, blkLimits, blkLimitsGC)
! determine the size of the block, including guard cells
sizeX = blkLimitsGC(HIGH,IAXIS) - blkLimitsGC(LOW,IAXIS) + 1
sizeY = blkLimitsGC(HIGH,JAXIS) - blkLimitsGC(LOW,JAXIS) + 1
sizeZ = blkLimitsGC(HIGH,KAXIS) - blkLimitsGC(LOW,KAXIS) + 1
! allocate memory for the cell-coordinate arrays
allocate(xCen(sizeX), stat=istat)
allocate(yCen(sizeY), stat=istat)
allocate(zCen(sizeZ), stat=istat)
! initialize
xCen = 0.0d0
yCen = 0.0d0
zCen = 0.0d0
! get cell-center coordinates (guard cells included, since gcell = .true.)
if (NDIM == 3) &
     call Grid_getCellCoords(KAXIS, blockId, CENTER, gcell, zCen, sizeZ)
if (NDIM >= 2) &
     call Grid_getCellCoords(JAXIS, blockId, CENTER, gcell, yCen, sizeY)
call Grid_getCellCoords(IAXIS, blockId, CENTER, gcell, xCen, sizeX)
! get lower and upper coordinates of the block
blkLow(1)  = xCen(1)
blkHigh(1) = xCen(sizeX)
blkLow(2)  = yCen(1)
blkHigh(2) = yCen(sizeY)
blkLow(3)  = zCen(1)
blkHigh(3) = zCen(sizeZ)
...
Then later, I fill each cell in loops that begin with:
do k = blkLimitsGC(LOW,KAXIS), blkLimitsGC(HIGH,KAXIS)
   do j = blkLimitsGC(LOW,JAXIS), blkLimitsGC(HIGH,JAXIS)
      do i = blkLimitsGC(LOW,IAXIS), blkLimitsGC(HIGH,IAXIS)
Originally gcell was .false., the size{X,Y,Z} variables were computed from
the non-GC limits (blkLimits), and the k,j,i loops ran over the non-GC index
ranges. By switching everything to the guard-cell-inclusive versions, the
small CFL timestep went away.
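For comparison, the original interior-only version was roughly the following
(a sketch from memory, not the verbatim code):
gcell = .false.
! block size *without* guard cells
sizeX = blkLimits(HIGH,IAXIS) - blkLimits(LOW,IAXIS) + 1
sizeY = blkLimits(HIGH,JAXIS) - blkLimits(LOW,JAXIS) + 1
sizeZ = blkLimits(HIGH,KAXIS) - blkLimits(LOW,KAXIS) + 1
! interior cell-center coordinates only
call Grid_getCellCoords(IAXIS, blockId, CENTER, gcell, xCen, sizeX)
! and the fill loops covered only the interior:
do k = blkLimits(LOW,KAXIS), blkLimits(HIGH,KAXIS)
   do j = blkLimits(LOW,JAXIS), blkLimits(HIGH,JAXIS)
      do i = blkLimits(LOW,IAXIS), blkLimits(HIGH,IAXIS)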
I'm having one other problem, which may be related. Originally I was running
this with a self-made EOS (Tillotson for planetary bodies, which includes
radiative transfer). I was getting the following errors immediately after
initialisation:
[ 02-15-2016 09:57:48.810 ] [gr_hypreInit]: HYPRE_RELEASE_NAME=hypre, HYPRE_RELEASE_VERSION=2.10.0b, HYPRE_RELEASE_DATE=2015/01/22
[ 02-15-2016 09:57:48.812 ] memory: /proc vsize (MB): 2006.58 (min) 2008.35 (max) 2007.17 (avg)
[ 02-15-2016 09:57:48.813 ] memory: /proc rss (MB): 1284.28 (min) 1286.47 (max) 1285.21 (avg)
[ 02-15-2016 09:57:48.815 ] memory: /proc vsize (MB): 2006.58 (min) 2008.35 (max) 2007.17 (avg)
[ 02-15-2016 09:57:48.816 ] memory: /proc rss (MB): 1284.37 (min) 1286.56 (max) 1285.29 (avg)
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: component= 0
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: converged= F
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
[ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 88143026.2676523
[ 02-15-2016 09:57:58.841 ] memory: /proc vsize (MB): 2022.99 (min) 2028.98 (max) 2025.35 (avg)
[ 02-15-2016 09:57:58.843 ] memory: /proc rss (MB): 1307.27 (min) 1318.30 (max) 1313.13 (avg)
[ 02-15-2016 09:57:59.569 ] [IO_writeCheckpoint] open: type=checkpoint name=Run_8000_Lmax6_RT_MG_hdf5_chk_0000
[ 02-15-2016 09:58:00.982 ] [IO_writeCheckpoint] close: type=checkpoint name=Run_8000_Lmax6_RT_MG_hdf5_chk_0000
[ 02-15-2016 09:58:00.987 ] [IO_writePlotfile] open: type=plotfile name=Run_8000_Lmax6_RT_MG_hdf5_plt_cnt_0000
[ 02-15-2016 09:58:01.240 ] [IO_writePlotfile] close: type=plotfile name=Run_8000_Lmax6_RT_MG_hdf5_plt_cnt_0000
[ 02-15-2016 09:58:01.315 ] memory: /proc vsize (MB): 2022.99 (min) 2028.98 (max) 2025.48 (avg)
[ 02-15-2016 09:58:01.320 ] memory: /proc rss (MB): 1307.77 (min) 1321.35 (max) 1313.81 (avg)
[ 02-15-2016 09:58:01.322 ] [Driver_evolveFlash]: Entering evolution loop
[ 02-15-2016 09:58:01.323 ] step: n=1 t=0.000000E+00 dt=1.000000E-10
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: component= 0
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: converged= F
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
[ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 7426997.22777419
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: converged= F
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
[ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 7426997.22777419
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: converged= F
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
[ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 485.264327133202
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: converged= F
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
[ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 485.264327133202
[ 02-15-2016 09:58:43.752 ] step: n=2 t=2.000000E-10 dt=1.000000E-10
So HYPRE isn't converging. I'm not sure why this would be, since I
eventually tested it with a uniform density and temperature (with temp_ele
= temp_rad) for the MatRad3 case with no ions. I'm also surprised that FLASH
doesn't abort when non-convergence occurs in the HYPRE grid solver.
I've now tested it with the MatRad3 Multigamma EOS and get the same error,
so I'm unsure how this could be related to my EOS. Has anyone had similar
issues with the HYPRE grid solver?
These may be the most relevant parts of my flash.par file:
# Flux control
hy_fPresInMomFlux = 1.0  # Percentage of pressure gradient for the momentum equation that is handled in momentum fluxes
ppmEintFluxConstructionMeth = 0
ppmEintCompFluxConstructionMeth = 0 #(or 1) #Advect like any old mass scalar
ppmEnerFluxConstructionMeth = 0
# EOS PARAMETERS
eosMode = "dens_ie_gather"
eosModeInit = "dens_ie_gather"
hy_eosModeGc = "dens_ie_gather"
hy_eosModeAfter = "dens_ie_gather"
# Boundary conditions
# FLUIDS
xl_boundary_type = "diode"
xr_boundary_type = "diode"
yl_boundary_type = "diode"
yr_boundary_type = "diode"
zl_boundary_type = "diode"
zr_boundary_type = "diode"
# NOTE: FLASH4 doesn't have the best set of radiation boundary
# conditions available. They are 'reflecting/neumann', 'dirichlet', and 'vacuum':
#   'neumann' or 'reflecting' : zero flux at boundary
#   'dirichlet'               : fixed value at the boundary (set below as radiation temperature)
#   'vacuum'                  : flux across boundary is twice the boundary energy value
rt_mgdXlBoundaryType = "reflecting"
rt_mgdXrBoundaryType = "reflecting"
rt_mgdYlBoundaryType = "reflecting"
rt_mgdYrBoundaryType = "reflecting"
rt_mgdZlBoundaryType = "reflecting"
rt_mgdZrBoundaryType = "reflecting"
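If it would help to diagnose this, I can also experiment with the HYPRE
runtime parameters; something along these lines is what I had in mind. The
parameter names (gr_hyprePrintSolveInfo, gr_hypreMaxIter, gr_hypreRelTol,
gr_hypreSolverType, gr_hyprePCType) are as I recall them from the FLASH4
users guide, so please treat the names and values as approximate rather than
what I am currently running:
# HYPRE solver settings to try varying (illustrative values only)
gr_hyprePrintSolveInfo = .true.       # report more detail from gr_hypreSolve
gr_hypreMaxIter        = 10000        # iteration cap (the runs above stop at 500)
gr_hypreRelTol         = 1.0e-8       # relative convergence tolerance
gr_hypreSolverType     = "HYPRE_PCG"  # solver
gr_hyprePCType         = "HYPRE_AMG"  # preconditioner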
Thank you again for your help,
-Mark
On 9 February 2016 at 17:43, Klaus Weide <klaus at flash.uchicago.edu> wrote:
> On Mon, 8 Feb 2016, Mark Richardson wrote:
>
> > > I'm pretty sure there's a bug in the unsplit_rad Hydro_computeDt:
> > > source/physics/Hydro/HydroMain/unsplit_rad/Hydro_computeDt.F90
> > >
> > > Line 241 reads:
> > > 241: if (NDIM > 2) dt_ltemp = max(dt_ltemp,(abs(U(VELZ_VAR,i,j,k)-uzgrid(j))+sqrt(cfz2))*delzinv)
> > >
> > > But should be:
> > > 241: if (NDIM > 2) dt_ltemp = max(dt_ltemp,(abs(U(VELZ_VAR,i,j,k)-uzgrid(k))+sqrt(cfz2))*delzinv)
>
> Hello Mark,
>
> Agreed on the bug. It also affects the version of Hydro_computeDt in
> source/physics/Hydro/HydroMain/unsplit/Hydro_computeDt.F90.
> This will be fixed in the next release.
>
> As long as the arrays uxgrid/uygrid/uzgrid are zero-filled, which is
> always the case in the usual calls to Hydro_computeDt by other FLASH code,
> this bug had the potential to cause problems only when the following
> conditions were satisfied:
> (1) 3D setup with unsplit Hydro or MHD;
> (2) non-cubic (logical) block sizes - specifically, NZB > NYB.
>
> Thank you for reporting the bug.
>
> > > However, I was getting the error:
> > > ***********************************************************
> > > Warning: The initial timestep is too large.
> > > initial timestep = 1.000000000000000E-010
> > > CFL timestep = 5.215016855876254E-303
> > > Resetting dtinit to dr_tstepSlowStartFactor*dtcfl.
> > > ***********************************************************
> > > both with and without the fix above. With the fix, I dug deeper into why
> > > the dt was going to zero, and found that it related to guard cells. I then
> > > also initialized my guard cells in Simulation_initBlock and the small dt
> > > problem went away.
>
> This is a bit mysterious. Hydro_computeDt should not depend on any values
> in the guard cells of the current block at all, unless BOTH of the
> following are true: (A) A local CFL factor is being used (i.e., CFL_VAR
> is defined); and (B) you have changed the Fortran parameter hy_cflStencil
> in Hydro_data from 0 to something > 0. Have you made any modifications?
>
> Where and how did you fill guard cells at initialization?
>
> The following assumes that you are using PARAMESH as your Grid
> implementation (not UG, not PARAMESH2).
>
> In general, initializing guard cells in your Simulation_initBlock should
> not be necessary (but cannot hurt either). The logic for calling
> Simulation_initBlock and then calling Eos and potentially filling guard
> cells (indirectly, via Grid_markRefineDerefine) is in gr_expandDomain.F90,
> q.v. The first calls to Hydro_computeDt come from Driver_verifyInitDt.
> It is indeed possible that at the point when Driver_verifyInitDt calls
> Hydro_computeDt, guard cells have NOT been properly filled or updated,
> although often they will have been updated as an effect of what happened
> in gr_expandDomain. But again, Hydro_computeDt should not normally need
> them (except when (A)&&(B) apply, see above); that's why the
> call Grid_fillGuardCells
> in Driver_verifyInitDt.F90 is commented out.
>
> > > Oddly, a collaborator took my SimulationMain/Code/ and ran it, and got the
> > > small dt error again, even with the guardCells initialized. I forgot to
> > > give him the above fix to Hydro_computeDt. He made that fix, and now his
> > > small dt problem is gone as well.
> > >
> > > I'm using the hypre unsplit solver, with radiation and mgd on. Any
> > > thoughts on why an initial FillGuardCells call isn't fixing the problem?
>
>
--
Mark Richardson
Beecroft Postdoctoral Fellow, Department of Physics
Denys Wilkinson Building - 555F
University of Oxford
Mark.Richardson at physics.ox.ac.uk
+44 1865 280793