<div dir="ltr"><div class="gmail_extra">Hi Klaus, thank you for your help.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><div>OK, so this makes sense then why it was affecting us. We are doing a rectangular grid, with NZB > NYB = NXB. </div><div><br></div><div><br></div><div>Regarding the small CFL time, I was not using a local cfl factor, nor did I change the hy_cflStencil. Strange indeed!</div><div><br></div><div>When I was filling the guard cells, this was accomplished in Simulation_initBlock, using the exact same method as filling the rest of the cells. Thus, using <i>gcell = .true., </i>the coordinates of all cells was found with:</div><div><i><br></i></div>
<p class="">! get the integer index information for the current block<br> call Grid_getBlkIndexLimits(blockId,blkLimits,blkLimitsGC)<br><br> ! Determine size of interior of block<br> sizeX = blkLimitsGC(HIGH,IAXIS) - blkLimitsGC(LOW,IAXIS) + 1<br> sizeY = blkLimitsGC(HIGH,JAXIS) - blkLimitsGC(LOW,JAXIS) + 1<br> sizeZ = blkLimitsGC(HIGH,KAXIS) - blkLimitsGC(LOW,KAXIS) + 1<br><br> ! allocate memory for iCen variables<br> allocate(xCen(sizeX),stat=istat)<br> allocate(yCen(sizeY),stat=istat)<br> allocate(zCen(sizeZ),stat=istat)<br><br> ! Initialize<br> xCen = 0.0d0<br> yCen = 0.0d0<br> zCen = 0.0d0<br><br> ! Get interior cell coordinates<br> if (NDIM == 3) &<br> call Grid_getCellCoords(KAXIS, blockId, CENTER ,gcell, zCen, sizeZ)<br> if (NDIM >= 2) &<br> call Grid_getCellCoords(JAXIS, blockId, CENTER,gcell, yCen, sizeY)<br> call Grid_getCellCoords(IAXIS, blockId, CENTER,gcell, xCen, sizeX)</p><p class=""> ! Get lower and upper coordinates<br> blkLow(1) = xCen(1)<br> blkHigh(1) = xCen(sizeX)<br><br> blkLow(2) = yCen(1)<br> blkHigh(2) = yCen(sizeY)<br><br> blkLow(3) = zCen(1)<br> blkHigh(3) = zCen(sizeZ)<i> </i><br></p></div><div class="gmail_extra"><br></div><div class="gmail_extra">...</div><div class="gmail_extra"><br></div><div class="gmail_extra">Then later, I fill each cell in the loop led with: </div><div class="gmail_extra"><br></div><div class="gmail_extra">
  do k = blkLimitsGC(LOW,KAXIS), blkLimitsGC(HIGH,KAXIS)
     do j = blkLimitsGC(LOW,JAXIS), blkLimitsGC(HIGH,JAXIS)
        do i = blkLimitsGC(LOW,IAXIS), blkLimitsGC(HIGH,IAXIS)

Originally gcell was .false., the size{X,Y,Z} variables were set from the non-GC (interior) limits, and the k, j, i loops ran over the non-GC ranges. By switching to the GC versions, the small CFL went away.
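For reference, the original interior-only version looked roughly like this (I'm reconstructing it from memory rather than pasting the old code, so treat it as a sketch):

  ! Sketch of the original version: sizes and loops over the interior only
  gcell = .false.

  sizeX = blkLimits(HIGH,IAXIS) - blkLimits(LOW,IAXIS) + 1
  sizeY = blkLimits(HIGH,JAXIS) - blkLimits(LOW,JAXIS) + 1
  sizeZ = blkLimits(HIGH,KAXIS) - blkLimits(LOW,KAXIS) + 1

  ! with gcell = .false. this returns interior cell coordinates only
  call Grid_getCellCoords(IAXIS, blockId, CENTER, gcell, xCen, sizeX)

  ! and the fill loops ran over the interior index ranges
  do k = blkLimits(LOW,KAXIS), blkLimits(HIGH,KAXIS)
     do j = blkLimits(LOW,JAXIS), blkLimits(HIGH,JAXIS)
        do i = blkLimits(LOW,IAXIS), blkLimits(HIGH,IAXIS)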
I'm having one other problem, which may be related. Originally I was running this with a self-made EOS (Tillotson for planetary bodies, which includes radiative transfer). I was getting the following errors immediately after initialisation:

 [ 02-15-2016 09:57:48.810 ] [gr_hypreInit]: HYPRE_RELEASE_NAME=hypre, HYPRE_RELEASE_VERSION=2.10.0b, HYPRE_RELEASE_DATE=2015/01/22
 [ 02-15-2016 09:57:48.812 ] memory: /proc vsize (MB): 2006.58 (min) 2008.35 (max) 2007.17 (avg)
 [ 02-15-2016 09:57:48.813 ] memory: /proc rss (MB): 1284.28 (min) 1286.47 (max) 1285.21 (avg)
 [ 02-15-2016 09:57:48.815 ] memory: /proc vsize (MB): 2006.58 (min) 2008.35 (max) 2007.17 (avg)
 [ 02-15-2016 09:57:48.816 ] memory: /proc rss (MB): 1284.37 (min) 1286.56 (max) 1285.29 (avg)

 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: component= 0
 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: converged= F
 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
 [ 02-15-2016 09:57:58.820 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 88143026.2676523

 [ 02-15-2016 09:57:58.841 ] memory: /proc vsize (MB): 2022.99 (min) 2028.98 (max) 2025.35 (avg)
 [ 02-15-2016 09:57:58.843 ] memory: /proc rss (MB): 1307.27 (min) 1318.30 (max) 1313.13 (avg)

 [ 02-15-2016 09:57:59.569 ] [IO_writeCheckpoint] open: type=checkpoint name=Run_8000_Lmax6_RT_MG_hdf5_chk_0000
 [ 02-15-2016 09:58:00.982 ] [IO_writeCheckpoint] close: type=checkpoint name=Run_8000_Lmax6_RT_MG_hdf5_chk_0000
 [ 02-15-2016 09:58:00.987 ] [IO_writePlotfile] open: type=plotfile name=Run_8000_Lmax6_RT_MG_hdf5_plt_cnt_0000
 [ 02-15-2016 09:58:01.240 ] [IO_writePlotfile] close: type=plotfile name=Run_8000_Lmax6_RT_MG_hdf5_plt_cnt_0000

 [ 02-15-2016 09:58:01.315 ] memory: /proc vsize (MB): 2022.99 (min) 2028.98 (max) 2025.48 (avg)
 [ 02-15-2016 09:58:01.320 ] memory: /proc rss (MB): 1307.77 (min) 1321.35 (max) 1313.81 (avg)

 [ 02-15-2016 09:58:01.322 ] [Driver_evolveFlash]: Entering evolution loop

 [ 02-15-2016 09:58:01.323 ] step: n=1 t=0.000000E+00 dt=1.000000E-10
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: component= 0
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: converged= F
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
 [ 02-15-2016 09:58:12.144 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 7426997.22777419
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: converged= F
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
 [ 02-15-2016 09:58:22.583 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 7426997.22777419
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: converged= F
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
 [ 02-15-2016 09:58:33.292 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 485.264327133202
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: ierr= 0
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: group no= 1
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: converged= F
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: |initial RHS|^2= 1.2520E+35
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: num_iterations= 500
 [ 02-15-2016 09:58:43.698 ] [gr_hypreSolve]: Nonconv./failure: final_res_norm= 485.264327133202

 [ 02-15-2016 09:58:43.752 ] step: n=2 t=2.000000E-10 dt=1.000000E-10

So HYPRE isn't converging. I'm not sure why this would be, as I eventually tested it with a uniform density and temperature (with temp_ele = temp_rad) for the MatRad3 case with no ions. I'm also surprised that FLASH doesn't abort when non-convergence occurs in the HYPRE grid solver.

I've now tested it with the MatRad3 Multigamma EOS and get the same error, so I'm unsure how this could be related to my EOS. Has anyone had similar issues with the HYPRE grid solver?
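In case it matters, the HYPRE solver parameters themselves are, as far as I know, still at their defaults; I'm listing them here from the FLASH4 users guide rather than copying them out of my flash.par, so please treat the names and values below as indicative rather than exact:

# HYPRE solver settings (believed defaults -- taken from the users guide,
# not from my actual flash.par, so please double-check the names)
gr_hypreSolverType     = "HYPRE_PCG"   # Krylov solver used by gr_hypreSolve
gr_hyprePCType         = "HYPRE_AMG"   # preconditioner
gr_hypreMaxIter        = 500           # matches the num_iterations= 500 in the log above
gr_hypreRelTol         = 1.0e-8        # relative residual tolerance
gr_hyprePrintSolveInfo = .FALSE.       # set to .TRUE. for more convergence detail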
These may be the most relevant parts of my flash.par file:

# Flux control
hy_fPresInMomFlux = 1.0   # Percentage of pressure gradient for the momentum equation that is handled in momentum fluxes
ppmEintFluxConstructionMeth = 0
ppmEintCompFluxConstructionMeth = 0   # (or 1) Advect like any old mass scalar
ppmEnerFluxConstructionMeth = 0

# EOS PARAMETERS
eosMode = "dens_ie_gather"
eosModeInit = "dens_ie_gather"
hy_eosModeGc = "dens_ie_gather"
hy_eosModeAfter = "dens_ie_gather"

# Boundary conditions
# FLUIDS
xl_boundary_type = "diode"
xr_boundary_type = "diode"
yl_boundary_type = "diode"
yr_boundary_type = "diode"
zl_boundary_type = "diode"
zr_boundary_type = "diode"

# NOTE: FLASH4 doesn't have the best set of radiation boundary
# conditions available. They are 'reflecting/neumann', 'dirichlet', and 'vacuum':
# 'neumann' or 'reflecting' : zero flux at boundary
# 'dirichlet'               : fixed value at the boundary (set below as radiation temperature)
# 'vacuum'                  : flux across boundary is twice the boundary energy value
rt_mgdXlBoundaryType = "reflecting"
rt_mgdXrBoundaryType = "reflecting"
rt_mgdYlBoundaryType = "reflecting"
rt_mgdYrBoundaryType = "reflecting"
rt_mgdZlBoundaryType = "reflecting"
rt_mgdZrBoundaryType = "reflecting"

Thank you again for your help,
 -Mark

On 9 February 2016 at 17:43, Klaus Weide <klaus@flash.uchicago.edu> wrote:

> On Mon, 8 Feb 2016, Mark Richardson wrote:
>
> > I'm pretty sure there's a bug in the unsplit_rad Hydro_computeDt:
> > source/physics/Hydro/HydroMain/unsplit_rad/Hydro_computeDt.F90
> >
> > Line 241 reads:
> > 241: if (NDIM > 2) dt_ltemp =
> > max(dt_ltemp,(abs(U(VELZ_VAR,i,j,k)-uzgrid(j))+sqrt(cfz2))*delzinv)
> >
> > But should be:
> > 241: if (NDIM > 2) dt_ltemp =
> > max(dt_ltemp,(abs(U(VELZ_VAR,i,j,k)-uzgrid(k))+sqrt(cfz2))*delzinv)
>
> Hello Mark,
>
> Agreed on the bug. It also affects the version of Hydro_computeDt in
> source/physics/Hydro/HydroMain/unsplit/Hydro_computeDt.F90.
> This will be fixed in the next release.
>
> As long as the arrays uxgrid/uygrid/uzgrid are zero-filled, which is
> always the case in the usual calls to Hydro_computeDt by other FLASH code,
> this bug had the potential to cause problems only when the following
> conditions were satisfied:
> (1) 3D setup with unsplit Hydro or MHD;
> (2) non-cubic (logical) block sizes - specifically, NZB > NYB.
> Thank you for reporting the bug.
>
> > However, I was getting the error:
> > ***********************************************************
> > Warning: The initial timestep is too large.
> > initial timestep = 1.000000000000000E-010
> > CFL timestep = 5.215016855876254E-303
> > Resetting dtinit to dr_tstepSlowStartFactor*dtcfl.
> > ***********************************************************
> > both with and without the fix above. With the fix, I dug deeper into why
> > the dt was going to zero, and found that it related to guard cells. I then
> > also initialized my guard cells in Simulation_initBlock and the small dt
> > problem went away.
>
> This is a bit mysterious. Hydro_computeDt should not depend on any values
> in the guard cells of the current block at all, unless BOTH of the
> following are true: (A) A local CFL factor is being used (i.e., CFL_VAR
> is defined); and (B) you have changed the Fortran parameter hy_cflStencil
> in Hydro_data from 0 to something > 0. Have you made any modifications?
>
> Where and how did you fill guard cells at initialization?
>
> The following assumes that you are using PARAMESH as your Grid
> implementation (not UG, not PARAMESH2).
>
> In general, initializing guard cells in your Simulation_initBlock should
> not be necessary (but cannot hurt either). The logic for calling
> Simulation_initBlock and then calling Eos and potentially filling guard
> cells (indirectly, via Grid_markRefineDerefine) is in gr_expandDomain.F90,
> q.v. The first calls to Hydro_computeDt come from Driver_verifyInitDt.
> It is indeed possible that at the point when Driver_verifyInitDt calls
> Hydro_computeDt, guard cells have NOT been properly filled or updated,
> although often they will have been updated as an effect of what happened
> in gr_expandDomain. But again, Hydro_computeDt should not normally need
> them (except when (A)&&(B) apply, see above); that's why the
> call Grid_fillGuardCells
> in Driver_verifyInitDt.F90 is commented out.
>
> > Oddly, a collaborator took my SimulationMain/Code/ and ran it, and got the
> > small dt error again, even with the guardCells initialized. I forgot to
> > give him the above fix to Hydro_computeDt. He made that fix, and now his
> > small dt problem is gone as well.
> >
> > I'm using the hypre unsplit solver, with radiation and mgd on. Any
> > thoughts on why an initial FillGuardCells call isn't fixing the problem?
>

--
Mark Richardson
Beecroft Postdoctoral Fellow, Department of Physics
Denys Wilkinson Building - 555F
University of Oxford
Mark.Richardson@physics.ox.ac.uk
+44 1865 280793