[FLASH-USERS] Fwd: Conduction Issues/Inconsistencies

James Guillochon jfg at ucolick.org
Tue Nov 5 00:26:06 EST 2013


Hi Klaus, thanks for your reply, and thanks for clarifying how the Conductivity interface operates; I had not noticed it was overloaded.

The issue in the fullState routines themselves is references to 3T variable names that are not properly substituted with single-temperature analogs. An example is "EOS_CVELE" in Conductivity_fullState within the SpitzerHighZ module; this variable is not defined when not using 3T. The other Conductivity modules appear to have similar naming issues.
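For what it's worth, a guard along the following lines would let the routine compile in both modes. This is only a sketch of the pattern, not a patch against the actual source; the FLASH_3T preprocessor symbol and the 1T fallback index are assumptions on my part:

========================================================================
#ifdef FLASH_3T
      cv = eosData(EOS_CVELE)   ! electron specific heat, 3T builds only
#else
      cv = eosData(EOS_CV)      ! single-temperature specific heat
#endif
========================================================================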

As for the PFFT issues, I'll try running on smaller processor counts; so far the crash has occurred ~30 minutes into a high-res run. I'll let you know if I can isolate a simple test case.

--
James Guillochon
Einstein Fellow at Harvard CfA
jguillochon at cfa.harvard.edu

On November 4, 2013 at 2:44:01 PM, Klaus Weide (klaus at flash.uchicago.edu) wrote:


> Hi all, I've been attempting to use the Conductivity + Diffuse modules in  
> FLASH 4.0.1, and I've encountered a number of issues/inconsistencies I was  
> hoping to have resolved:  

> - It seems that most of the Conductivity modules include a  
> "Conductivity_fullState" routine, which does not compile without errors  
> unless the 3T EOS is used, but as far as I can tell  
> "Conductivity_fullState" is not called by anything in the FLASH source. The  
> way the rest of the Conductivity modules are written seems to indicate that  
> they should work with a single temp EOS, but the inability to compile the  
> fullState function makes me a bit wary. Are the Conductivity modules written  
> to support a single temp EOS?  

James,  

Yes, the Conductivity unit should be compilable with a 1T EOS, but I  
am not sure whether we are testing this. Please let us know how you  
set up a test and what compilation errors you are getting.  
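For example, a setup along these lines (the problem name and unit path here are just a guess on my part; adjust them to what you actually used):

========================================================================
./setup Sedov -auto -2d \
  --with-unit=physics/materialProperties/Conductivity/ConductivityMain/SpitzerHighZ
========================================================================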

Note that even though Conductivity_fullState may not be invoked by  
that name, it may still be invoked through the generic interface named  
just "Conductivity". That is the effect of the following lines in Conductivity_interface.F90:  

========================================================================  
interface Conductivity  

subroutine Conductivity(xtemp,xden,massfrac,isochoricCond,diff_coeff,component)  

real, intent(IN) :: xtemp  
real, intent(IN) :: xden  
real, intent(OUT) :: diff_coeff  
real, intent(OUT) :: isochoricCond  
real, intent(IN) :: massfrac(NSPECIES)  
integer, intent(IN) :: component  

end subroutine Conductivity  

subroutine Conductivity_fullState(solnVec,isochoricCond,diffCoeff,component)  

real, intent(IN) :: solnVec(NUNK_VARS)  
real, OPTIONAL, intent(OUT) :: diffCoeff  
real, OPTIONAL, intent(OUT) :: isochoricCond  
integer, OPTIONAL, intent(IN) :: component  

end subroutine Conductivity_fullState  
end interface  
========================================================================  

When the compiler encounters a 'call Conductivity(...)', it decides  
based on the actual arguments whether to call the implementation named  
'Conductivity' or the implementation named 'Conductivity_fullState'.  
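
Here is a small self-contained toy (not FLASH code) that shows the same mechanism; the compiler picks the specific procedure whose dummy arguments match the actual argument list:

========================================================================
module demo_mod
  implicit none
  ! Generic name 'cond' with two specifics, analogous to the
  ! Conductivity / Conductivity_fullState pair above.
  interface cond
     module procedure cond_pointwise
     module procedure cond_fullstate
  end interface cond
contains
  subroutine cond_pointwise(xtemp, xden)
    real, intent(IN) :: xtemp, xden
    print *, 'point-wise version called:', xtemp, xden
  end subroutine cond_pointwise

  subroutine cond_fullstate(solnVec)
    real, intent(IN) :: solnVec(:)
    print *, 'full-state version called, vector size', size(solnVec)
  end subroutine cond_fullstate
end module demo_mod

program demo
  use demo_mod
  implicit none
  real :: solnVec(4)
  solnVec = 0.0
  call cond(1.0, 2.0)    ! scalar arguments -> cond_pointwise
  call cond(solnVec)     ! array argument   -> cond_fullstate
end program demo
========================================================================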


> - I am getting errors/warnings in the Pfft solver when using the implicit  
> mode from DiffuseMain/Split on a number of processors not equal to a power  
> of 2 (I think, 64 processors worked fine, 96 processors produced errors for  
> otherwise identical simulations). The four types of messages I see in the  
> output are (note that NONE of these are seen when running on 64 processors):  

> [gr_pfftInitMetadata]: WARNING... making work arrays larger artificially!!!  

> (INFO) Processor: 11 has no pencil grid points.  

The above two messages are warnings or informational. They don't mean  
that anything is wrong. They may indicate, however, that the grid is  
configured in some "unusual" way, and/or that performance may be  
decreased. The PFFT solver attempts to factor the number of processors  
that participate in the PFFT solve in some automatic way, and that may  
not always work out well. (The worst case should be when that number is  
prime.) The code still SHOULD handle this correctly.  
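
To illustrate what "factoring" means here (a toy only, not the actual PFFT algorithm): the processor count must be split into a 2-D pencil grid, and the split is only balanced when the count has a divisor near its square root:

========================================================================
program pencil_factor_demo
  implicit none
  integer :: nprocs, j, bestJ

  nprocs = 96          ! try 64, 96, or a prime such as 97
  bestJ  = 1
  do j = 1, int(sqrt(real(nprocs)))
     if (mod(nprocs, j) == 0) bestJ = j   ! largest divisor <= sqrt
  end do
  ! 64 -> 8 x 8 (balanced); 96 -> 8 x 12; 97 (prime) -> 1 x 97,
  ! a degenerate pencil grid that can leave processors with no points.
  print *, 'pencil grid: ', bestJ, ' x ', nprocs/bestJ
end program pencil_factor_demo
========================================================================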

> perfmon: ran out of space for timer, "guardcell internal", cannot time this  
> timer with perfmon  

> [Timers_start] Ran out of space on timer call stack. Probably means calling  
> start without a corresponding stop.  

I don't think I have seen these two in connection with the PFFT solver.  
Something is wrong with the Timers_start / Timers_stop calls; we would  
need a test case to investigate.  
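
For reference, this is the pairing the Timers unit expects (a sketch; the label is taken from your output above). An unmatched start, e.g. on an early return, leaves an entry on the timer call stack and eventually produces messages like those:

========================================================================
use Timers_interface, ONLY: Timers_start, Timers_stop

call Timers_start("guardcell internal")
! ... the work being timed; every code path out of this section
! must also pass through the matching stop ...
call Timers_stop("guardcell internal")
========================================================================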

> - The timestep limiter seems to still apply when using an implicit solver,  
> although as far as I understand there's no need to limit the timestep when  
> solving implicitly. Is this just an oversight? Should I just set  
> dt_diff_factor to a large number when using the implicit solver?  

Yes, dt_diff_factor is often set to something ridiculously high like  
1.e100. You may still sometimes find the dt_Diff information in the  
standard output useful.  
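
In flash.par that would look like this:

========================================================================
# Effectively remove the explicit diffusion time-step limit when the
# implicit solver is used; dt_Diff is still computed and reported.
dt_diff_factor = 1.e100
========================================================================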

Klaus  