[ET Trac] [Einstein Toolkit] #1995: McLachlan constraint tests fail

Einstein Toolkit trac-noreply at einsteintoolkit.org
Thu Jun 29 13:11:54 CDT 2017


#1995: McLachlan constraint tests fail
------------------------------------+---------------------------------------
  Reporter:  hinder                 |       Owner:                                                   
      Type:  defect                 |      Status:  new                                              
  Priority:  critical               |   Milestone:  ET_2017_05                                       
 Component:  EinsteinToolkit thorn  |     Version:  development version                              
Resolution:                         |    Keywords:  McLachlan constraints tests compiler optimization
------------------------------------+---------------------------------------

Comment (by knarf):

 Update from Peter Diener:

 I have made a little progress, but am still confused.

 First I made sure that I could reproduce the problem on my laptop with the
 Intel 17 compilers. Then I built a configuration with no optimisation
 (-O0) and verified that the test suite passes in that case.
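
 For reference, the optimisation level is set in the Cactus option list used
 to build the configuration; below is a minimal sketch, assuming the standard
 Cactus option-list variable names (the exact flags depend on the compiler):

   # Build without optimisation for debugging
   OPTIMISE           = no
   C_OPTIMISE_FLAGS   = -O0
   CXX_OPTIMISE_FLAGS = -O0
   F90_OPTIMISE_FLAGS = -O0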

 Then, since the default McLachlan generates explicitly vectorized code, I
 also made sure that I could reproduce the same behaviour with the
 non-vectorized version (i.e. ML_BSSN_NV). This turned out to be the case.
 I then inserted a printf statement at the end of the loop in
 ML_BSSN_NV_ConstraintsInterior.cc and printed out the calculated data for
 H at the coordinates that are output to ml_admconstraints-ml_ham.x.asc. I
 found that the same numbers are calculated by both the -O2 and -O0
 versions of the executable. However, and this is very interesting, those
 numbers do not match either of the output files produced by the -O2 and
 -O0 executables, as you can see from the attached plot, where I plot the
 data from the Cactus output files for -O0 (purple plus) and -O2 (green
 cross), the printf output from -O0 (blue asterisk) and -O2 (orange empty
 square), as well as the actual test suite data (yellow filled square). I
 made sure that I printed from exactly the same coordinates (x, y and z)
 that are available in the Cactus output files. So it looks to me like the
 data gets modified after being calculated and before being output, and
 somehow that modification differs between optimization levels.
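
 For illustration, here is a minimal, self-contained sketch of that kind of
 diagnostic print, written as a standalone function rather than as the
 actual Kranc-generated loop; all names in it (debug_print_H, npoints,
 y_target, z_target, tol) are hypothetical:

   #include <cstdio>
   #include <cmath>

   // Print the freshly computed Hamiltonian constraint H at the grid
   // points whose (y,z) coordinates match the fixed values of the 1D cut
   // written to ml_admconstraints-ml_ham.x.asc, so that the in-memory
   // values can be compared against the Cactus ASCII output afterwards.
   void debug_print_H(const double* x, const double* y, const double* z,
                      const double* H, int npoints,
                      double y_target, double z_target)
   {
     const double tol = 1.0e-12;  // coordinate-matching tolerance
     for (int i = 0; i < npoints; ++i) {
       if (std::fabs(y[i] - y_target) < tol &&
           std::fabs(z[i] - z_target) < tol) {
         std::printf("H(% .6f, % .6f, % .6f) = %.17g\n",
                     x[i], y[i], z[i], H[i]);
       }
     }
   }

 Calling such a function right after the constraint loop finishes would
 print the values of H as they sit in memory at that point, which is what
 the comparison above is based on.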

 I'm not sure where this happens.

-- 
Ticket URL: <https://trac.einsteintoolkit.org/ticket/1995#comment:22>
Einstein Toolkit <http://einsteintoolkit.org>
The Einstein Toolkit

