Took a little while to figure out what Guido had done & why -- but there is a neat saving
of force evaluations across the nesting time boundary, making use of the linearity of leapP
in dt (a generic sketch follows below).
Cleaned up the printing and reduced the volume of code, in the process sharing the printing
code between all integrators. Placed an assert that the total integration time for all
integrators must match at the end of the trajectory.
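Not Grid code, but a minimal self-contained sketch of the two points above, with all names
illustrative: leapP is linear in dt, so the trailing half momentum kick of one step (or
inner-integrator call) and the leading half kick of the next merge into a single kick with
one force evaluation, which is the saving across the nesting time boundary; the total
integration time is asserted against the trajectory length.

    // Illustrative sketch only (harmonic oscillator, H = p^2/2 + q^2/2),
    // not the Grid integrator classes.
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    struct State { double q, p; };

    double force(double q) { return -q; }                        // -dS/dq for S(q) = q^2/2
    void leapP(State &s, double dt) { s.p += force(s.q) * dt; }  // linear in dt
    void leapQ(State &s, double dt) { s.q += s.p * dt; }

    // Naive leapfrog does two half kicks per step (2*nsteps force evaluations).
    // Because leapP(dt/2) applied twice equals leapP(dt), adjacent half kicks at
    // a step (or nesting-level) boundary fuse into one, leaving nsteps+1 evaluations.
    void fused_leapfrog(State &s, double dt, int nsteps) {
      leapP(s, dt / 2);                                          // leading half kick
      for (int i = 0; i < nsteps; ++i) {
        leapQ(s, dt);
        leapP(s, (i == nsteps - 1) ? dt / 2 : dt);               // full kick except at the end
      }
    }

    int main() {
      const double PI = 3.141592653589793;
      const int nsteps = 1000;
      const double T = 2 * PI, dt = T / nsteps;
      State s{1.0, 0.0};
      fused_leapfrog(s, dt, nsteps);
      assert(std::fabs(nsteps * dt - T) < 1e-12);                // total time matches trajectory length
      std::printf("q = %f  p = %f (expect ~1, ~0 after one period)\n", s.q, s.p);
      return 0;
    }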
Have now verified <exp(-dH)> = 1 for nested integrators in Wilson/Wilson runs, with both
Omelyan and Leapfrog, so substantial confidence has been gained.
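For reference, the check used here is the Creutz identity: for exact, area-preserving HMC,
<exp(-dH)> averaged over trajectories equals 1 within statistical error. A trivial
illustration of the check (the dH values below are placeholders, not measured numbers):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
      // Per-trajectory dH values; placeholders for illustration only.
      std::vector<double> dH = {0.012, -0.008, 0.003, -0.011, 0.006};
      double sum = 0;
      for (double x : dH) sum += std::exp(-x);
      std::printf("<exp(-dH)> = %f  (expect ~1 within statistical error)\n",
                  sum / dH.size());
      return 0;
    }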
Example of multiple levels in the WilsonFermion hmc test.
Merge remote-tracking branch 'upstream/master'
Conflicts:
lib/qcd/hmc/HMC.h
lib/qcd/hmc/integrators/Integrator.h
lib/qcd/hmc/integrators/Integrator_algorithm.h
tests/Test_simd.cc
connect the "Real" default precisoin to a configure flag.
Have RealF, RealD and Real types, where Real is compile target dependent single/double,
RealF is single and RealD is double etc..
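A hypothetical sketch of how the precision typedefs might be wired to the configure flag;
GRID_DEFAULT_PRECISION_DOUBLE is an assumed macro name, not necessarily what the build
system actually defines:

    typedef float  RealF;                 // always single precision
    typedef double RealD;                 // always double precision
    #ifdef GRID_DEFAULT_PRECISION_DOUBLE  // assumed name for the configure-time flag
    typedef RealD  Real;                  // compile-target default: double
    #else
    typedef RealF  Real;                  // compile-target default: single
    #endif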
A cut at the Conjugate gradient (a generic sketch follows below). Also copied in Remez,
Zolotarev and Chebyshev from Mike Clark, Tony Kennedy and my BFM package respectively,
since we know we will need these. I wanted the structure of
algorithms/approx
algorithms/iterative
etc. to start taking shape.
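A minimal conjugate gradient of the sort that belongs under algorithms/iterative; this
sketch uses a matrix-free std::function operator and std::vector storage purely for
illustration, not the library's lattice types or interfaces:

    #include <cmath>
    #include <cstdio>
    #include <functional>
    #include <vector>

    using Vec = std::vector<double>;

    static double dot(const Vec &a, const Vec &b) {
      double s = 0;
      for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
      return s;
    }

    // Solve A x = b for a symmetric positive-definite operator A; returns iteration count.
    static int cg(const std::function<void(const Vec &, Vec &)> &A,
                  const Vec &b, Vec &x, double tol, int maxit) {
      Vec r = b, Ap(b.size());
      A(x, Ap);
      for (size_t i = 0; i < r.size(); ++i) r[i] -= Ap[i];       // r = b - A x
      Vec p = r;
      double rr = dot(r, r);
      for (int k = 0; k < maxit; ++k) {
        if (std::sqrt(rr) < tol) return k;
        A(p, Ap);
        double alpha = rr / dot(p, Ap);
        for (size_t i = 0; i < x.size(); ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r);
        for (size_t i = 0; i < p.size(); ++i) p[i] = r[i] + (rr_new / rr) * p[i];
        rr = rr_new;
      }
      return maxit;
    }

    int main() {
      // Small SPD test operator: 1-d discrete Laplacian, A = tridiag(-1, 2, -1).
      const int n = 8;
      auto A = [](const Vec &v, Vec &out) {
        for (int i = 0; i < (int)v.size(); ++i)
          out[i] = 2 * v[i] - (i > 0 ? v[i - 1] : 0) - (i + 1 < (int)v.size() ? v[i + 1] : 0);
      };
      Vec b(n, 1.0), x(n, 0.0);
      std::printf("CG converged in %d iterations\n", cg(A, b, x, 1e-10, 100));
      return 0;
    }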
icpc compiles happily on MacOSX, both with -xCOMMON-AVX512 and with native AVX.
gcc-5 does not compile happily: it goes into a recursive template instantiation due to a
matching error. This can be worked around by renaming the lattice peek/poke/transpose/trace
templates relative to the tensor ones. I think this is a gcc bug and have filed a report:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66153
Testing requirements now go beyond what a single Grid_main.cc can do.
This will need a more organised source tree and a substantial reorganisation of the build system.