The disambiguation between std::real and Grid::real shouldn't have been necessary and I don't
know why only the icpc v16.0 on babbage hits it.
May need a longer-term rename of Grid::real or some careful EnableIf work.
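A minimal sketch of what such EnableIf work might look like (the trait name and dummy type below are hypothetical, not existing Grid code): constrain the Grid-side overload so it never competes with std::real on plain arithmetic or complex types.

    #include <complex>
    #include <type_traits>

    namespace Grid {
      template<class T> struct is_grid_tensor : std::false_type {};   // hypothetical trait

      struct DummyTensor { double v; };                               // stand-in tensor type
      template<> struct is_grid_tensor<DummyTensor> : std::true_type {};

      // Only participates in overload resolution for Grid tensor types, so
      // "using namespace Grid" cannot clash with std::real on scalars.
      template<class T, typename std::enable_if<is_grid_tensor<T>::value, int>::type = 0>
      double real(const T &t) { return t.v; }
    }

    int main() {
      using namespace std;
      using namespace Grid;
      std::complex<double> z(1.0, 2.0);
      double a = real(z);                        // unambiguously std::real
      double b = real(Grid::DummyTensor{3.0});   // unambiguously Grid::real
      return (a + b > 0) ? 0 : 1;
    }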
The boundary conditions are now handled through the Gimpl class, changing the nature of the covariant Cshifts used in
plaquettes, rectangles and staples.
As a result the same code is used for the plaq and rect action independent of the BC type.
Should probably isolate the BC in a separate class that Gimpl takes as a template param.
Do the same with smearing policies.
This would then allow composition of BC with smearing etc....
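A rough sketch of that factoring (names and stand-in types are illustrative, not current Grid code): the BC becomes its own policy class that the gauge implementation takes as a template parameter, and smearing is another policy, so BC and smearing choices compose freely.

    #include <vector>

    typedef std::vector<double> GaugeField;   // stand-in for the lattice gauge type

    struct PeriodicBC {
      // covariant shift with plain periodic wrap-around
      static GaugeField CovShiftForward(const GaugeField &link, int mu, const GaugeField &field) {
        return field; /* placeholder */
      }
    };

    struct ConjugateBC {
      // covariant shift applying conjugation across the boundary (G-parity style)
      static GaugeField CovShiftForward(const GaugeField &link, int mu, const GaugeField &field) {
        return field; /* placeholder */
      }
    };

    template <class BC>
    struct GimplSketch {
      typedef BC Boundary;
      // plaquette / rectangle / staple code only ever calls Boundary::CovShiftForward,
      // so one source serves every BC type.
    };

    template <class Gimpl>
    struct StoutSmearingSketch { /* smearing policy composed on top of any Gimpl */ };

    // usage: GimplSketch<ConjugateBC> twisted;
    //        StoutSmearingSketch< GimplSketch<PeriodicBC> > smeared;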
Conclusion: C++11 distributions are not thread safe and we must use a distinct distribution as well as a distinct engine
per site. Makes sense when you think of Box-Muller. Also added a reset of the distribution on fill to ensure
reproducibility across checkpoints.
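A minimal sketch of the per-site scheme (illustrative container, not the actual Grid RNG class): one engine and one distribution per site, with the distribution reset() before each fill so the cached Box-Muller partner value cannot leak across checkpoints.

    #include <random>
    #include <vector>

    struct SiteRNG {
      std::vector<std::mt19937>                      engine;  // one engine per site
      std::vector<std::normal_distribution<double>>  gauss;   // one distribution per site

      explicit SiteRNG(std::size_t sites) : engine(sites), gauss(sites) {}

      void fill(std::vector<double> &field) {
        for (std::size_t s = 0; s < field.size(); ++s) {
          gauss[s].reset();                // drop any cached Gaussian -> reproducible
          field[s] = gauss[s](engine[s]);  // each thread may own a disjoint range of sites
        }
      }
    };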
Can save and restore RNG state via new (serial) I/O routines in a NERSC-header-style file.
Store a parallel RNG file (one RNG per site) and a single serial RNG file.
Thought I had already committed these.
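The serial save/restore itself is straightforward because C++11 engine state round-trips through the stream operators; a minimal illustration (the NERSC-header framing is not shown here):

    #include <random>
    #include <sstream>
    #include <cassert>

    int main() {
      std::mt19937 rng(12345);
      rng.discard(1000);                 // advance to some arbitrary state

      std::stringstream state;
      state << rng;                      // save (would go into the checkpoint file)

      std::mt19937 restored;
      state >> restored;                 // restore
      assert(rng() == restored());       // identical stream continues after restore
      return 0;
    }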
Believe I have got the Gparity fermion force working.
* tests/Test_gpdwf_force.cc -- correctly predicts dS for the two flavour pseudofermion
based on a small dt update of the U field (the check logic is sketched after this list).
* tests/Test_hmc_EODWFRatio_Gparity.cc -- ran 1 trajectory on 8^4 with dH=0.21.
Need to accumulate a full plaquette log to believe fully which will take some hours of run time.
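For reference, the kind of check such a force test performs, written out for a toy scalar action (this is not the code in Test_gpdwf_force.cc): compare the dS predicted from the analytic force against the dS measured after a small dt update.

    #include <cstdio>
    #include <vector>
    #include <random>

    int main() {
      const double dt = 1.0e-4;
      std::mt19937 rng(7);
      std::normal_distribution<double> gauss;

      std::vector<double> U(1024), mom(1024);
      for (auto &u : U)   u = gauss(rng);
      for (auto &p : mom) p = gauss(rng);

      auto S = [](const std::vector<double> &u) {
        double s = 0; for (double x : u) s += 0.5 * x * x; return s;
      };

      // analytic force for this toy action: dS/dU = U
      double dS_predicted = 0;
      for (std::size_t i = 0; i < U.size(); ++i) dS_predicted += U[i] * mom[i] * dt;

      double S0 = S(U);
      for (std::size_t i = 0; i < U.size(); ++i) U[i] += mom[i] * dt;  // small dt update
      double dS_measured = S(U) - S0;

      std::printf("predicted %.8e measured %.8e (agree to O(dt^2))\n",
                  dS_predicted, dS_measured);
      return 0;
    }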
Preparing to pass the integrator, smearing and BC's as policy classes to the HMC.
Propose to unify the "integrator" and the integrator algorithm in a base/derived
way to override step. Want to read through ForceGradient to ensure
that the abstraction covers the force gradient case.
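A minimal sketch of the proposed base/derived split (class and method names hypothetical): the base owns the momentum/field update primitives and each algorithm only overrides step().

    #include <cstdio>

    struct IntegratorBase {
      virtual ~IntegratorBase() {}
      virtual void step(double eps) = 0;  // one integration step at this nesting level
    protected:
      // stand-ins for the real momentum / gauge field updates
      void update_P(double ep) { std::printf("P += F * %g\n", ep); }
      void update_U(double ep) { std::printf("U  = exp(i P %g) U\n", ep); }
    };

    struct Leapfrog : IntegratorBase {
      void step(double eps) override {
        update_P(eps / 2); update_U(eps); update_P(eps / 2);
      }
    };

    struct Omelyan : IntegratorBase {
      void step(double eps) override {
        const double lambda = 0.1931833275037836;       // minimal-norm constant
        update_P(lambda * eps);           update_U(eps / 2);
        update_P((1 - 2 * lambda) * eps); update_U(eps / 2);
        update_P(lambda * eps);
      }
    };
    // ForceGradient would override step() the same way, provided the base
    // abstraction also exposes the extra force evaluation it needs.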
It took a little while to figure out what Guido had done & why -- but there is a neat saving of force
evaluations across the nesting time boundary, making use of the linearity of leapP in dt.
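Schematically (not the real integrator code), the saving is just this:

    #include <cstdio>

    int main() {
      double P = 0.3, F = 1.7, dt = 0.01;

      // naive nesting: the trailing half-step of outer step n and the leading
      // half-step of step n+1 sit at the same timestamp with the same force
      double P_naive = P + F * (dt / 2) + F * (dt / 2);   // two force applications

      // linearity of leapP in dt lets them fuse into a single force evaluation
      double P_fused = P + F * dt;

      std::printf("naive %.17g  fused %.17g\n", P_naive, P_fused);
      return 0;
    }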
I cleaned up the printing and reduced the volume of code, in the process sharing the printing
between all integrators. Placed an assert that the total integration time for all integrators
must match at the end of the trajectory.
Have now verified <e-dH> = 1 for nested integrators in Wilson/Wilson runs with both
Omelyan and Leapfrog, so substantial confidence gained.
The number of I/O MPI tasks can be varied by selecting which
dimensions use parallel I/O and which dimensions use a serial send to the boss
I/O rank.
Thus we can neck down from, say, 1024 nodes = 4x4x8x8 to {1,8,32,64,128,256,1024} nodes
doing the I/O.
This interpolates nicely between ALL nodes writing their own data, a single boss per time-plane
in processor space [the old UKQCD Fortran code did this], and a single node doing all the I/O.
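A toy illustration (not the Grid I/O code) of where those node counts come from: the number of writers is simply the product of the processor-grid extents in the dimensions flagged as doing parallel I/O.

    #include <cstdio>

    int main() {
      const int procs[4]    = {4, 4, 8, 8};                  // 1024 MPI tasks, as above
      const bool parallel[4] = {false, false, true, true};   // last two dims do parallel I/O

      int writers = 1;
      for (int d = 0; d < 4; ++d)
        if (parallel[d]) writers *= procs[d];

      std::printf("%d ranks perform I/O; the rest send to their boss rank\n", writers); // 64
      return 0;
    }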
Not sure I have the transfer sizes big enough, and I am not overly convinced fstream
is guaranteed not to give buffer inconsistencies unless I set the streambuf size to zero.
Practically it has worked on 8 tasks, 2x1x2x2, writing/cloning NERSC configurations
on my MacOS + OpenMPI and Clang environment.
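For the record, the usual idiom for setting the streambuf size to zero is pubsetbuf(0,0) on the underlying filebuf before open(); whether that is what ends up in the code is left open above, but the mechanics look like this (illustrative sketch):

    #include <fstream>

    int main() {
      std::ofstream out;
      out.rdbuf()->pubsetbuf(nullptr, 0);   // request unbuffered I/O (must precede open)
      out.open("config.bin", std::ios::binary);
      const char payload[] = "NERSC-style payload would go here";
      out.write(payload, sizeof(payload));
      return out.good() ? 0 : 1;
    }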
It is VERY easy to switch to pwrite at a later date, and also easy to send x-strips around from
each node in order to gather bigger chunks at the syscall level.
That would push us up to circa 8 x 18*4*8 == 4KB write chunks, and by taking, say, x/y
non-parallel we get to 16MB contiguous chunks written in multiple 4KB transactions
per I/O node on 64^3 lattices for configuration I/O.
I suspect this is fine for system performance.
The Readers and Writers are now organised in a base/derived relationship. Have Text/Binary/Xml versions of
Reader & Writer.
Any new Reader/Writer class inheriting the interface can now provide object serialisation
to any desired format.
new file: lib/serialisation/BaseIO.h
modified: lib/serialisation/BinaryIO.h
modified: lib/serialisation/Serialisation.h
modified: lib/serialisation/TextIO.h
modified: lib/serialisation/XmlIO.h
The test uses the Xml, Binary and Text formats as well as cout << Object.
Test_serialisation has an example of *code-free* object serialisation
to both ostream and to XML using macro magic.
Implementing TextReader/TextWriter, YAML, JSON etc. should be trivial,
and we can use configure-time options to select the default "Reader" typedef.
Presently this is done with
"using XMLPolicy::Reader"
to pick up the default serialisation strategy.
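A toy version of the macro idea, just to show its shape (the real macros live in lib/serialisation/ and differ in detail): one macro both declares the members and generates the visitor that pushes them through any Writer exposing write(name, value), so adding a format needs no per-class code.

    #include <iostream>
    #include <string>

    struct CoutWriter {                    // stand-in for XmlWriter/TextWriter/BinaryWriter
      template <class T>
      void write(const std::string &name, const T &value) {
        std::cout << name << " = " << value << "\n";
      }
    };

    #define TOY_SERIALISABLE_2(T1, n1, T2, n2)                   \
      T1 n1;                                                      \
      T2 n2;                                                      \
      template <class Writer> void writeAll(Writer &w) const {    \
        w.write(#n1, n1);                                         \
        w.write(#n2, n2);                                         \
      }

    struct Params {
      TOY_SERIALISABLE_2(int, Ls, double, mass)
    };

    int main() {
      Params p; p.Ls = 16; p.mass = 0.01;
      CoutWriter w;
      p.writeAll(w);   // works unchanged for any Writer implementing write(name, value)
      return 0;
    }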
Azusa is working hard on the rectangle term and we'll hopefully start reproducing plaquettes
from RBC-UKQCD parameters soon !
My new laptop is pretty warm and is starting to groan ;)
So valence sector looks ok.
FermionOperatorImpl.h provides the policy classes.
Expect HMC will introduce a smearing policy and a fermion representation change policy template
param. Will also probably need multi-precision work.
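The pattern, very roughly (illustrative names and stand-in types, not the actual classes in FermionOperatorImpl.h): the operator is templated on an Impl policy that fixes the field types, and the anticipated smearing / representation-change policies would slot in as further template parameters of the same kind.

    #include <vector>
    #include <complex>

    struct WilsonImplD {                              // one concrete policy
      typedef std::complex<double>  Scalar;           // double precision
      typedef std::vector<Scalar>   FermionField;     // stand-in for the lattice type
      typedef std::vector<Scalar>   GaugeField;
    };

    template <class Impl>
    class WilsonFermionSketch {
    public:
      typedef typename Impl::FermionField FermionField;
      typedef typename Impl::GaugeField   GaugeField;
      // identical Dhop source for every Impl; only the policy types change
      void Dhop(const FermionField &in, FermionField &out) { out = in; /* placeholder */ }
    };

    // usage: WilsonFermionSketch<WilsonImplD> Dw;  -- swap the policy to change
    // precision, boundary conditions, flavour structure, etc.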
* HMC is running even-odd and non-checkerboarded (checked 4^4 wilson fermion/wilson gauge).
There appears to be a bug in the multi-level integrator -- <e-dH> passes with single level but
not with multi-level.
In any case there looks to be quite a bit to clean up.
This is the "const det" style implementation that is not appropriate yet for clover since
it assumes that Mee is indept of the gauge fields. Easily fixed in future.
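For context (this is just standard even-odd preconditioning, stated here only because it explains the "const det" remark): the determinant factorises as

    \det M \;=\; \det M_{ee}\,\det\!\left( M_{oo} - M_{oe}\, M_{ee}^{-1}\, M_{eo} \right)

so when Mee is independent of the gauge field (Wilson, DWF) the det(Mee) factor is a constant and contributes nothing to the force, whereas for clover Mee carries gauge-field dependence and its determinant must be kept and differentiated -- the "easily fixed in future" part.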
Allows multi-precision work and paves the way for alternate BC's and suchlike,
allowing for example G-parity, which is important for the K pipi programme.
In particular, an extra flavour index can be driven into the fermion fields
using the template types.
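A sketch of what "driving an extra flavour index in through the template types" can look like (stand-in types, not Grid's actual tensor nest): a G-parity style policy declares a site type with an outer two-component flavour index, and everything templated on the policy picks it up automatically.

    #include <array>
    #include <complex>
    #include <vector>

    template <int Nflavour>
    struct FlavouredImpl {
      typedef std::complex<double>                    Scalar;
      typedef std::array<Scalar, 12>                  SpinColourVector;   // 4 spin x 3 colour
      typedef std::array<SpinColourVector, Nflavour>  SiteFermion;        // extra flavour index
      typedef std::vector<SiteFermion>                FermionField;       // one SiteFermion per site
    };

    typedef FlavouredImpl<1> PeriodicImplSketch;   // ordinary fermions
    typedef FlavouredImpl<2> GparityImplSketch;    // two-flavour G-parity fermions

    int main() {
      GparityImplSketch::FermionField psi(16);            // 16 sites with the doubled flavour index
      psi[0][1][5] = std::complex<double>(1.0, 0.0);       // site 0, flavour 1, spin-colour component 5
      return 0;
    }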
Example of multiple levels in the WilsonFermion hmc test.
Merge remote-tracking branch 'upstream/master'
Conflicts:
lib/qcd/hmc/HMC.h
lib/qcd/hmc/integrators/Integrator.h
lib/qcd/hmc/integrators/Integrator_algorithm.h
tests/Test_simd.cc