mirror of https://github.com/paboyle/Grid.git synced 2025-06-14 05:07:05 +01:00

Compare commits


454 Commits

Author SHA1 Message Date
e307bb7528 Reorganise to abstract kernels that know about the lattice layout.
Move these back into grid.
2018-09-04 12:30:00 +01:00
5b8b630919 Finished the four quark optimisation for Bag parameters.
To do:

   Abstract the cache blocking from the contraction with lambda functions (a sketch follows after this entry).
   Share code between PionFieldXX with and without momentum. Share with the Meson field code somehow.
   Assemble the WWVV in a standalone routine.
   Play similar lambda function trick for the four quark operator.
   Hack it first by doing the MesonField Routine in here too.
2018-08-28 14:11:03 +01:00
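
A minimal sketch of the lambda abstraction proposed in the to-do list above (all names hypothetical, not Grid's API): the cache-blocked loop structure is written once and the per-pair kernel (meson field, WWVV assembly, or four-quark contraction) is passed in as a lambda.

#include <algorithm>

// Hypothetical: generic cache-blocked double loop; the kernel lambda receives
// the (left, right) pair indices and performs the actual contraction.
template <class Kernel>
void blockedContraction(int Lblock, int Rblock, int bsize, Kernel &&kernel)
{
    for (int i0 = 0; i0 < Lblock; i0 += bsize) {
        for (int j0 = 0; j0 < Rblock; j0 += bsize) {
            for (int i = i0; i < std::min(i0 + bsize, Lblock); ++i) {
                for (int j = j0; j < std::min(j0 + bsize, Rblock); ++j) {
                    kernel(i, j); // e.g. accumulate v_i^dag Gamma w_j into mat[i][j]
                }
            }
        }
    }
}
// usage: blockedContraction(Nl, Nr, 16, [&](int i, int j) { /* contraction body */ });
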
81287133f3 New files 2018-08-28 11:32:41 +01:00
bd27940f78 Reorder the loop 2018-08-28 11:32:23 +01:00
d45647698d Extra test code 2018-08-28 11:31:19 +01:00
d6ac6e75cc Some query functions 2018-08-28 11:30:51 +01:00
ba34d7b206 Pion Field test module 2018-08-28 11:30:14 +01:00
80003787c9 Use MKL DGEMM 2018-08-28 11:14:27 +01:00
f523dddef0 Remove verbose 2018-08-28 11:14:07 +01:00
3791a38f7c Optimised the MesonField a bit more 2018-08-01 08:27:27 +01:00
142f7b0c86 Updated the A2A Meson Field module 2018-07-31 15:58:02 +01:00
60c43151c5 Merge branch 'feature/hadrons-a2a' of https://github.com/paboyle/Grid into feature/hadrons-a2a 2018-07-31 01:09:02 +01:00
e036800261 Eigen fix 2018-07-31 01:08:42 +01:00
62900def36 Merge branch 'feature/hadrons-a2a' of https://github.com/paboyle/Grid into feature/hadrons-a2a 2018-07-31 00:36:26 +01:00
e3a309a73f Eigen happiness 2018-07-31 00:35:17 +01:00
00b92a91b5 Optimising 2018-07-28 23:46:22 +01:00
65533741f7 7 moms 2018-07-28 16:17:47 +01:00
dc0259fbda Merge pull request #173 from fionnoh/feature/hadrons-a2a
Changes to meson field benchmark. Now includes the gammas in the fina…
2018-07-27 23:03:56 +01:00
131a6785d4 Merge branch 'feature/hadrons-a2a' into feature/hadrons-a2a 2018-07-27 23:03:42 +01:00
44f4f5c8e2 Momentum loop 2018-07-27 23:00:16 +01:00
2679df034f Changes to meson field benchmark. Now includes the gammas in the final part of the naive method; both methods compute
lhs^dag*Gamma*rhs (previously Gamma*lhs^dag*rhs), and results are checked.
2018-07-27 18:31:10 +01:00
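
In index form (with i, j labelling the left and right field blocks and t the time slice; this notation is assumed here, not taken from the commit), both methods now evaluate

M_{ij}(t) = \sum_{\vec{x}} \mathrm{lhs}_i^{\dagger}(\vec{x},t) \, \Gamma \, \mathrm{rhs}_j(\vec{x},t)

i.e. \Gamma sits between the conjugated left field and the right field, rather than multiplying the completed product from the left as before.
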
71e1006ba8 Updated meson field benchmark for dirac structures 2018-07-26 09:09:29 +01:00
cce339deaf Merge pull request #172 from fionnoh/feature/hadrons
feature/hadrons -> feature/hadrons-a2a
2018-07-25 17:20:19 +00:00
24128ff109 Changes needed for MF benchmark to work with comms correctly 2018-07-23 15:51:37 +01:00
34e9d3f0ca Moved the creation and resizing of the v and w high modes from the A2A class to the A2A module and made them an output of the module. This means that they have to be inputs of the contraction modules and they will be freed from memory if they are no longer needed. 2018-07-22 14:40:31 +01:00
c995788259 Added ImportUnphysicalFermion and included appropriate logic for 5d w vectors in A2A code 2018-07-21 00:08:11 +01:00
94c7198001 Added ZFIMPL to A2AMeson contraction 2018-07-20 23:08:22 +01:00
04d86fe9f3 Removed overly verbose print statement 2018-07-20 21:38:19 +01:00
b78074b6a0 Removed a Dminus from high mode v and removed duplication of D_oo code 2018-07-20 16:55:24 +01:00
7dfd3cdae8 Inclusion of ExportPhysicalFermionSource that fixes a bug in the low mode w vectors 2018-07-20 15:45:43 +01:00
cecee1ef2c Merge branch 'develop' of github.com:paboyle/Grid into feature/hadrons 2018-07-20 13:37:50 +01:00
355d4b58be Merge branch 'feature/hadrons' of github.com:fionnoh/Grid into feature/hadrons 2018-07-19 16:07:54 +01:00
2c54a536f3 Moved the meson field inner product to its own header file 2018-07-19 15:56:52 +01:00
d868a45120 Cleaned up some stuff that was erroneously included in a previous "trash" commit. Leaving in the mySliceInnerProdct function for now as it speeds up mesonfield creation quite a lot for 24^3 tests 2018-07-16 16:19:59 +01:00
9deae8c962 A2A meson field contraction code 2018-07-16 14:18:45 +01:00
db86cdd7bd Possible trash commit 2018-07-10 13:30:45 +01:00
ec9939c1ba Test for faster implementation of meson field inner loop
It should be possible to cache block this at outer levels; the global sum across nodes is not
performed here but deferred to the caller, which can block them all into one big all-reduce.
Nc=3 and the Fermion type are hard coded in an ugly way. We might think about benchmarking whether
a product without the conjugate should be made available by Grid.

It is not clear whether the explicit unroll or the performing of the conjugate on the left once
was the real source of the speed-up.

Gives 70-80 GF/s on my laptop in single precision, half that in double, and 70 GB/s to cache.

This is competitive with dslash and a reasonable stopping point for the optimisation. If necessary we can revisit.
2018-07-10 12:38:51 +01:00
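
A minimal sketch of the inner-product idea described in the commit message above, with assumed names and types (illustrative only, not Grid's actual kernel): the conjugate is applied to the left operand once, the Nc=3 colour loop is unrolled explicitly, and only the node-local sum is returned so the caller can batch the global reductions.

#include <array>
#include <complex>
#include <cstddef>
#include <vector>

using Complex = std::complex<double>;
constexpr int Nc = 3; // hard coded, as in the commit

// Node-local inner product lhs^dag . rhs over all local sites; no global sum,
// so the caller can block many of these into one big all-reduce.
Complex localInnerProduct(const std::vector<std::array<Complex, Nc>> &lhs,
                          const std::vector<std::array<Complex, Nc>> &rhs)
{
    Complex acc = 0.0;
    for (std::size_t s = 0; s < lhs.size(); ++s) {
        // explicit Nc=3 unroll; conjugation performed on the left operand only
        acc += std::conj(lhs[s][0]) * rhs[s][0]
             + std::conj(lhs[s][1]) * rhs[s][1]
             + std::conj(lhs[s][2]) * rhs[s][2];
    }
    return acc;
}
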
f74617c124 Added ZFIMPL to meson field module 2018-07-03 14:04:53 +01:00
8c6a3921ed Merge remote-tracking branch 'upstream/feature/hadrons' into feature/hadrons 2018-07-03 11:35:14 +01:00
a8a15dd9d0 Hadrons: code cleaning 2018-07-02 17:52:39 +01:00
3ce68a751a Hadrons: stout smearing module 2018-07-02 17:52:04 +01:00
daa0977d01 Included a print statement that indicates that the guess is being subtracted from the solve. 2018-06-28 16:34:56 +01:00
a2929f4384 Removed A2A contraction module and replaced it with the beginnings of a meson field module 2018-06-28 16:17:26 +01:00
7fe3974c0a Included eigenPacks and action as references, not inputs, of A2A module. They no longer need to be parameters in the meson field modules. 2018-06-28 16:14:49 +01:00
f7e86f81a0 Changes A2A class to make use of the new Solver class 2018-06-28 16:14:16 +01:00
fecec803d9 Merge branch 'feature/hadrons' of https://github.com/paboyle/Grid into feature/hadrons 2018-06-28 16:13:43 +01:00
8fe9a13cdd Merge branch 'feature/hadrons' of https://github.com/paboyle/Grid into feature/hadrons 2018-06-28 16:13:07 +01:00
d2c42e6f42 Hadrons: scaled DWF action 2018-06-26 14:59:33 +01:00
049cc518f4 Hadrons: introduction message 2 2018-06-25 19:08:39 +01:00
2e1c66897f Hadrons: introduction message 2018-06-25 19:08:22 +01:00
adcef36189 Hadrons: Möbius DWF action 2018-06-25 15:58:35 +01:00
2f121c41c9 Committing creation of meson field code before a merge with the upstream branch feature/hadrons 2018-06-25 12:20:46 +01:00
e0ed7e300f Hadrons: spurious Dminus removed 2018-06-22 16:33:43 +02:00
485207901b Merge branch 'develop' into feature/hadrons 2018-06-22 16:15:32 +02:00
c760f0a4c3 Hadrons: remove make_5D/4D functions and FreeProp fix 2018-06-22 16:12:46 +02:00
c84eeedec3 Hadrons: GaugeProp module for z-Wilson actions 2018-06-22 15:53:22 +02:00
1ac3526f33 Small changes to the A2A header and module 2018-06-22 12:29:42 +01:00
0de090ee74 Temporarily added in the contraction code that produced the working 2-pt function. This is committed for reference only and will be removed in the next push. 2018-06-22 12:28:41 +01:00
91405de3f7 Hadrons: new solver exposing fermion matrix and generic source/solve import/export 2018-06-22 12:14:37 +02:00
8fccda301a Fixed a bug where the guess was always subtracted after the solve and included appropriate weights for the sources in the one case we're looking at now. More work needs to be done to make the 5d/4d source logic less brittle. 2018-06-21 16:36:59 +01:00
7a0abfac89 Restructured the class that computes and returns the A2A vectors. 2018-06-21 16:36:06 +01:00
ae37fda699 A more elegant way to subtract guesses from solve and a bool check before verifying residual 2018-06-20 16:07:40 +01:00
b5fc5e2030 All to all module update that hit a promising milestone. Committing as a reference for future changes. 2018-06-20 10:59:07 +01:00
8db0ef9736 Merge pull request #168 from jch1g10/feature/qed-fvol
Feature/qed fvol
2018-06-08 20:09:06 +02:00
95d4b46446 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-06-08 11:30:29 +01:00
5dfd216a34 Better thread safety 2018-06-04 21:08:44 +01:00
c2e8d0aa88 Solve g++ problem on the lanczos test 2018-06-04 18:34:15 +01:00
0fe5aeffbb Merge branch 'feature/hadrons' into feature/qed-fvol 2018-06-04 16:59:43 +01:00
7fbc469046 Merge branch 'develop' into feature/hadrons 2018-06-04 16:58:30 +01:00
bf96a4bdbf Merge branch 'master' into develop 2018-06-04 14:03:11 +01:00
84685c9bc3 Overflow fix 2018-06-04 13:42:07 +01:00
a8d4156997 Added a Hadrons module that computes the all-to-all v and w vectors 2018-05-31 17:18:58 +01:00
c18074869b Changes to Hadrons SchurRB solver to allow for a subtract_guess boolean to be passed 2018-05-31 17:17:16 +01:00
f4c6d39238 Changes made to SchurRB solvers to allow for the subtraction of a guess after solve 2018-05-31 17:16:20 +01:00
200d35b38a Merge branch 'develop' into feature/hadrons 2018-05-28 11:52:47 +02:00
eb52e84d09 Merge branch 'feature/hadrons' of github.com:paboyle/Grid into feature/hadrons 2018-05-28 11:50:27 +02:00
72abc34764 Merge pull request #166 from guelpers/feature/hadrons
Feature/hadrons
2018-05-28 11:49:46 +02:00
e3164d4c7b Hadrons: env function to get volume in double 2018-05-28 11:39:17 +02:00
f5db386c55 Change MODULE_REGISTER_NS -> MODULE_REGISTER in UnitEM, ScalarVP and VPCounterTerms 2018-05-22 16:16:21 +01:00
294ee70a7a Merge branch 'feature/hadrons' into feature/qed-fvol
# Conflicts:
#	extras/Hadrons/modules.inc
#	lib/qcd/action/gauge/Photon.h
2018-05-21 18:02:41 +01:00
013ea4e8d1 Merge branch 'feature/staggered-comms-compute' into develop 2018-05-21 13:11:56 +01:00
7fbbb31a50 Merge branch 'develop' into feature/staggered-comms-compute
Conflicts:
	lib/qcd/action/fermion/ImprovedStaggeredFermion.cc
2018-05-21 13:07:29 +01:00
0e127b1fc7 New file single prec test 2018-05-21 12:57:13 +01:00
68c028b0a6 Comment 2018-05-21 12:54:25 +01:00
255d4992e1 Hadrons: stochastic scalar SU(N) free field fix 2018-05-18 20:49:55 +01:00
a0d399e5ce Hadrons: yet other attempts at EMT NPR 2018-05-18 20:49:26 +01:00
fd3b2e945a Hadrons: don't write result with empty stem 2018-05-18 20:48:24 +01:00
b999984501 Merge branch 'develop' into feature/hadrons 2018-05-15 13:53:57 +01:00
7836cc2d74 No checksum output on log for scidac 2018-05-15 10:10:08 +01:00
a61e0df54b Travis fix for Lime 2018-05-14 19:56:12 +01:00
9d835afa35 Attempt at solving the FP exception in the QED code 2018-05-14 19:05:54 +01:00
5e3be47117 Hadrons: scalar SU(N) various fixes 2018-05-14 18:58:39 +01:00
48de706dd5 Merge branch 'develop' into feature/hadrons 2018-05-11 18:06:40 +01:00
f871fb0c6d check file is opened correctly in the Lime reader 2018-05-11 18:06:28 +01:00
93771f3099 Hadrons: scalar SU(N) stochastic free field 2018-05-10 22:29:48 +01:00
8cb205725b Merge branch 'develop' into feature/hadrons 2018-05-09 23:56:35 +01:00
9ad580d82f Hadrons: format fix 2018-05-07 21:38:15 +01:00
899f961d0d Hadrons: eigenvalue metadata saved with 16 significant digits 2018-05-07 21:37:03 +01:00
54d789204f more general implementation of the precision interface for serialisers 2018-05-07 21:17:46 +01:00
25828746f3 XML precision scientific with 16 digits by default 2018-05-07 21:04:31 +01:00
f362c00739 Hadrons: better handling of automatic directory creation 2018-05-07 19:43:40 +01:00
25d1cadd3b Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-05-07 18:55:09 +01:00
c24d53bbd1 Further debug of RNG I/O 2018-05-07 18:55:05 +01:00
2017e4e3b4 Hadrons: more verbose directory creation error 2018-05-07 18:12:22 +01:00
27a4d4c951 Hadrons: multi-file eigenpack in separate directory 2018-05-07 17:52:54 +01:00
2f92721249 Merge branch 'develop' into feature/hadrons 2018-05-07 17:26:47 +01:00
3c7a4106ed Trap for deadly empty comm thread option 2018-05-07 17:26:39 +01:00
3252059daf Hadrons: multi-file support for eigenpacks 2018-05-07 17:25:36 +01:00
6eed167f0c Merge branch 'release/0.8.1' 2018-05-04 17:34:11 +01:00
4ad0df6fde Bump volume for Gerardo 2018-05-04 17:33:23 +01:00
661381e881 Merge branch 'develop' into feature/hadrons 2018-05-04 14:52:17 +01:00
68a5079f33 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-05-04 14:13:54 +01:00
8634e19f1b Update 2018-05-04 14:13:35 +01:00
9ada378e38 Add timing 2018-05-04 10:58:01 +01:00
9d9692d439 Fix double vs float in boundary phases 2018-05-03 16:40:16 +01:00
0659ae4014 Merge branch 'develop' into feature/hadrons 2018-05-03 16:20:22 +01:00
bfbf2f1fa0 no threaded stencil benchmark if OpenMP is not supported 2018-05-03 16:20:01 +01:00
dd6b796a01 Hadrons: scalar SU(N) volume factor fix 2018-05-03 16:19:17 +01:00
52a856b4a8 FreeProp module for Hadrons 2018-05-03 12:33:20 +01:00
04190ee7f3 5D free propagator for DWF and boundary conditions for free propagators 2018-05-03 12:31:36 +01:00
587bfcc0f4 Add Timing 2018-05-03 12:10:31 +01:00
2700992ef5 Merge remote-tracking branch 'upstream/feature/hadrons' into feature/hadrons 2018-05-03 10:01:52 +01:00
8c658de179 Compressor speed up (a little); streaming stores 2018-05-02 17:52:16 +01:00
ba37d51ee9 Debugging the RNG IO 2018-05-02 15:32:06 +01:00
4f4181c54a Merge branch 'feature/staggered-comms-compute' of https://github.com/paboyle/Grid into feature/staggered-comms-compute 2018-05-02 14:59:13 +01:00
4d4ac2517b Adding Scalar field theory example for Scidac format 2018-05-02 14:36:32 +01:00
e568c24d1d Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-05-02 14:29:25 +01:00
b458326744 Checkpointer module update 2018-05-02 14:29:22 +01:00
6e7d5e2243 HMC: added Scidac checkpointer and support for metadata 2018-05-02 14:28:59 +01:00
b35169f1dd MultiShift for Staggered 2018-05-02 14:22:37 +01:00
441ad7498d add Iterative counter 2018-05-02 14:21:30 +01:00
6f6c5c549a Split off gparity 2018-05-02 14:11:23 +01:00
1584e17b54 Revert to fast version 2018-05-02 14:10:55 +01:00
12982a4455 Hypercube optimisation 2018-05-02 14:10:21 +01:00
172f412102 shmget reintroduce 2018-05-02 14:07:41 +01:00
a64497265d Timing 2018-05-02 14:07:28 +01:00
ca639c195f Merge branch 'develop' into feature/hadrons 2018-05-01 14:07:32 +01:00
edc28dcfbf Hadrons: scalar SU(N) 2-pt fix 2018-05-01 14:02:31 +01:00
c45f24a1b5 Improvements for tesseract 2018-04-30 21:50:00 +01:00
aaf37ee4d7 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-27 11:45:13 +01:00
1dddd17e3c Benchmark improvements from tesseract 2018-04-27 11:44:46 +01:00
661f1d3e8e Merge branch 'release/0.8.0' into develop 2018-04-27 11:22:33 +01:00
edcf9b9293 Merge branch 'release/0.8.0' 2018-04-27 11:13:19 +01:00
fe6860b4dd Update with LIME library guard 2018-04-27 08:57:34 +01:00
d6406b13e1 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-27 07:52:56 +01:00
e369d7306d Rename 2018-04-27 07:51:44 +01:00
9f8d63e104 Roll over version 2018-04-27 07:51:12 +01:00
9b0240d101 Hot start test 2018-04-27 07:50:51 +01:00
b27f0e5a53 Control over IO 2018-04-27 07:50:15 +01:00
75e4483407 Stronger convergence test 2018-04-27 07:49:57 +01:00
0734e9ddd4 Debugging Scatter_plane_simple 2018-04-27 14:39:01 +09:00
809b1cdd58 Bug fix for MPI running ; introduced last night 2018-04-27 05:19:10 +01:00
1be8089604 Clean compile 2018-04-26 23:42:45 +01:00
3e0eff6468 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-26 23:00:46 +01:00
7ecc47ac89 Quenched test compile 2018-04-26 23:00:28 +01:00
e9f1ac09de static 2018-04-26 23:00:08 +01:00
fa0d8feff4 Performance of CovariantCshift now non-embarrassing. 2018-04-26 17:56:27 +01:00
49b8501fd4 Merge branch 'develop' into feature/hadrons 2018-04-26 17:33:50 +01:00
d47484717e Hadrons: scalar SU(N) result handling improvement 2018-04-26 17:32:37 +01:00
05b44aef6b Merge branch 'develop' of https://github.com/paboyle/Grid into develop
Conflicts:
	benchmarks/Benchmark_su3.cc
2018-04-26 15:38:49 +01:00
03e9832efa Use macros for bare openmp 2018-04-26 14:50:02 +01:00
28a375d35d Force static 2018-04-26 14:49:42 +01:00
3b06381745 Guard bare openmp statement with ifdef 2018-04-26 14:48:57 +01:00
91a0a3f820 Improvement 2018-04-26 14:48:35 +01:00
8f44c799a6 Saving the benchmarking tests for Cshift 2018-04-26 14:48:03 +01:00
96272f3841 Merge staggered fix linear operator and reduction 2018-04-26 10:33:19 +01:00
5c936d88a0 Merge branch 'feature/staggered-comms-compute' of https://github.com/paboyle/Grid into feature/staggered-comms-compute 2018-04-26 10:18:37 +01:00
1c64ee926e Faster staggered operator with m^2 term trivial used 2018-04-26 10:17:49 +01:00
2cbb72a81c Provide info if EE term is trivial (m^2 factor)
Better timing in staggered 4d case
2018-04-26 10:10:07 +01:00
31d83ee046 Enable special treatment of constEE cases 2018-04-26 10:08:46 +01:00
a9e8758a01 Improvements to staggered tests timings 2018-04-26 10:08:05 +01:00
3e125c5b61 Faster linalg on CG optimised against staggered
Sum overhead is bigger for staggered
2018-04-26 10:07:19 +01:00
eac6ec4b5e Faster reductions, important on single node staggered 2018-04-26 10:03:57 +01:00
213f8db6a2 Microsecond resolution 2018-04-26 10:01:39 +01:00
6358f35b7e Debug of previous commit 2018-04-26 14:18:11 +09:00
43f5a0df50 More timers in the integrator 2018-04-26 12:01:56 +09:00
c897878776 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-26 11:31:57 +09:00
cc6eb51e3e Hadrons: macro refactoring for library portability 2018-04-25 16:49:14 +01:00
507009089b Merge remote-tracking branch 'upstream/feature/hadrons' into feature/hadrons 2018-04-25 09:36:39 +01:00
2baf193031 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-25 00:14:03 +01:00
362ba0443a Cshift updates 2018-04-25 00:12:11 +01:00
276a2353df Move constructor 2018-04-25 00:11:07 +01:00
b234784c8e Hadrons: scalar SU(N) takes operator pairs now 2018-04-24 19:52:12 +01:00
6ea2a8b7ca Hadrons: scheduler shows starting value 2018-04-24 19:51:47 +01:00
c1d0359aaa Hadrons: scalar SU(N) kinetic term saves trace 2018-04-24 19:51:22 +01:00
047ee4ad0b Hadrons: scalar SU(N) cleanup 2018-04-24 19:50:58 +01:00
a13106da0c Hadrons: scalar SU(N) gradient 2018-04-24 19:50:30 +01:00
75113e6523 Hadrons: Scalar SU(N) variable name update 2018-04-24 19:49:27 +01:00
325c73d051 Hadrons: module template update 2018-04-24 19:48:54 +01:00
b25a59e95e Hadrons: mitigation of GCC/Intel compiler bug not generating defaulted destructors 2018-04-24 17:20:25 +01:00
c5b9147b53 Correction of a minor bug in the su3 benchmark 2018-04-24 08:03:57 -07:00
64ac815fd9 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-24 17:27:38 +09:00
a1be533329 Corrected Flop count in Benchmark su3 and expanded the Wilson flow output 2018-04-24 01:19:53 -07:00
7c4533797f Hadrons: scalar SU(N) EMT improvement term optional 2018-04-23 22:46:39 +01:00
af84fd65bb Hadrons: missing dependency message improvement 2018-04-23 22:46:17 +01:00
6764362237 Hadrons: automatic directory creation fix 2018-04-23 18:45:39 +01:00
2fa2b0e0b1 Hadrons: Application header does not include all the modules 2018-04-23 17:57:17 +01:00
b61292f735 Hadrons: recursive mkdir function 2018-04-23 17:36:43 +01:00
ce7720e221 Hadrons: copyright update 2018-04-23 17:36:20 +01:00
853a5528dc Hadrons: template modules compilation optimisation 2018-04-23 17:35:01 +01:00
169f405c9c Hadrons: tests repaired 2018-04-23 12:48:34 +01:00
c6125b01ce Hadrons: Error and Warning channels always on 2018-04-23 12:48:17 +01:00
b0b5b34bff Hadrons: custom abort with module trace 2018-04-23 12:48:00 +01:00
1c9722357d Merge branch 'develop' into feature/hadrons
# Conflicts:
#	lib/qcd/action/fermion/FermionOperator.h
2018-04-20 17:15:21 +01:00
141da3ae71 function to get tensor dimensions 2018-04-20 17:13:34 +01:00
94edf9cf8b HDF5: direct access to group for custom operations 2018-04-20 17:13:21 +01:00
c11a3ca0a7 vectorise/unvectorise in reverse order 2018-04-20 17:13:04 +01:00
870b1a85ae Think I have the physical prop interface to CF and PF overlap right, but need a strong check/regression.
Only support Hw overlap, not Ht for now. Ht needs a new Dminus implemented.
2018-04-18 14:17:49 +01:00
b5510427f9 physical fermion interface, cshift benchmark in SU3. 2018-04-18 01:43:29 +01:00
26ed65c8f8 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-17 12:03:32 +01:00
f7f043d8cf Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-04-17 10:57:18 +01:00
ddcaa6ad29 Master does header on Nersc 2018-04-17 10:48:33 +01:00
334da7f452 Hadrons: can trace which module is throwing an error 2018-04-13 18:45:31 +02:00
4669ecd4ba Hadrons: build improvement 2018-04-13 18:21:18 +02:00
4573b34cac Hadrons: scalar SU(N) 2-pt functions with momentum 2018-04-13 18:21:00 +02:00
17f57e85d1 Merge branch 'develop' into feature/hadrons 2018-04-06 22:53:11 +01:00
c8d4d184ee XML push fragment fix 2018-04-06 22:53:01 +01:00
17f27b1ebd Hadrons: eigenpack writer fix 2018-04-06 22:52:11 +01:00
a16bbecb8a Hadrons: more feedback 2018-04-06 19:38:20 +01:00
7c9b0dd842 Hadrons: top level name for eigenpack metadata 2018-04-06 19:32:22 +01:00
6b7228b3e6 Hadrons: better metadata for eigenpack 2018-04-06 19:29:53 +01:00
f117552334 post-merge fix 2018-04-06 18:38:46 +01:00
a21a160029 Merge branch 'develop' into feature/hadrons
# Conflicts:
#	lib/serialisation/XmlIO.cc
2018-04-06 18:34:19 +01:00
1569a374a9 XML interface polish, XML fragments can be pushed into a writer 2018-04-06 18:32:14 +01:00
eddf023b8a pugixml 1.9 update 2018-04-06 16:17:22 +01:00
6b8ffbe735 Hadrons: genetic minimum value type fix 2018-04-06 15:41:31 +01:00
81050535a5 Hadrons: truncate eigenvalues when loading partial eigenpack 2018-04-06 13:48:58 +01:00
7dcf5c90e3 Hadrons: eigenpack must be referred by solver when used 2018-04-06 13:16:28 +01:00
9ce00f26f9 No special characters in std::vector operator<< 2018-04-04 17:44:56 +01:00
85c253ed4a Test_serialisation MPI fix 2018-04-04 17:19:34 +01:00
ccfc0a5a89 Hadrons: better string representation of module parameters 2018-04-04 17:19:22 +01:00
d3f857b1c9 Hadrons: proper metadata for eigenpacks 2018-04-04 16:36:37 +01:00
fb62035aa0 Hadrons: do not create RB coarse grids 2018-04-03 19:49:11 +01:00
0260bc7705 Hadrons: eigen pack writing only for boss node 2018-04-03 18:55:46 +01:00
68e6a58f12 Hadrons: several Lanczos fixes and improvements 2018-04-03 17:42:21 +01:00
640515e3d8 Merge branch 'develop' into feature/hadrons 2018-03-30 17:43:49 +01:00
f089bf5629 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-03-30 16:17:26 +01:00
276f113f28 IO uses master boss node for metadata. 2018-03-30 16:17:05 +01:00
97c579f637 Merge branch 'develop' into feature/hadrons 2018-03-30 16:04:44 +01:00
a13c109111 deterministic initialisation of field metadata 2018-03-30 16:03:01 +01:00
ab6afd18ac Still compile if no LIME 2018-03-30 13:39:20 +01:00
5bde64d48b Barrier required in parallel when we use ftell 2018-03-30 12:41:30 +01:00
2f5add4d5f Creation of file 2018-03-30 12:30:58 +01:00
c5a885dcd6 I/O benchmark 2018-03-29 19:57:41 +01:00
a4d8512fb8 Revert "Lattice serialisation, just HDF5 for the moment"
This reverts commit 8a0cf0194f.
2018-03-27 17:55:42 +01:00
5ec903044d Serial IO code cleaning for std:: convention 2018-03-27 17:11:50 +01:00
8a0cf0194f Lattice serialisation, just HDF5 for the moment 2018-03-26 19:16:16 +01:00
1c680d4b7a Merge branch 'develop' into feature/hadrons 2018-03-26 13:52:44 +01:00
c9c073eee4 Changes in messages in test dwf mixedprec 2018-03-23 11:27:56 +00:00
f290b2e908 Fix to pass CI tests 2018-03-23 11:14:23 +00:00
5f8225461b Fencing mixedcg test propagator write. LIME is still optional in Grid 2018-03-23 10:37:58 +00:00
e9323460c7 Merge branch 'develop' into feature/hadrons 2018-03-22 10:48:37 +00:00
20e186a1e0 Merge pull request #158 from goracle/dev-pull
Make compilation faster by moving print of git hash.
2018-03-22 10:45:17 +00:00
6ef4af989b Merge pull request #159 from goracle/dev-precsafe
Add dimension check to precisionChange.
2018-03-22 10:41:53 +00:00
ccde8b817f Add dimension check to precisionChange. 2018-03-21 20:58:04 -04:00
68168bf72d Revert "Add dimension match check to precisionChange."
This reverts commit 8f601d9b39.
2018-03-21 20:51:38 -04:00
e93d0feaa7 Merge branch 'dev-pull' of github.com:goracle/Grid into dev-pull 2018-03-21 20:39:30 -04:00
8f601d9b39 Add dimension match check to precisionChange. 2018-03-21 20:38:19 -04:00
5436308e4a Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-03-21 14:26:29 +00:00
07fe7d0cbe Save file in current dir; print checksums 2018-03-21 14:26:04 +00:00
60b57706c4 Small bug fix in the shm file names 2018-03-21 13:57:30 +00:00
58c2f60b69 Merge branch 'feature/hadrons' into feature/qed-fvol 2018-03-20 20:19:18 +00:00
bfa3a7b3b0 Merge branch 'feature/hadrons' into feature/qed-fvol
# Conflicts:
#	extras/Hadrons/Modules.hpp
#	extras/Hadrons/Modules/MGauge/StochEm.cc
#	extras/Hadrons/modules.inc
2018-03-20 20:17:59 +00:00
954e38bebe Put a username in the path 2018-03-20 18:16:15 +00:00
b1a38bde7a Extra test for Gparity with plaquette action 2018-03-20 18:01:32 +00:00
2581875edc Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-03-19 18:00:08 +00:00
f212b0a963 Merge branch 'feature/hadrons' of https://github.com/paboyle/Grid into feature/hadrons 2018-03-19 17:57:13 +00:00
62702dbcb8 Fixing bug in the Point sink causing NaNs 2018-03-19 17:56:53 +00:00
41d6cab033 Merge branch 'develop' into feature/hadrons 2018-03-19 13:30:21 +00:00
5a31e747c9 Merge commit 'd5ce66f6ab2c44a12def7b6d26df80d6e646b1fb' into feature/hadrons 2018-03-19 13:19:09 +00:00
cbc73a3fd1 Hadrons: CG guesser fix 2018-03-19 13:11:38 +00:00
6c6d43eb4e Drop RB on coarse space; that was a mistake 2018-03-17 09:35:01 +00:00
e1dcfd3553 typo fix 2018-03-16 23:10:47 +00:00
888838473a Make the offsets 4GB-clean in parallel IO for multifile records 2018-03-16 21:54:56 +00:00
01568b0e62 Add a new SHM option 2018-03-16 21:54:28 +00:00
d5ce66f6ab Extra SHM option 2018-03-16 21:37:03 +00:00
d86936a3de Eliminating deprecated lex_sites 2018-03-16 12:26:39 +00:00
d516938707 Hadrons: eigen packs I/O and deflation interface 2018-03-14 14:55:47 +00:00
72344d1418 Hadrons: change default Schur convention to DiagTwo 2018-03-13 17:10:54 +00:00
7ecf6ab38b Merge branch 'develop' into feature/hadrons 2018-03-13 16:11:59 +00:00
2d4d70d3ec Hadrons: LCL fixes 2018-03-13 16:10:36 +00:00
78f8d47528 Hadrons: environment access to derived objects 2018-03-13 16:10:16 +00:00
b85f987b0b Hadrons: error message channel verbose during profiling 2018-03-13 16:09:22 +00:00
f57afe2079 Hadrons: much cleaner eigenpack implementation, to be tested 2018-03-13 13:51:09 +00:00
0fb84fa34b Make compilation faster by moving print of git hash. 2018-03-12 17:03:48 -04:00
8462bbfe63 Gamma input for meson contraction with round brackets 2018-03-12 18:02:12 +00:00
229977c955 Hadrons: minor memory fix for ShiftProbe module 2018-03-09 21:56:27 +00:00
e485a07133 Hadrons: garbage collector debug output 2018-03-09 21:56:01 +00:00
0880747edb Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-03-09 20:44:42 +00:00
b801e1fcd6 fclose should be called through a call to close() 2018-03-09 20:44:10 +00:00
70ec2faa98 Hadrons: maximum iteration specified for tests and error if 0 2018-03-09 19:53:55 +00:00
2f849ee252 declaration fix 2018-03-08 23:34:00 +00:00
bb6ed44339 Merge branch 'develop' into feature/hadrons 2018-03-08 23:09:28 +00:00
360cface33 Grid tensor serialisation fully implemented and tested 2018-03-08 19:12:03 +00:00
80302e95a8 MILC Interface 2018-03-08 15:34:03 +00:00
caf2f6b274 Merge branch 'develop' of github.com:paboyle/Grid into develop 2018-03-08 09:52:25 +00:00
c49be8988b Grid tensor serialisation 2018-03-08 09:51:22 +00:00
971c2379bd std::vector to tensor conversion + test units 2018-03-08 09:50:39 +00:00
94b0d66e4c Merge pull request #157 from goracle/dev-pull
Add print of the current git hash on Grid init.
2018-03-08 16:09:28 +09:00
5e8af396fd Add print of the current git hash on Grid init. 2018-03-07 13:11:51 -05:00
9942723189 Merge branch 'develop' into feature/hadrons
# Conflicts:
#	lib/serialisation/BaseIO.h
2018-03-07 15:22:16 +00:00
a7d19dbb64 Merge branch 'develop' of github.com:paboyle/Grid into develop
# Conflicts:
#	lib/serialisation/BaseIO.h
2018-03-07 15:13:54 +00:00
90dbe03e17 Conversion of Grid tensors to std::vector made more elegant, also pair syntax changed to (x y) to avoid issues with JSON/XML 2018-03-07 15:12:32 +00:00
8b14096990 Conversion of Grid tensors to std::vector made more elegant, also pair syntax changed to (x y) to avoid issues with JSON/XML 2018-03-07 15:12:18 +00:00
b938202081 Overlapped Comm for Wilson DhopInternal 2018-03-07 14:08:43 +00:00
e79ef469ac Merge branch 'develop' into feature/hadrons
# Conflicts:
#	lib/serialisation/BaseIO.h
2018-03-06 19:25:51 +00:00
485c5db0fe conversion of Grid tensors to nested std::vector in preparation for tensor serialisation 2018-03-06 19:22:03 +00:00
c793947209 Add overloaded Photon constructors, with default parameters for IR improvements and infinite-volume G(x=0). 2018-03-06 16:27:26 +00:00
3e9ee053a1 Merge branch 'develop' into feature/hadrons 2018-03-05 20:01:38 +00:00
dda6c69d5b Hadrons: scalar SU(N) shift probes 2018-03-05 20:00:29 +00:00
cd51b9af99 Torture yourself with namespace lookup 101 2018-03-05 19:58:13 +00:00
c399c2b44d Guido broke the charge conjugate plaquette action with premature optimisation.
This sector of the code does not matter for anything other than Guido's quenched HMC
studies, and any plaq specific optimisations should be retained in a private branch
instead of destroying the code simplicity.
2018-03-05 12:55:41 +00:00
af7de7a294 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-03-05 12:22:41 +00:00
1dc86efd26 Finalize protection 2018-03-05 12:22:18 +00:00
f32555dcc5 Merge branch 'develop' into feature/hadrons 2018-03-03 15:31:52 +00:00
30391cb2eb Merge pull request #155 from fionnoh/develop
Some changes needed for deflation interface
2018-03-03 13:43:59 +00:00
e93c883470 Hadrons: basic GraphViz visualisation 2018-03-03 13:42:36 +00:00
2e88408f5c Some changes needed for deflation interface 2018-03-02 22:27:41 +00:00
fcac5c0772 Hadrons: scalar SU(N) fixes 2018-03-02 19:20:23 +00:00
90f4000935 Hadrons: scheduler debug less verbose 2018-03-02 19:20:01 +00:00
480708b9a0 Hadrons: safer error handling for HadronsXmlRun 2018-03-02 19:19:37 +00:00
c4baf876d4 Hadrons: graph consistency check 2018-03-02 18:40:18 +00:00
2f4dac3531 Hadrons: legal update 2018-03-02 18:10:58 +00:00
3ec6890850 Merge branch 'feature/hadrons' of github.com:paboyle/Grid into feature/hadrons 2018-03-02 17:56:08 +00:00
018801d973 Hadrons: legal update 2018-03-02 17:56:00 +00:00
1d83521daa Hadrons: scalar SU(N) EMT 2018-03-02 17:55:18 +00:00
fc5670c6a4 Merge pull request #151 from guelpers/feature/hadrons
Feature/hadrons
2018-03-02 17:54:43 +00:00
d9c435e282 Hadrons: Scalar SU(N) transverse projection module 2018-03-02 17:35:12 +00:00
614a0e8277 Hadrons: Scalar SU(N) utility functions 2018-03-02 17:34:23 +00:00
aaf39222c3 Updated my fork and fixed conflicts 2018-03-02 17:08:08 +00:00
550142bd6a Hadrons: more code cleaning 2018-03-02 14:30:45 +00:00
c0a929aef7 Hadrons: code cleaning 2018-03-02 14:29:54 +00:00
37fe944224 Hadrons: scalar kinetic term 2018-03-02 14:14:11 +00:00
315a42843f changes requested for the pull request 2018-03-02 11:47:38 +00:00
83a101db83 Hadrons: more LCL fixes 2018-03-02 11:05:02 +00:00
c4274e1660 Hadrons: LCL cleaning 2018-03-02 10:18:33 +00:00
ba6db55cb0 Hadrons: reverse last commit 2018-03-01 23:30:58 +00:00
e5ea84d531 Hadrons: LCL: orthogonalise coarse evec 2018-03-01 19:33:11 +00:00
15767a1491 Hadrons: LCL fine convergence test 2018-03-01 18:04:08 +00:00
4d2a32ae7a Hadrons: z-Mobius message fix 2018-03-01 18:03:44 +00:00
5b937e3644 Hadrons: VM memory profiling fix 2018-03-01 17:28:38 +00:00
e418b044f7 Hadrons: code cleaning 2018-03-01 12:57:28 +00:00
b8b05f143f Hadrons: Lanczos more conservative type names 2018-03-01 12:53:16 +00:00
6ec42b4b82 LCL: external storage fix 2018-03-01 12:27:29 +00:00
abb7d4d2f5 Hadrons: z-Mobius action 2018-02-27 19:32:19 +00:00
16ebbfff29 Hadrons: Schur convention globally defined through a macro 2018-02-27 18:45:23 +00:00
4828226095 Hadrons: prettier log 2018-02-27 14:43:51 +00:00
8a049f27b8 Hadrons: Lanczos code improvement 2018-02-27 13:46:59 +00:00
43578a3eb4 Hadrons: copyright update 2018-02-26 19:24:19 +00:00
fdbd42e542 Hadrons: first implementation of local coherence Lanczos 2018-02-26 19:22:43 +00:00
e7e4cee4f3 Merge branch 'develop' into feature/hadrons 2018-02-26 15:05:05 +00:00
ec3954ff5f QedFVol: Add input parameter G(x=0) for infinite-volume photon 2018-02-23 14:53:05 +00:00
0f468e2179 OverlappedComm for Staggered 5D and 4D. 2018-02-22 12:50:09 +00:00
8e61286741 Merge branch 'develop' into feature/qed-fvol 2018-02-20 15:33:35 +00:00
4790e99817 Extra communicator free that I had missed.
Hard to audit them all as this is complex
2018-02-20 15:12:31 +00:00
2dd63aa7a4 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-02-20 14:29:26 +00:00
559a501140 Deflation interface for solvers 2018-02-20 14:29:08 +00:00
945684c470 updates for deflation in the RB solver 2018-02-20 14:28:38 +00:00
e30a80a234 Relaxed constraints on MPI thread mode when not using multiple comms threads 2018-02-15 17:13:36 +00:00
69e4ecc1d2 QedFVol: Fix single precision build error 2018-02-14 17:37:18 +00:00
5f483df16b Merge branch 'develop' into feature/qed-fvol 2018-02-14 16:35:04 +00:00
4680a977c3 QedFVol: set infinite-volume photon propagator to 1 at x=0,
so that momentum-space photon propagator is non-negative.
Need to check whether this is sufficient for all volumes.
2018-02-14 16:30:09 +00:00
de42456171 updated my fork and conflicts fixed 2018-02-14 13:57:56 +00:00
d55212c998 restructure SeqConservedCurrent for DWF to need less memory 2018-02-14 10:45:18 +00:00
c96483e3bd Whitespace only change 2018-02-13 11:39:07 +00:00
c6e1f64573 Test for QED 2018-02-13 09:30:23 +00:00
ae31a6a760 Move deflate to right class 2018-02-13 02:11:37 +00:00
dd8f2a64fe Interface to suit hadrons on Lanczos 2018-02-13 02:08:49 +00:00
724cf02d4a QedFVol: Implement infinite-volume photon 2018-02-12 17:18:10 +00:00
7b8b2731e7 Conj error for complex coeffs 2018-02-12 16:06:31 +00:00
237a8ec918 Communicator leak fixed (I think) 2018-02-12 13:27:20 +00:00
49a0ae73eb Insertion of photon field in sequential conserved current 2018-02-12 09:36:08 +00:00
315f1146cd QedFVol: Fix output of VPCounterTerms module. 2018-02-08 20:40:45 +00:00
9f202782c5 QedFVol: Change format of scalar VP output files, and save diagrams without charge factors for consistency with ChargedProp module. 2018-02-07 20:31:50 +00:00
594a262dcc QedFVol: Remove redundant file Communicator_mpi.cc 2018-02-07 11:37:01 +00:00
7f8ca54285 Merge branch 'develop' into feature/qed-fvol 2018-02-07 10:11:00 +00:00
c5b23c367e QedFVol: Fix segmentation fault when multiple propagator modules are used. 2018-02-05 11:46:33 +00:00
b6fe03eb26 BugFix: Now the stochastic EM potential weight is generated when calling for the first time 2018-02-02 15:29:38 +00:00
f37ed4958b Implement IR improvement, with coefficients set in input file. 2018-02-02 11:56:51 +00:00
896f3a8002 Fix to MPI for Hokusai system 2018-02-01 18:51:51 +00:00
5f85473d6b QedFVol: Move Projection class into Result class 2018-02-01 16:16:13 +00:00
ac3b0ebc58 QedFVol: New structure for ChargedProp output files 2018-02-01 12:31:32 +00:00
f0fcdf75b5 Update README.md 2018-01-30 12:44:20 +01:00
53bffb83d4 Updating README with new SKL target 2018-01-30 12:42:36 +01:00
cd44e851f1 Fixing compilation error in FundtoHirep 2018-01-30 06:04:30 +01:00
fb24e3a7d2 Adding utilities for perf profiling 2018-01-29 11:11:45 +01:00
655a69259a Added support for GCC compilation for Skylake AVX512 2018-01-28 17:02:46 +01:00
4e0cf0cc28 QedFVol: Fix bug in ScalarVP.cc due to double use of temporary object. Still getting mpi3 errors when configured with enable-comms=mpi[-auto]. 2018-01-27 15:15:25 +00:00
507c4e9efc Correcting a missing semicolon in avx512 2018-01-27 10:59:55 +01:00
cdf550845f QedFVol: Fix bugs in StochEm.cc and ChargedProp.cc (still only works without MPI). 2018-01-26 21:25:20 +00:00
3db7a5387b BROKEN: Adapted scalarVP, UnitEm and VPCounterTerms modules to new Hadrons. Currently getting an assertion error from Communicator_mpi3.cc when I try to run. 2018-01-26 16:33:48 +00:00
f8a5194c70 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2018-01-25 13:46:37 +01:00
cff3bae155 Adding support for general Nc in the benchmark outputs 2018-01-25 13:46:31 +01:00
90dffc73c8 Merge branch 'feature/hadrons' into feature/qed-fvol
# Conflicts:
#	extras/Hadrons/Modules.hpp
#	extras/Hadrons/Modules/MGauge/StochEm.cc
#	extras/Hadrons/Modules/MScalar/ChargedProp.cc
#	extras/Hadrons/Modules/MScalar/ChargedProp.hpp
#	extras/Hadrons/modules.inc
#	lib/communicator/Communicator_mpi.cc
2018-01-24 16:41:44 +00:00
a1151fc734 Hadrons: MPI-safe serial IO 2018-01-23 17:26:50 +00:00
ab3baeb38f Implement contractions and data output in functions; calculate diagrams S, X and 4C separately; output 2E and 2T instead of sunset_shifted, sunset_unshifted, tadpole_shifted, tadpole_unshifted; add comments. 2018-01-23 17:07:45 +00:00
389731d373 changed SeqConservedSummed.hpp to work with new hadrons interface 2018-01-23 10:11:33 +00:00
6e3ce7423e Hadrons: don't display module list at startup (too long) 2018-01-22 20:04:05 +00:00
15f15a7cfd Merge branch 'develop' into feature/hadrons
# Conflicts:
#	extras/Hadrons/Modules.hpp
#	extras/Hadrons/modules.inc
2018-01-22 20:03:36 +00:00
0e5f626226 Hadrons: module for scalar operator divergence 2018-01-22 19:38:19 +00:00
97b9c6f03d No option for interior/exterior split of asm kernels since different directions get interleaved 2018-01-22 11:04:19 +00:00
63982819c6 No option to overlap comms and compute for asm implementation since different directions are interleaved
in the kernels; introducing an if/else structure would be too painful
2018-01-22 11:03:39 +00:00
6fec507bef merged new hadrons interface 2018-01-22 10:09:20 +00:00
219b3bd34f Remove freeVpTensor object 2018-01-19 17:14:11 +00:00
24162c9ead Staggered overlap comms compute 2018-01-09 13:02:52 +00:00
935cd1e173 conserved current insertion summed over Lorentzindex 2017-12-22 11:38:45 +00:00
55e39df30f tadpole insertion for DWF 2017-12-22 11:36:31 +00:00
581be32ed2 Implement infrared improvement for v=0 on-shell self-energy 2017-12-14 13:42:41 +00:00
6bc136b1d0 Add module for calculating diagrams required for HVP counter-terms 2017-12-13 17:31:01 +00:00
0c668bf46a QedFVol: Write to output files from one process only. 2017-11-07 14:46:39 +00:00
840814c776 QedFVol: Patch to fix MPI communicators error 2017-11-06 16:34:55 +00:00
95af55128e QedFVol: Redo optimisation of scalar VP (extra memory requirements were not the problem), and undo optimisation of charged propagator (which seemed to be causing HDF5 errors, although I don’t know why). 2017-11-03 18:46:16 +00:00
9f2a57e334 QedFVol: Undo optimisation of scalar VP, to reduce memory requirements 2017-11-03 13:10:11 +00:00
c645d33db5 QedFVol: Redo optimisation of charged propagator, and fix I/O bug 2017-11-03 10:59:26 +00:00
e0f1349524 QedFVol: Undo optimisation of charged propagator 2017-11-03 09:22:41 +00:00
79b761f923 Merge branch 'develop' into feature/qed-fvol
# Conflicts:
#	lib/communicator/Communicator_base.cc
2017-10-30 15:53:18 +00:00
0d4e31ca58 QedFVol: Calculate phase factors for momentum projections once per configuration only. 2017-10-30 15:46:50 +00:00
b07a354a33 QedFVol: output scalar propagator before FFT in spatial directions. 2017-10-30 14:20:44 +00:00
c433939795 QedFVol: Temporarily remove incomplete implementation of infinite-volume photon 2017-10-20 16:27:58 +01:00
b6a4c31b48 Merge branch 'feature/qed-fvol' of https://github.com/jch1g10/Grid into feature/qed-fvol 2017-10-20 16:25:07 +01:00
98b1439ff9 QedFVol: pass arbitrary input values to photon constructor in UnitEm 2017-10-20 16:24:09 +01:00
564738b1ff Add module for unit EM field 2017-10-17 14:03:57 +01:00
a80e43dbcf Added infinite-volume photon in Photon.h (not checked yet) 2017-10-11 16:44:51 -04:00
b99622d9fb QedFVol: fix problem with JSON wanting gcc 4.9 2017-09-28 13:34:33 -04:00
937c77ead2 Merge branch 'develop' into feature/qed-fvol 2017-09-28 16:25:20 +01:00
95e5a2ade3 Merge pull request #116 from jch1g10/feature/qed-fvol
Feature/qed fvol
2017-09-25 15:08:33 +01:00
91676d1dda Fix “MAP_ANONYMOUS undefined” error on OSX. 2017-09-01 15:48:30 +01:00
ac3611bb19 Merge branch 'develop' of https://github.com/paboyle/Grid into feature/qed-fvol 2017-08-29 11:53:37 +01:00
cc4afb978d Fix bug in non-zero momentum projection 2017-08-24 17:31:44 +01:00
20e92a7009 QedFVol: Allow output of scalar propagator and vacuum polarisation projected to arbitrary lattice momentum, not just zero-momentum. 2017-06-12 18:27:32 +01:00
42f0afcbfa QedFVol: Output all scalar VP diagrams separately 2017-06-09 18:08:40 +01:00
20ac13fdf3 QedFVol: add ChargedProp as an input to ScalarVP module, instead of calculating scalar propagator within ScalarVP. 2017-06-08 17:43:39 +01:00
e38612e6fa QedFVol: Update ScalarVP module for compatibility with new scalar action 2017-06-07 17:42:00 +01:00
c2b2b71c5d Merge branch 'feature/qed-fvol' of https://github.com/paboyle/Grid into feature/qed-fvol
# Conflicts:
#	extras/Hadrons/Modules.hpp
#	extras/Hadrons/modules.inc
2017-06-07 16:59:47 +01:00
009f48a904 QedFVol: Add missing factor of 2 in free vacuum polarisation 2017-06-07 16:34:09 +01:00
5cfc0180aa QedFVol: Output free VP along with charged VP. 2017-05-09 12:46:57 +01:00
914f180fa3 QedFVol: Implement exact O(alpha) vacuum polarisation. 2017-05-09 11:46:25 +01:00
6cb563a40c QedFVol: Access HVP tensor using a vector<vector<ScalarField>> instead of vector<vector<ScalarField*>> 2017-05-05 17:12:41 +01:00
db3837be22 QedFVol: Change “double” to “Real” in ScalarVP.cc 2017-05-03 13:26:49 +01:00
2f0dd83016 Calculate HVP using a single contraction of O(alpha) charged propagators. 2017-05-03 12:53:41 +01:00
3ac27e5596 QedFVol: remove unnecessary copies of free propagator from shifted sources in ScalarVP 2017-04-27 14:17:50 +01:00
bd466a55a8 QedFVol: remove charge dependence in chargedProp function of ScalarVP 2017-04-25 10:04:03 +01:00
c8e6f58e24 Fix typos in ScalarVP 2017-04-13 17:04:37 +01:00
888988ad37 Merge branch 'feature/qed-fvol' of https://github.com/paboyle/Grid into feature/qed-fvol
# Conflicts:
#	lib/qcd/action/fermion/Fermion.h
2017-04-13 15:54:40 +01:00
e4a105a30b Merge branch 'feature/qed-fvol' of https://github.com/paboyle/Grid into feature/qed-fvol 2017-04-10 16:35:01 +01:00
26ebe41fef QedFVol: Implement charged propagator calculation within ScalarVP module 2017-04-10 16:33:54 +01:00
1e496fee74 Merge branch 'develop' into feature/qed-fvol
# Conflicts:
#	lib/qcd/action/fermion/Fermion.h
2017-04-03 19:02:57 +01:00
9f755e0379 Add functions momD1 and momD2 to ScalarVP 2017-03-27 16:49:18 +01:00
4512dbdf58 Rename module ScalarFV to ScalarVP 2017-03-27 15:02:16 +01:00
483fd3cfa1 Add propagator expansion terms as inputs to ScalarFV 2017-03-27 13:24:51 +01:00
85516e9c7c Output all terms of scalar propagator separately 2017-03-24 17:13:55 +00:00
0c006fbfaa Add ScalarFV inputs to ScalarFV.hpp 2017-03-24 11:59:09 +00:00
54c10a42cc Add source and emField inputs to ScalarFV module 2017-03-24 11:42:32 +00:00
ef0fe2bcc1 Added empty ScalarFV module 2017-03-21 11:39:46 +00:00
307 changed files with 20407 additions and 5266 deletions

1
.gitignore vendored
View File

@ -123,6 +123,7 @@ make-bin-BUCK.sh
#####################
lib/qcd/spin/gamma-gen/*.h
lib/qcd/spin/gamma-gen/*.cc
lib/version.h
# vs code editor files #
########################

View File

@ -19,6 +19,8 @@ before_install:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install libmpc; fi
install:
- export CWD=`pwd`
- echo $CWD
- export CC=$CC$VERSION
- export CXX=$CXX$VERSION
- echo $PATH
@ -36,11 +38,22 @@ script:
- ./bootstrap.sh
- mkdir build
- cd build
- ../configure --enable-precision=single --enable-simd=SSE4 --enable-comms=none
- mkdir lime
- cd lime
- mkdir build
- cd build
- wget http://usqcd-software.github.io/downloads/c-lime/lime-1.3.2.tar.gz
- tar xf lime-1.3.2.tar.gz
- cd lime-1.3.2
- ./configure --prefix=$CWD/build/lime/install
- make -j4
- make install
- cd $CWD/build
- ../configure --enable-precision=single --enable-simd=SSE4 --enable-comms=none --with-lime=$CWD/build/lime/install
- make -j4
- ./benchmarks/Benchmark_dwf --threads 1 --debug-signals
- echo make clean
- ../configure --enable-precision=double --enable-simd=SSE4 --enable-comms=none
- ../configure --enable-precision=double --enable-simd=SSE4 --enable-comms=none --with-lime=$CWD/build/lime/install
- make -j4
- ./benchmarks/Benchmark_dwf --threads 1 --debug-signals
- make check

View File

@ -5,6 +5,10 @@ include $(top_srcdir)/doxygen.inc
bin_SCRIPTS=grid-config
BUILT_SOURCES = version.h
version.h:
echo "`git log -n 1 --format=format:"#define GITHASH \\"%H:%d\\"%n" HEAD`" > $(srcdir)/lib/version.h
.PHONY: bench check tests doxygen-run doxygen-doc $(DX_PS_GOAL) $(DX_PDF_GOAL)
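
For illustration (hash shortened and branch decoration hypothetical), the rule above generates a one-line header of the form

#define GITHASH "e307bb7528...: (HEAD -> develop)"

which is the string printed at Grid init (see commit 5e8af396fd in the log above).
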

View File

@ -187,10 +187,11 @@ Alternatively, some CPU codenames can be directly used:
| `<code>` | Description |
| ----------- | -------------------------------------- |
| `KNL` | [Intel Xeon Phi codename Knights Landing](http://ark.intel.com/products/codename/48999/Knights-Landing) |
| `SKL` | [Intel Skylake with AVX512 extensions](https://ark.intel.com/products/codename/37572/Skylake#@server) |
| `BGQ` | Blue Gene/Q |
#### Notes:
- We currently support AVX512 only for the Intel compiler. Support for GCC and clang will appear in future versions of Grid when the AVX512 support within GCC and clang will be more advanced.
- We currently support AVX512 for the Intel compiler and GCC (KNL and SKL target). Support for clang will appear in future versions of Grid when the AVX512 support in the compiler will be more advanced.
- For BG/Q only [bgclang](http://trac.alcf.anl.gov/projects/llvm-bgq) is supported. We do not presently plan to support more compilers for this platform.
- BG/Q performances are currently rather poor. This is being investigated for future versions.
- The vector size for the `GEN` target can be specified with the `configure` script option `--enable-gen-simd-width`.

View File

@ -1,4 +1,4 @@
Version : 0.7.0
Version : 0.8.0
- Clang 3.5 and above, ICPC v16 and above, GCC 6.3 and above recommended
- MPI and MPI3 comms optimisations for KNL and OPA finished

108
benchmarks/Benchmark_IO.cc Normal file
View File

@ -0,0 +1,108 @@
#include <Grid/Grid.h>
#ifdef HAVE_LIME
using namespace std;
using namespace Grid;
using namespace Grid::QCD;
#define MSG cout << GridLogMessage
#define SEP \
"============================================================================="
#ifndef BENCH_IO_LMAX
#define BENCH_IO_LMAX 40
#endif
typedef function<void(const string, LatticeFermion &)> WriterFn;
typedef function<void(LatticeFermion &, const string)> ReaderFn;
string filestem(const int l)
{
return "iobench_l" + to_string(l);
}
void limeWrite(const string filestem, LatticeFermion &vec)
{
emptyUserRecord record;
ScidacWriter binWriter(vec._grid->IsBoss());
binWriter.open(filestem + ".bin");
binWriter.writeScidacFieldRecord(vec, record);
binWriter.close();
}
void limeRead(LatticeFermion &vec, const string filestem)
{
emptyUserRecord record;
ScidacReader binReader;
binReader.open(filestem + ".bin");
binReader.readScidacFieldRecord(vec, record);
binReader.close();
}
void writeBenchmark(const int l, const WriterFn &write)
{
auto mpi = GridDefaultMpi();
auto simd = GridDefaultSimd(Nd, vComplex::Nsimd());
vector<int> latt = {l*mpi[0], l*mpi[1], l*mpi[2], l*mpi[3]};
unique_ptr<GridCartesian> gPt(SpaceTimeGrid::makeFourDimGrid(latt, simd, mpi));
GridCartesian *g = gPt.get();
GridParallelRNG rng(g);
LatticeFermion vec(g);
emptyUserRecord record;
ScidacWriter binWriter(g->IsBoss());
cout << "-- Local volume " << l << "^4" << endl;
random(rng, vec);
write(filestem(l), vec);
}
void readBenchmark(const int l, const ReaderFn &read)
{
auto mpi = GridDefaultMpi();
auto simd = GridDefaultSimd(Nd, vComplex::Nsimd());
vector<int> latt = {l*mpi[0], l*mpi[1], l*mpi[2], l*mpi[3]};
unique_ptr<GridCartesian> gPt(SpaceTimeGrid::makeFourDimGrid(latt, simd, mpi));
GridCartesian *g = gPt.get();
LatticeFermion vec(g);
emptyUserRecord record;
ScidacReader binReader;
cout << "-- Local volume " << l << "^4" << endl;
read(vec, filestem(l));
}
int main (int argc, char ** argv)
{
Grid_init(&argc,&argv);
auto simd = GridDefaultSimd(Nd,vComplex::Nsimd());
auto mpi = GridDefaultMpi();
int64_t threads = GridThread::GetThreads();
MSG << "Grid is setup to use " << threads << " threads" << endl;
MSG << SEP << endl;
MSG << "Benchmark Lime write" << endl;
MSG << SEP << endl;
for (int l = 4; l <= BENCH_IO_LMAX; l += 2)
{
writeBenchmark(l, limeWrite);
}
MSG << "Benchmark Lime read" << endl;
MSG << SEP << endl;
for (int l = 4; l <= BENCH_IO_LMAX; l += 2)
{
readBenchmark(l, limeRead);
}
Grid_finalize();
return EXIT_SUCCESS;
}
#else
int main (int argc, char ** argv)
{
return EXIT_SUCCESS;
}
#endif

View File

@ -158,8 +158,10 @@ public:
dbytes=0;
ncomm=0;
parallel_for(int dir=0;dir<8;dir++){
#ifdef GRID_OMP
#pragma omp parallel for num_threads(Grid::CartesianCommunicator::nCommThreads)
#endif
for(int dir=0;dir<8;dir++){
double tbytes;
int mu =dir % 4;
@ -175,9 +177,14 @@ public:
int comm_proc = mpi_layout[mu]-1;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
}
#ifdef GRID_OMP
int tid = omp_get_thread_num();
#else
int tid = dir;
#endif
tbytes= Grid.StencilSendToRecvFrom((void *)&xbuf[dir][0], xmit_to_rank,
(void *)&rbuf[dir][0], recv_from_rank,
bytes,dir);
bytes,tid);
#ifdef GRID_OMP
#pragma omp atomic
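
In isolation, the guard pattern introduced in this hunk looks like the following sketch (function and parameter names are illustrative): with OpenMP, the 8 halo directions are distributed over a fixed pool of communication threads, each identified by omp_get_thread_num(); without OpenMP, the loop runs serially and the direction index doubles as the channel index.

#ifdef GRID_OMP
#include <omp.h>
#endif

void haloExchangeLoop(int nCommThreads)
{
#ifdef GRID_OMP
#pragma omp parallel for num_threads(nCommThreads)
#endif
    for (int dir = 0; dir < 8; dir++) {
#ifdef GRID_OMP
        int tid = omp_get_thread_num(); // one comms channel per OpenMP thread
#else
        int tid = dir;                  // serial fallback: channel = direction
#endif
        // ... the benchmark calls Grid.StencilSendToRecvFrom(..., bytes, tid) here ...
        (void)tid;
    }
}
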

View File

@ -169,7 +169,11 @@ int main (int argc, char ** argv)
for(int lat=4;lat<=maxlat;lat+=4){
for(int Ls=8;Ls<=8;Ls*=2){
std::vector<int> latt_size ({lat,lat,lat,lat});
std::vector<int> latt_size ({lat*mpi_layout[0],
lat*mpi_layout[1],
lat*mpi_layout[2],
lat*mpi_layout[3]});
GridCartesian Grid(latt_size,simd_layout,mpi_layout);
RealD Nrank = Grid._Nprocessors;
@ -446,7 +450,7 @@ int main (int argc, char ** argv)
}
#ifdef GRID_OMP
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking threaded STENCIL halo exchange in "<<nmu<<" dimensions"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
@ -485,7 +489,8 @@ int main (int argc, char ** argv)
dbytes=0;
ncomm=0;
parallel_for(int dir=0;dir<8;dir++){
#pragma omp parallel for num_threads(Grid::CartesianCommunicator::nCommThreads)
for(int dir=0;dir<8;dir++){
double tbytes;
int mu =dir % 4;
@ -502,9 +507,9 @@ int main (int argc, char ** argv)
int comm_proc = mpi_layout[mu]-1;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
}
int tid = omp_get_thread_num();
tbytes= Grid.StencilSendToRecvFrom((void *)&xbuf[dir][0], xmit_to_rank,
(void *)&rbuf[dir][0], recv_from_rank, bytes,dir);
(void *)&rbuf[dir][0], recv_from_rank, bytes,tid);
#pragma omp atomic
dbytes+=tbytes;
@ -532,7 +537,7 @@ int main (int argc, char ** argv)
}
}
#endif
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= All done; Bye Bye"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;

View File

@ -48,7 +48,6 @@ int main (int argc, char ** argv)
int threads = GridThread::GetThreads();
std::cout<<GridLogMessage << "Grid is setup to use "<<threads<<" threads"<<std::endl;
std::vector<int> latt4 = GridDefaultLatt();
int Ls=16;
@ -57,6 +56,10 @@ int main (int argc, char ** argv)
std::stringstream ss(argv[i+1]); ss >> Ls;
}
GridLogLayout();
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
GridCartesian * UGrid = SpaceTimeGrid::makeFourDimGrid(GridDefaultLatt(), GridDefaultSimd(Nd,vComplex::Nsimd()),GridDefaultMpi());
GridRedBlackCartesian * UrbGrid = SpaceTimeGrid::makeFourDimRedBlackGrid(UGrid);
@ -187,7 +190,7 @@ int main (int argc, char ** argv)
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
std::cout<<GridLogMessage << "Called Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
// std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
@ -226,7 +229,7 @@ int main (int argc, char ** argv)
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
std::cout<<GridLogMessage << "Called half prec comms Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
@ -277,7 +280,7 @@ int main (int argc, char ** argv)
double t1=usecond();
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
std::cout<<GridLogMessage << "Called Dw s_inner "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
@ -355,7 +358,7 @@ int main (int argc, char ** argv)
// sDw.stat.print();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=(1344.0*volume*ncall)/2;
double flops=(single_site_flops*volume*ncall)/2.0;
std::cout<<GridLogMessage << "sDeo mflop/s = "<< flops/(t1-t0)<<std::endl;
std::cout<<GridLogMessage << "sDeo mflop/s per rank "<< flops/(t1-t0)/NP<<std::endl;
@ -478,7 +481,7 @@ int main (int argc, char ** argv)
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=(1344.0*volume*ncall)/2;
double flops=(single_site_flops*volume*ncall)/2.0;
std::cout<<GridLogMessage << "Deo mflop/s = "<< flops/(t1-t0)<<std::endl;
std::cout<<GridLogMessage << "Deo mflop/s per rank "<< flops/(t1-t0)/NP<<std::endl;
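
For reference, evaluating the per-site flop expression introduced above at N_c = 3 shows why the hard-coded 1344 in these benchmarks is corrected to 1320 (directly, or via single_site_flops):

8 N_c (7 + 16 N_c) \big|_{N_c = 3} = 24 \times (7 + 48) = 24 \times 55 = 1320
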

View File

@ -51,6 +51,7 @@ int main (int argc, char ** argv)
{
Grid_init(&argc,&argv);
std::cout << GridLogMessage<< "*****************************************************************" <<std::endl;
std::cout << GridLogMessage<< "* Kernel options --dslash-generic, --dslash-unroll, --dslash-asm" <<std::endl;
std::cout << GridLogMessage<< "*****************************************************************" <<std::endl;
@ -107,6 +108,7 @@ void benchDw(std::vector<int> & latt4, int Ls, int threads,int report )
GridRedBlackCartesian * UrbGrid = SpaceTimeGrid::makeFourDimRedBlackGrid(UGrid);
GridCartesian * FGrid = SpaceTimeGrid::makeFiveDimGrid(Ls,UGrid);
GridRedBlackCartesian * FrbGrid = SpaceTimeGrid::makeFiveDimRedBlackGrid(Ls,UGrid);
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
std::vector<int> seeds4({1,2,3,4});
std::vector<int> seeds5({5,6,7,8});
@ -196,7 +198,7 @@ void benchDw(std::vector<int> & latt4, int Ls, int threads,int report )
if ( ! report ) {
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
std::cout <<"\t"<<NP<< "\t"<<flops/(t1-t0)<< "\t";
}
@ -228,7 +230,7 @@ void benchDw(std::vector<int> & latt4, int Ls, int threads,int report )
if(!report){
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=(1344.0*volume*ncall)/2;
double flops=(single_site_flops*volume*ncall)/2.0;
std::cout<< flops/(t1-t0);
}
}
@ -237,6 +239,7 @@ void benchDw(std::vector<int> & latt4, int Ls, int threads,int report )
#define CHECK_SDW
void benchsDw(std::vector<int> & latt4, int Ls, int threads, int report )
{
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
GridCartesian * UGrid = SpaceTimeGrid::makeFourDimGrid(latt4, GridDefaultSimd(Nd,vComplex::Nsimd()),GridDefaultMpi());
GridRedBlackCartesian * UrbGrid = SpaceTimeGrid::makeFourDimRedBlackGrid(UGrid);
@ -321,7 +324,7 @@ void benchsDw(std::vector<int> & latt4, int Ls, int threads, int report )
Counter.Report();
} else {
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
std::cout<<"\t"<< flops/(t1-t0);
}
@ -358,7 +361,7 @@ void benchsDw(std::vector<int> & latt4, int Ls, int threads, int report )
CounterSdw.Report();
} else {
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=(1344.0*volume*ncall)/2;
double flops=(single_site_flops*volume*ncall)/2.0;
std::cout<<"\t"<< flops/(t1-t0);
}
}

View File

@ -107,7 +107,7 @@ int main (int argc, char ** argv)
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=2*1344*volume*ncall;
double flops=2*1320*volume*ncall;
std::cout<<GridLogMessage << "Called Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
// std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
@@ -134,7 +134,7 @@ int main (int argc, char ** argv)
FGrid->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=2*1344*volume*ncall;
double flops=2*1320*volume*ncall;
std::cout<<GridLogMessage << "Called half prec comms Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
@@ -174,7 +174,7 @@ int main (int argc, char ** argv)
FGrid_d->Barrier();
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=2*1344*volume*ncall;
double flops=2*1320*volume*ncall;
std::cout<<GridLogMessage << "Called Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
// std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;

View File

@@ -55,7 +55,7 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s"<<"\t\t"<<"Gflop/s"<<"\t\t seconds"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
uint64_t lmax=96;
uint64_t lmax=64;
#define NLOOP (10*lmax*lmax*lmax*lmax/vol)
for(int lat=8;lat<=lmax;lat+=8){

View File

@@ -0,0 +1,803 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: ./benchmarks/Benchmark_wilson.cc
Copyright (C) 2018
Author: Peter Boyle <paboyle@ph.ed.ac.uk>
Author: paboyle <paboyle@ph.ed.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Grid.h>
using namespace std;
using namespace Grid;
using namespace Grid::QCD;
#include "Grid/util/Profiling.h"
template<class vobj>
void sliceInnerProductMesonField(std::vector< std::vector<ComplexD> > &mat,
const std::vector<Lattice<vobj> > &lhs,
const std::vector<Lattice<vobj> > &rhs,
int orthogdim)
{
typedef typename vobj::scalar_object sobj;
typedef typename vobj::scalar_type scalar_type;
typedef typename vobj::vector_type vector_type;
int Lblock = lhs.size();
int Rblock = rhs.size();
GridBase *grid = lhs[0]._grid;
const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd();
int Nt = grid->GlobalDimensions()[orthogdim];
assert(mat.size()==Lblock*Rblock);
for(int t=0;t<mat.size();t++){
assert(mat[t].size()==Nt);
}
int fd=grid->_fdimensions[orthogdim];
int ld=grid->_ldimensions[orthogdim];
int rd=grid->_rdimensions[orthogdim];
// will locally sum vectors first
// sum across these down to scalars
// splitting the SIMD
std::vector<vector_type,alignedAllocator<vector_type> > lvSum(rd*Lblock*Rblock);
parallel_for (int r = 0; r < rd * Lblock * Rblock; r++){
lvSum[r] = zero;
}
std::vector<scalar_type > lsSum(ld*Lblock*Rblock,scalar_type(0.0));
int e1= grid->_slice_nblock[orthogdim];
int e2= grid->_slice_block [orthogdim];
int stride=grid->_slice_stride[orthogdim];
std::cout << GridLogMessage << " Entering first parallel loop "<<std::endl;
// Parallelising over the t-direction doesn't expose as much parallelism as needed for KNL
parallel_for(int r=0;r<rd;r++){
int so=r*grid->_ostride[orthogdim]; // base offset for start of plane
for(int n=0;n<e1;n++){
for(int b=0;b<e2;b++){
int ss= so+n*stride+b;
for(int i=0;i<Lblock;i++){
auto left = conjugate(lhs[i]._odata[ss]);
for(int j=0;j<Rblock;j++){
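// lvSum layout: (i,j,r) flattened with the left index fastest; one partial
// vector sum per (left mode, right mode, reduced t-slice) triple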
int idx = i+Lblock*j+Lblock*Rblock*r;
auto right = rhs[j]._odata[ss];
vector_type vv = left()(0)(0) * right()(0)(0)
+ left()(0)(1) * right()(0)(1)
+ left()(0)(2) * right()(0)(2)
+ left()(1)(0) * right()(1)(0)
+ left()(1)(1) * right()(1)(1)
+ left()(1)(2) * right()(1)(2)
+ left()(2)(0) * right()(2)(0)
+ left()(2)(1) * right()(2)(1)
+ left()(2)(2) * right()(2)(2)
+ left()(3)(0) * right()(3)(0)
+ left()(3)(1) * right()(3)(1)
+ left()(3)(2) * right()(3)(2);
lvSum[idx]=lvSum[idx]+vv;
}
}
}
}
}
std::cout << GridLogMessage << " Entering second parallel loop "<<std::endl;
// Sum across simd lanes in the plane, breaking out orthog dir.
parallel_for(int rt=0;rt<rd;rt++){
std::vector<int> icoor(Nd);
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
iScalar<vector_type> temp;
std::vector<iScalar<scalar_type> > extracted(Nsimd);
temp._internal = lvSum[i+Lblock*j+Lblock*Rblock*rt];
extract(temp,extracted);
for(int idx=0;idx<Nsimd;idx++){
grid->iCoorFromIindex(icoor,idx);
int ldx =rt+icoor[orthogdim]*rd;
int ij_dx = i+Lblock*j+Lblock*Rblock*ldx;
lsSum[ij_dx]=lsSum[ij_dx]+extracted[idx]._internal;
}
}}
}
std::cout << GridLogMessage << " Entering non parallel loop "<<std::endl;
for(int t=0;t<fd;t++)
{
int pt = t / ld; // processor plane
int lt = t % ld;
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
if (pt == grid->_processor_coor[orthogdim]){
int ij_dx = i + Lblock * j + Lblock * Rblock * lt;
mat[i+j*Lblock][t] = lsSum[ij_dx];
}
else{
mat[i+j*Lblock][t] = scalar_type(0.0);
}
}}
}
std::cout << GridLogMessage << " Done "<<std::endl;
// defer sum over nodes.
return;
}
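The routine deliberately leaves the sum over nodes to the caller. A minimal sketch of how that reduction could be completed, assuming Grid's `GlobalSumVector` communicator call (the flattening below is illustrative, not part of the commit):

// hypothetical completion of the deferred node reduction:
GridBase *grid = lhs[0]._grid;
std::vector<ComplexD> buf;
for (auto &row : mat) buf.insert(buf.end(), row.begin(), row.end());
grid->GlobalSumVector(buf.data(), (int)buf.size());  // one global reduction
// scatter buf back into mat if the flattened copy is not convenient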
template<class vobj>
void sliceInnerProductMesonFieldGamma(std::vector< std::vector<ComplexD> > &mat,
const std::vector<Lattice<vobj> > &lhs,
const std::vector<Lattice<vobj> > &rhs,
int orthogdim,
std::vector<Gamma::Algebra> gammas)
{
typedef typename vobj::scalar_object sobj;
typedef typename vobj::scalar_type scalar_type;
typedef typename vobj::vector_type vector_type;
int Lblock = lhs.size();
int Rblock = rhs.size();
GridBase *grid = lhs[0]._grid;
const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd();
int Nt = grid->GlobalDimensions()[orthogdim];
int Ngamma = gammas.size();
assert(mat.size()==Lblock*Rblock*Ngamma);
for(int t=0;t<mat.size();t++){
assert(mat[t].size()==Nt);
}
int fd=grid->_fdimensions[orthogdim];
int ld=grid->_ldimensions[orthogdim];
int rd=grid->_rdimensions[orthogdim];
// will locally sum vectors first
// sum across these down to scalars
// splitting the SIMD
int MFrvol = rd*Lblock*Rblock*Ngamma;
int MFlvol = ld*Lblock*Rblock*Ngamma;
std::vector<vector_type,alignedAllocator<vector_type> > lvSum(MFrvol);
parallel_for (int r = 0; r < MFrvol; r++){
lvSum[r] = zero;
}
std::vector<scalar_type > lsSum(MFlvol);
parallel_for (int r = 0; r < MFlvol; r++){
lsSum[r]=scalar_type(0.0);
}
int e1= grid->_slice_nblock[orthogdim];
int e2= grid->_slice_block [orthogdim];
int stride=grid->_slice_stride[orthogdim];
std::cout << GridLogMessage << " Entering first parallel loop "<<std::endl;
// Parallelising over the t-direction doesn't expose as much parallelism as needed for KNL
parallel_for(int r=0;r<rd;r++){
int so=r*grid->_ostride[orthogdim]; // base offset for start of plane
for(int n=0;n<e1;n++){
for(int b=0;b<e2;b++){
int ss= so+n*stride+b;
for(int i=0;i<Lblock;i++){
auto left = conjugate(lhs[i]._odata[ss]);
for(int j=0;j<Rblock;j++){
for(int mu=0;mu<Ngamma;mu++){
auto right = Gamma(gammas[mu])*rhs[j]._odata[ss];
vector_type vv = left()(0)(0) * right()(0)(0)
+ left()(0)(1) * right()(0)(1)
+ left()(0)(2) * right()(0)(2)
+ left()(1)(0) * right()(1)(0)
+ left()(1)(1) * right()(1)(1)
+ left()(1)(2) * right()(1)(2)
+ left()(2)(0) * right()(2)(0)
+ left()(2)(1) * right()(2)(1)
+ left()(2)(2) * right()(2)(2)
+ left()(3)(0) * right()(3)(0)
+ left()(3)(1) * right()(3)(1)
+ left()(3)(2) * right()(3)(2);
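// layout: gamma index fastest, then left mode, then right mode, then reduced slice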
int idx = mu+i*Ngamma+Lblock*Ngamma*j+Ngamma*Lblock*Rblock*r;
lvSum[idx]=lvSum[idx]+vv;
}
}
}
}
}
}
std::cout << GridLogMessage << " Entering second parallel loop "<<std::endl;
// Sum across simd lanes in the plane, breaking out orthog dir.
parallel_for(int rt=0;rt<rd;rt++){
iScalar<vector_type> temp;
std::vector<int> icoor(Nd);
std::vector<iScalar<scalar_type> > extracted(Nsimd);
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
for(int mu=0;mu<Ngamma;mu++){
int ij_rdx = mu+i*Ngamma+Ngamma*Lblock*j+Ngamma*Lblock*Rblock*rt;
temp._internal = lvSum[ij_rdx];
extract(temp,extracted);
for(int idx=0;idx<Nsimd;idx++){
grid->iCoorFromIindex(icoor,idx);
int ldx =rt+icoor[orthogdim]*rd;
int ij_ldx = mu+i*Ngamma+Ngamma*Lblock*j+Ngamma*Lblock*Rblock*ldx;
lsSum[ij_ldx]=lsSum[ij_ldx]+extracted[idx]._internal;
}
}}}
}
std::cout << GridLogMessage << " Entering non parallel loop "<<std::endl;
for(int t=0;t<fd;t++)
{
int pt = t / ld; // processor plane
int lt = t % ld;
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
for(int mu=0;mu<Ngamma;mu++){
if (pt == grid->_processor_coor[orthogdim]){
int ij_dx = mu+i*Ngamma+Ngamma*Lblock*j+Ngamma*Lblock*Rblock* lt;
mat[mu+i*Ngamma+j*Lblock*Ngamma][t] = lsSum[ij_dx];
}
else{
mat[mu+i*Ngamma+j*Lblock*Ngamma][t] = scalar_type(0.0);
}
}}}
}
std::cout << GridLogMessage << " Done "<<std::endl;
// defer sum over nodes.
return;
}
template<class vobj>
void sliceInnerProductMesonFieldGamma1(std::vector< std::vector<ComplexD> > &mat,
const std::vector<Lattice<vobj> > &lhs,
const std::vector<Lattice<vobj> > &rhs,
int orthogdim,
std::vector<Gamma::Algebra> gammas)
{
typedef typename vobj::scalar_object sobj;
typedef typename vobj::scalar_type scalar_type;
typedef typename vobj::vector_type vector_type;
typedef iSpinMatrix<vector_type> SpinMatrix_v;
typedef iSpinMatrix<scalar_type> SpinMatrix_s;
int Lblock = lhs.size();
int Rblock = rhs.size();
GridBase *grid = lhs[0]._grid;
const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd();
int Nt = grid->GlobalDimensions()[orthogdim];
int Ngamma = gammas.size();
assert(mat.size()==Lblock*Rblock*Ngamma);
for(int t=0;t<mat.size();t++){
assert(mat[t].size()==Nt);
}
int fd=grid->_fdimensions[orthogdim];
int ld=grid->_ldimensions[orthogdim];
int rd=grid->_rdimensions[orthogdim];
// will locally sum vectors first
// sum across these down to scalars
// splitting the SIMD
int MFrvol = rd*Lblock*Rblock;
int MFlvol = ld*Lblock*Rblock;
Vector<SpinMatrix_v > lvSum(MFrvol);
parallel_for (int r = 0; r < MFrvol; r++){
lvSum[r] = zero;
}
Vector<SpinMatrix_s > lsSum(MFlvol);
parallel_for (int r = 0; r < MFlvol; r++){
lsSum[r]=scalar_type(0.0);
}
int e1= grid->_slice_nblock[orthogdim];
int e2= grid->_slice_block [orthogdim];
int stride=grid->_slice_stride[orthogdim];
std::cout << GridLogMessage << " Entering first parallel loop "<<std::endl;
// Parallelising over the t-direction doesn't expose as much parallelism as needed for KNL
parallel_for(int r=0;r<rd;r++){
int so=r*grid->_ostride[orthogdim]; // base offset for start of plane
for(int n=0;n<e1;n++){
for(int b=0;b<e2;b++){
int ss= so+n*stride+b;
for(int i=0;i<Lblock;i++){
auto left = conjugate(lhs[i]._odata[ss]);
for(int j=0;j<Rblock;j++){
SpinMatrix_v vv;
auto right = rhs[j]._odata[ss];
for(int s1=0;s1<Ns;s1++){
for(int s2=0;s2<Ns;s2++){
vv()(s2,s1)() = left()(s1)(0) * right()(s2)(0)
+ left()(s1)(1) * right()(s2)(1)
+ left()(s1)(2) * right()(s2)(2);
}}
int idx = i+Lblock*j+Lblock*Rblock*r;
lvSum[idx]=lvSum[idx]+vv;
}
}
}
}
}
std::cout << GridLogMessage << " Entering second parallel loop "<<std::endl;
// Sum across simd lanes in the plane, breaking out orthog dir.
parallel_for(int rt=0;rt<rd;rt++){
std::vector<int> icoor(Nd);
std::vector<SpinMatrix_s> extracted(Nsimd);
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
int ij_rdx = i+Lblock*j+Lblock*Rblock*rt;
extract(lvSum[ij_rdx],extracted);
for(int idx=0;idx<Nsimd;idx++){
grid->iCoorFromIindex(icoor,idx);
int ldx = rt+icoor[orthogdim]*rd;
int ij_ldx = i+Lblock*j+Lblock*Rblock*ldx;
lsSum[ij_ldx]=lsSum[ij_ldx]+extracted[idx];
}
}}
}
std::cout << GridLogMessage << " Entering third parallel loop "<<std::endl;
parallel_for(int t=0;t<fd;t++)
{
int pt = t / ld; // processor plane
int lt = t % ld;
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
if (pt == grid->_processor_coor[orthogdim]){
int ij_dx = i + Lblock * j + Lblock * Rblock * lt;
for(int mu=0;mu<Ngamma;mu++){
mat[mu+i*Ngamma+j*Lblock*Ngamma][t] = trace(lsSum[ij_dx]*Gamma(gammas[mu]));
}
}
else{
for(int mu=0;mu<Ngamma;mu++){
mat[mu+i*Ngamma+j*Lblock*Ngamma][t] = scalar_type(0.0);
}
}
}}
}
std::cout << GridLogMessage << " Done "<<std::endl;
// defer sum over nodes.
return;
}
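The saving in `sliceInnerProductMesonFieldGamma1` comes from factoring the gamma insertion out of the volume sum: the colour contraction is done once per site and mode pair rather than once per gamma, and each of the 16 insertions reduces to a cheap spin trace. With the conventions of the code above,

\[
M^{ij}_\Gamma(t) = \sum_{\vec x} \mathrm{Tr}\left[ S^{ij}(\vec x,t)\,\Gamma \right],
\qquad
S^{ij}_{s_2 s_1}(\vec x,t) = \sum_{c} \left(\mathrm{lhs}^{i}_{s_1 c}(\vec x,t)\right)^{*} \mathrm{rhs}^{j}_{s_2 c}(\vec x,t),
\]

which is exactly the `vv()(s2,s1)` accumulation followed by `trace(lsSum*Gamma)`.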
template<class vobj>
void sliceInnerProductMesonFieldGammaMom(std::vector< std::vector<ComplexD> > &mat,
const std::vector<Lattice<vobj> > &lhs,
const std::vector<Lattice<vobj> > &rhs,
int orthogdim,
std::vector<Gamma::Algebra> gammas,
const std::vector<LatticeComplex > &mom)
{
typedef typename vobj::scalar_object sobj;
typedef typename vobj::scalar_type scalar_type;
typedef typename vobj::vector_type vector_type;
typedef iSpinMatrix<vector_type> SpinMatrix_v;
typedef iSpinMatrix<scalar_type> SpinMatrix_s;
int Lblock = lhs.size();
int Rblock = rhs.size();
GridBase *grid = lhs[0]._grid;
const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd();
int Nt = grid->GlobalDimensions()[orthogdim];
int Ngamma = gammas.size();
int Nmom = mom.size();
assert(mat.size()==Lblock*Rblock*Ngamma*Nmom);
for(int t=0;t<mat.size();t++){
assert(mat[t].size()==Nt);
}
int fd=grid->_fdimensions[orthogdim];
int ld=grid->_ldimensions[orthogdim];
int rd=grid->_rdimensions[orthogdim];
// will locally sum vectors first
// sum across these down to scalars
// splitting the SIMD
int MFrvol = rd*Lblock*Rblock*Nmom;
int MFlvol = ld*Lblock*Rblock*Nmom;
Vector<SpinMatrix_v > lvSum(MFrvol);
parallel_for (int r = 0; r < MFrvol; r++){
lvSum[r] = zero;
}
Vector<SpinMatrix_s > lsSum(MFlvol);
parallel_for (int r = 0; r < MFlvol; r++){
lsSum[r]=scalar_type(0.0);
}
int e1= grid->_slice_nblock[orthogdim];
int e2= grid->_slice_block [orthogdim];
int stride=grid->_slice_stride[orthogdim];
std::cout << GridLogMessage << " Entering first parallel loop "<<std::endl;
// Parallelising over the t-direction doesn't expose as much parallelism as needed for KNL
parallel_for(int r=0;r<rd;r++){
int so=r*grid->_ostride[orthogdim]; // base offset for start of plane
for(int n=0;n<e1;n++){
for(int b=0;b<e2;b++){
int ss= so+n*stride+b;
for(int i=0;i<Lblock;i++){
auto left = conjugate(lhs[i]._odata[ss]);
for(int j=0;j<Rblock;j++){
SpinMatrix_v vv;
auto right = rhs[j]._odata[ss];
for(int s1=0;s1<Ns;s1++){
for(int s2=0;s2<Ns;s2++){
vv()(s1,s2)() = left()(s1)(0) * right()(s2)(0)
+ left()(s1)(1) * right()(s2)(1)
+ left()(s1)(2) * right()(s2)(2);
}}
// After getting the sitewise product do the mom phase loop
int base = Nmom*i+Nmom*Lblock*j+Nmom*Lblock*Rblock*r;
// Trigger unroll
for ( int m=0;m<Nmom;m++){
int idx = m+base;
auto phase = mom[m]._odata[ss];
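// multiply-accumulate: lvSum[idx] += vv * phase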
mac(&lvSum[idx],&vv,&phase);
}
}
}
}
}
}
std::cout << GridLogMessage << " Entering second parallel loop "<<std::endl;
// Sum across simd lanes in the plane, breaking out orthog dir.
parallel_for(int rt=0;rt<rd;rt++){
std::vector<int> icoor(Nd);
std::vector<SpinMatrix_s> extracted(Nsimd);
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
for(int m=0;m<Nmom;m++){
int ij_rdx = m+Nmom*i+Nmom*Lblock*j+Nmom*Lblock*Rblock*rt;
extract(lvSum[ij_rdx],extracted);
for(int idx=0;idx<Nsimd;idx++){
grid->iCoorFromIindex(icoor,idx);
int ldx = rt+icoor[orthogdim]*rd;
int ij_ldx = m+Nmom*i+Nmom*Lblock*j+Nmom*Lblock*Rblock*ldx;
lsSum[ij_ldx]=lsSum[ij_ldx]+extracted[idx];
}
}}}
}
std::cout << GridLogMessage << " Entering third parallel loop "<<std::endl;
parallel_for(int t=0;t<fd;t++)
{
int pt = t / ld; // processor plane
int lt = t % ld;
for(int i=0;i<Lblock;i++){
for(int j=0;j<Rblock;j++){
if (pt == grid->_processor_coor[orthogdim]){
for(int m=0;m<Nmom;m++){
int ij_dx = m+Nmom*i + Nmom*Lblock * j + Nmom*Lblock * Rblock * lt;
for(int mu=0;mu<Ngamma;mu++){
mat[ mu
+m*Ngamma
+i*Nmom*Ngamma
+j*Nmom*Ngamma*Lblock][t] = trace(lsSum[ij_dx]*Gamma(gammas[mu]));
}
}
}
else{
for(int mu=0;mu<Ngamma;mu++){
for(int m=0;m<Nmom;m++){
mat[mu+m*Ngamma+i*Nmom*Ngamma+j*Nmom*Lblock*Ngamma][t] = scalar_type(0.0);
}}
}
}}
}
std::cout << GridLogMessage << " Done "<<std::endl;
// defer sum over nodes.
return;
}
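In the benchmark `main` below the momentum phases are simply set to unity. For a production meson field one would fill each `LatticeComplex` with a plane wave; a minimal sketch, assuming Grid's `LatticeCoordinate` helper and elementwise lattice `exp` (the integer momentum `p` and the index `m` are illustrative assumptions):

// hypothetical phase setup: phases[m](x) = exp( 2*pi*i * sum_mu p[mu]*x[mu]/L[mu] )
LatticeComplex coor(&Grid), ph(&Grid);
Complex ii(0.0,1.0);
ph = zero;
for(int mu=0;mu<3;mu++){                      // spatial directions only
  LatticeCoordinate(coor,mu);
  ph = ph + (Real(p[mu])/latt_size[mu])*coor;
}
phases[m] = exp((Real)(2*M_PI)*ii*ph);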
/*
template void sliceInnerProductMesonField<SpinColourVector>(std::vector< std::vector<ComplexD> > &mat,
const std::vector<Lattice<SpinColourVector> > &lhs,
const std::vector<Lattice<SpinColourVector> > &rhs,
int orthogdim) ;
*/
std::vector<Gamma::Algebra> Gmu4 ( {
Gamma::Algebra::GammaX,
Gamma::Algebra::GammaY,
Gamma::Algebra::GammaZ,
Gamma::Algebra::GammaT });
std::vector<Gamma::Algebra> Gmu16 ( {
Gamma::Algebra::Gamma5,
Gamma::Algebra::GammaT,
Gamma::Algebra::GammaTGamma5,
Gamma::Algebra::GammaX,
Gamma::Algebra::GammaXGamma5,
Gamma::Algebra::GammaY,
Gamma::Algebra::GammaYGamma5,
Gamma::Algebra::GammaZ,
Gamma::Algebra::GammaZGamma5,
Gamma::Algebra::Identity,
Gamma::Algebra::SigmaXT,
Gamma::Algebra::SigmaXY,
Gamma::Algebra::SigmaXZ,
Gamma::Algebra::SigmaYT,
Gamma::Algebra::SigmaYZ,
Gamma::Algebra::SigmaZT
});
int main (int argc, char ** argv)
{
Grid_init(&argc,&argv);
std::vector<int> latt_size = GridDefaultLatt();
std::vector<int> simd_layout = GridDefaultSimd(Nd,vComplex::Nsimd());
std::vector<int> mpi_layout = GridDefaultMpi();
GridCartesian Grid(latt_size,simd_layout,mpi_layout);
const int Nmom=7;
int nt = latt_size[Tp];
uint64_t vol = 1;
for(int d=0;d<Nd;d++){
vol = vol*latt_size[d];
}
std::vector<int> seeds({1,2,3,4});
GridParallelRNG pRNG(&Grid);
pRNG.SeedFixedIntegers(seeds);
int Nm = atoi(argv[1]); // number of all modes (high + low)
std::vector<LatticeFermion> v(Nm,&Grid);
std::vector<LatticeFermion> w(Nm,&Grid);
std::vector<LatticeFermion> gammaV(Nm,&Grid);
std::vector<LatticeComplex> phases(Nmom,&Grid);
for(int i=0;i<Nm;i++) {
random(pRNG,v[i]);
random(pRNG,w[i]);
}
for(int i=0;i<Nmom;i++) {
phases[i] = Complex(1.0);
}
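// unit phases: the Nmom "momenta" above are all zero momentum, which is
// sufficient for benchmarking the contraction kernels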
double flops = vol * (11.0 * 8.0 + 6.0) * Nm*Nm;
double byte = vol * (12.0 * sizeof(Complex) ) * Nm*Nm;
std::vector<ComplexD> ip(nt);
std::vector<std::vector<ComplexD> > MesonFields (Nm*Nm);
std::vector<std::vector<ComplexD> > MesonFields4 (Nm*Nm*4);
std::vector<std::vector<ComplexD> > MesonFields16 (Nm*Nm*16);
std::vector<std::vector<ComplexD> > MesonFields161(Nm*Nm*16);
std::vector<std::vector<ComplexD> > MesonFields16mom (Nm*Nm*16*Nmom);
std::vector<std::vector<ComplexD> > MesonFieldsRef(Nm*Nm);
for(int i=0;i<MesonFields.size();i++ ) MesonFields [i].resize(nt);
for(int i=0;i<MesonFieldsRef.size();i++) MesonFieldsRef[i].resize(nt);
for(int i=0;i<MesonFields4.size();i++ ) MesonFields4 [i].resize(nt);
for(int i=0;i<MesonFields16.size();i++ ) MesonFields16 [i].resize(nt);
for(int i=0;i<MesonFields161.size();i++ ) MesonFields161[i].resize(nt);
for(int i=0;i<MesonFields16mom.size();i++ ) MesonFields16mom [i].resize(nt);
GridLogMessage.TimingMode(1);
std::cout<<GridLogMessage << "Running loop with sliceInnerProductVector"<<std::endl;
double t0 = usecond();
for(int i=0;i<Nm;i++) {
for(int j=0;j<Nm;j++) {
sliceInnerProductVector(ip, w[i],v[j],Tp);
for(int t=0;t<nt;t++){
MesonFieldsRef[i+j*Nm][t] = ip[t];
}
}}
double t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
std::cout<<GridLogMessage << "Running loop with new code for Nt="<<nt<<std::endl;
t0 = usecond();
sliceInnerProductMesonField(MesonFields,w,v,Tp);
t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
std::cout<<GridLogMessage << "Running loop with Four gammas code for Nt="<<nt<<std::endl;
flops = vol * (11.0 * 8.0 + 6.0) * Nm*Nm*4;
byte = vol * (12.0 * sizeof(Complex) ) * Nm*Nm
+ vol * ( 2.0 * sizeof(Complex) ) * Nm*Nm* 4;
t0 = usecond();
sliceInnerProductMesonFieldGamma(MesonFields4,w,v,Tp,Gmu4);
t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
std::cout<<GridLogMessage << "Running loop with Sixteen gammas code for Nt="<<nt<<std::endl;
flops = vol * (11.0 * 8.0 + 6.0) * Nm*Nm*16;
byte = vol * (12.0 * sizeof(Complex) ) * Nm*Nm
+ vol * ( 2.0 * sizeof(Complex) ) * Nm*Nm* 16;
t0 = usecond();
sliceInnerProductMesonFieldGamma(MesonFields16,w,v,Tp,Gmu16);
t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
std::cout<<GridLogMessage << "Running loop with Sixteen gammas code1 for Nt="<<nt<<std::endl;
flops = vol * ( 2 * 8.0 + 6.0) * Nm*Nm*16;
byte = vol * (12.0 * sizeof(Complex) ) * Nm*Nm
+ vol * ( 2.0 * sizeof(Complex) ) * Nm*Nm* 16;
t0 = usecond();
sliceInnerProductMesonFieldGamma1(MesonFields161, w, v, Tp, Gmu16);
t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
std::cout<<GridLogMessage << "Running loop with Sixteen gammas "<<Nmom<<" momenta "<<std::endl;
flops = vol * ( 2 * 8.0 + 6.0 + 8.0*Nmom) * Nm*Nm*16;
byte = vol * (12.0 * sizeof(Complex) ) * Nm*Nm
+ vol * ( 2.0 * sizeof(Complex) *Nmom ) * Nm*Nm* 16;
t0 = usecond();
sliceInnerProductMesonFieldGammaMom(MesonFields16mom,w,v,Tp,Gmu16,phases);
t1 = usecond();
std::cout<<GridLogMessage << "Done "<< (t1-t0) <<" usecond " <<std::endl;
std::cout<<GridLogMessage << "Done "<< flops/(t1-t0) <<" mflops " <<std::endl;
std::cout<<GridLogMessage << "Done "<< byte /(t1-t0) <<" MB/s " <<std::endl;
RealD err = 0;
RealD err2 = 0;
ComplexD diff;
ComplexD diff2;
for(int i=0;i<Nm;i++) {
for(int j=0;j<Nm;j++) {
for(int t=0;t<nt;t++){
diff = MesonFields[i+Nm*j][t] - MesonFieldsRef[i+Nm*j][t];
err += real(diff*conj(diff));
}
}}
std::cout<<GridLogMessage << "Norm error "<< err <<std::endl;
err = err*0.;
diff = diff*0.;
for (int mu = 0; mu < 16; mu++){
for (int k = 0; k < gammaV.size(); k++){
gammaV[k] = Gamma(Gmu16[mu]) * v[k];
}
for (int i = 0; i < Nm; i++){
for (int j = 0; j < Nm; j++){
sliceInnerProductVector(ip, w[i], gammaV[j], Tp);
for (int t = 0; t < nt; t++){
MesonFields[i + j * Nm][t] = ip[t];
diff = MesonFields16[mu+i*16+Nm*16*j][t] - MesonFields161[mu+i*16+Nm*16*j][t];
diff2 = MesonFields[i+j*Nm][t] - MesonFields161[mu+i*16+Nm*16*j][t];
err += real(diff*conj(diff));
err2 += real(diff2*conj(diff2));
}
}
}
}
std::cout << GridLogMessage << "Norm error 16 gamma1/16 gamma naive " << err << std::endl;
std::cout << GridLogMessage << "Norm error 16 gamma1/sliceInnerProduct " << err2 << std::endl;
Grid_finalize();
}

View File

@@ -35,9 +35,11 @@ using namespace Grid::QCD;
int main (int argc, char ** argv)
{
Grid_init(&argc,&argv);
#define LMAX (64)
#define LMAX (32)
#define LMIN (16)
#define LINC (4)
int64_t Nloop=20;
int64_t Nloop=2000;
std::vector<int> simd_layout = GridDefaultSimd(Nd,vComplex::Nsimd());
std::vector<int> mpi_layout = GridDefaultMpi();
@@ -51,7 +53,7 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=2;lat<=LMAX;lat+=2){
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
@@ -83,7 +85,7 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=2;lat<=LMAX;lat+=2){
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
@@ -114,7 +116,7 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=2;lat<=LMAX;lat+=2){
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
@@ -145,7 +147,38 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=2;lat<=LMAX;lat+=2){
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
GridCartesian Grid(latt_size,simd_layout,mpi_layout);
GridParallelRNG pRNG(&Grid); pRNG.SeedFixedIntegers(std::vector<int>({45,12,81,9}));
LatticeColourMatrix z(&Grid); random(pRNG,z);
LatticeColourMatrix x(&Grid); random(pRNG,x);
LatticeColourMatrix y(&Grid); random(pRNG,y);
double start=usecond();
for(int64_t i=0;i<Nloop;i++){
mac(z,x,y);
}
double stop=usecond();
double time = (stop-start)/Nloop*1000.0;
double bytes=3*vol*Nc*Nc*sizeof(Complex);
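// flop convention (Nc=3 assumed): presumably one complex multiply (6 flops)
// plus two complex multiply-adds (8 flops each) per output matrix element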
double flops=Nc*Nc*(6+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" << flops/time<<std::endl;
}
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking SU3xSU3 CovShiftForward(z,x,y)"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
@@ -157,18 +190,64 @@ int main (int argc, char ** argv)
LatticeColourMatrix x(&Grid); random(pRNG,x);
LatticeColourMatrix y(&Grid); random(pRNG,y);
double start=usecond();
for(int64_t i=0;i<Nloop;i++){
mac(z,x,y);
for(int mu=0;mu<4;mu++){
double start=usecond();
for(int64_t i=0;i<Nloop;i++){
z = PeriodicBC::CovShiftForward(x,mu,y);
}
double stop=usecond();
double time = (stop-start)/Nloop*1000.0;
double bytes=3*vol*Nc*Nc*sizeof(Complex);
double flops=Nc*Nc*(6+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" << flops/time<<std::endl;
}
double stop=usecond();
double time = (stop-start)/Nloop*1000.0;
double bytes=3*vol*Nc*Nc*sizeof(Complex);
double flops=Nc*Nc*(8+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" << flops/time<<std::endl;
}
#if 1
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking SU3xSU3 z= x * Cshift(y)"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=LMIN;lat<=LMAX;lat+=LINC){
std::vector<int> latt_size ({lat*mpi_layout[0],lat*mpi_layout[1],lat*mpi_layout[2],lat*mpi_layout[3]});
int64_t vol = latt_size[0]*latt_size[1]*latt_size[2]*latt_size[3];
GridCartesian Grid(latt_size,simd_layout,mpi_layout);
GridParallelRNG pRNG(&Grid); pRNG.SeedFixedIntegers(std::vector<int>({45,12,81,9}));
LatticeColourMatrix z(&Grid); random(pRNG,z);
LatticeColourMatrix x(&Grid); random(pRNG,x);
LatticeColourMatrix y(&Grid); random(pRNG,y);
LatticeColourMatrix tmp(&Grid);
for(int mu=0;mu<4;mu++){
double tshift=0;
double tmult =0;
double start=usecond();
for(int64_t i=0;i<Nloop;i++){
tshift-=usecond();
tmp = Cshift(y,mu,-1);
tshift+=usecond();
tmult-=usecond();
z = x*tmp;
tmult+=usecond();
}
double stop=usecond();
double time = (stop-start)/Nloop;
tshift = tshift/Nloop;
tmult = tmult /Nloop;
double bytes=3*vol*Nc*Nc*sizeof(Complex);
double flops=Nc*Nc*(6+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << "total us "<<time<<" shift "<<tshift <<" mult "<<tmult<<std::endl;
time = time * 1000; // convert to NS for GB/s
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" << flops/time<<std::endl;
}
}
#endif
Grid_finalize();
}

View File

@@ -4,7 +4,7 @@
Source file: ./benchmarks/Benchmark_wilson.cc
Copyright (C) 2015
Copyright (C) 2018
Author: Peter Boyle <paboyle@ph.ed.ac.uk>
Author: paboyle <paboyle@ph.ed.ac.uk>
@@ -32,6 +32,9 @@ using namespace std;
using namespace Grid;
using namespace Grid::QCD;
#include "Grid/util/Profiling.h"
template<class d>
struct scal {
d internal;
@@ -45,6 +48,7 @@
};
bool overlapComms = false;
bool perfProfiling = false;
int main (int argc, char ** argv)
{
@@ -53,6 +57,12 @@ int main (int argc, char ** argv)
if( GridCmdOptionExists(argv,argv+argc,"--asynch") ){
overlapComms = true;
}
if( GridCmdOptionExists(argv,argv+argc,"--perf") ){
perfProfiling = true;
}
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
std::vector<int> latt_size = GridDefaultLatt();
std::vector<int> simd_layout = GridDefaultSimd(Nd,vComplex::Nsimd());
@@ -61,10 +71,15 @@ int main (int argc, char ** argv)
GridRedBlackCartesian RBGrid(&Grid);
int threads = GridThread::GetThreads();
std::cout<<GridLogMessage << "Grid is setup to use "<<threads<<" threads"<<std::endl;
GridLogLayout();
std::cout<<GridLogMessage << "Grid floating point word size is REALF"<< sizeof(RealF)<<std::endl;
std::cout<<GridLogMessage << "Grid floating point word size is REALD"<< sizeof(RealD)<<std::endl;
std::cout<<GridLogMessage << "Grid floating point word size is REAL"<< sizeof(Real)<<std::endl;
std::cout<<GridLogMessage << "Grid number of colours : "<< QCD::Nc <<std::endl;
std::cout<<GridLogMessage << "Benchmarking Wilson operator in the fundamental representation" << std::endl;
std::vector<int> seeds({1,2,3,4});
GridParallelRNG pRNG(&Grid);
@@ -134,9 +149,25 @@ int main (int argc, char ** argv)
Dw.Dhop(src,result,0);
}
double t1=usecond();
double flops=1344*volume*ncall;
double flops=single_site_flops*volume*ncall;
if (perfProfiling){
std::cout<<GridLogMessage << "Profiling Dw with perf"<<std::endl;
System::profile("kernel", [&]() {
for(int i=0;i<ncall;i++){
Dw.Dhop(src,result,0);
}
});
std::cout<<GridLogMessage << "Generated kernel.data"<<std::endl;
std::cout<<GridLogMessage << "Use with: perf report -i kernel.data"<<std::endl;
}
std::cout<<GridLogMessage << "Called Dw"<<std::endl;
std::cout<<GridLogMessage << "flops per site " << single_site_flops << std::endl;
std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
std::cout<<GridLogMessage << "norm ref "<< norm2(ref)<<std::endl;
std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;

View File

@@ -62,6 +62,7 @@ int main (int argc, char ** argv)
std::cout << GridLogMessage<< "* Kernel options --dslash-generic, --dslash-unroll, --dslash-asm" <<std::endl;
std::cout << GridLogMessage<< "*****************************************************************" <<std::endl;
std::cout << GridLogMessage<< "*****************************************************************" <<std::endl;
std::cout << GridLogMessage<< "* Number of colours "<< QCD::Nc <<std::endl;
std::cout << GridLogMessage<< "* Benchmarking WilsonFermionR::Dhop "<<std::endl;
std::cout << GridLogMessage<< "* Vectorising space-time by "<<vComplex::Nsimd()<<std::endl;
if ( sizeof(Real)==4 ) std::cout << GridLogMessage<< "* SINGLE precision "<<std::endl;
@@ -69,13 +70,15 @@ int main (int argc, char ** argv)
if ( WilsonKernelsStatic::Opt == WilsonKernelsStatic::OptGeneric ) std::cout << GridLogMessage<< "* Using GENERIC Nc WilsonKernels" <<std::endl;
if ( WilsonKernelsStatic::Opt == WilsonKernelsStatic::OptHandUnroll) std::cout << GridLogMessage<< "* Using Nc=3 WilsonKernels" <<std::endl;
if ( WilsonKernelsStatic::Opt == WilsonKernelsStatic::OptInlineAsm ) std::cout << GridLogMessage<< "* Using Asm Nc=3 WilsonKernels" <<std::endl;
std::cout << GridLogMessage << "* OpenMP threads : "<< GridThread::GetThreads() <<std::endl;
std::cout << GridLogMessage << "* MPI tasks : "<< GridCmdVectorIntToString(mpi_layout) << std::endl;
std::cout << GridLogMessage<< "*****************************************************************" <<std::endl;
std::cout<<GridLogMessage << "============================================================================="<< std::endl;
std::cout<<GridLogMessage << "= Benchmarking Wilson" << std::endl;
std::cout<<GridLogMessage << "============================================================================="<< std::endl;
std::cout<<GridLogMessage << "Volume\t\t\tWilson/MFLOPs\tWilsonDag/MFLOPs" << std::endl;
std::cout<<GridLogMessage << "============================================================================="<< std::endl;
std::cout<<GridLogMessage << "================================================================================================="<< std::endl;
std::cout<<GridLogMessage << "= Benchmarking Wilson operator in the fundamental representation" << std::endl;
std::cout<<GridLogMessage << "================================================================================================="<< std::endl;
std::cout<<GridLogMessage << "Volume\t\t\tWilson/MFLOPs\tWilsonDag/MFLOPs\tWilsonEO/MFLOPs\tWilsonDagEO/MFLOPs" << std::endl;
std::cout<<GridLogMessage << "================================================================================================="<< std::endl;
int Lmax = 32;
int dmin = 0;
@@ -97,13 +100,20 @@ int main (int argc, char ** argv)
GridParallelRNG pRNG(&Grid); pRNG.SeedFixedIntegers(seeds);
LatticeGaugeField Umu(&Grid); random(pRNG,Umu);
LatticeFermion src(&Grid); random(pRNG,src);
LatticeFermion result(&Grid); result=zero;
LatticeFermion src(&Grid); random(pRNG,src);
LatticeFermion src_o(&RBGrid); pickCheckerboard(Odd,src_o,src);
LatticeFermion result(&Grid); result=zero;
LatticeFermion result_e(&RBGrid); result_e=zero;
double volume = std::accumulate(latt_size.begin(),latt_size.end(),1,std::multiplies<int>());
WilsonFermionR Dw(Umu,Grid,RBGrid,mass,params);
// Full operator
bench_wilson(src,result,Dw,volume,DaggerNo);
bench_wilson(src,result,Dw,volume,DaggerYes);
std::cout << "\t";
// EO
bench_wilson_eo(src_o,result_e,Dw,volume,DaggerNo);
bench_wilson_eo(src_o,result_e,Dw,volume,DaggerYes);
std::cout << std::endl;
@@ -122,9 +132,26 @@ void bench_wilson (
int const dag )
{
int ncall = 1000;
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
double t0 = usecond();
for(int i=0; i<ncall; i++) { Dw.Dhop(src,result,dag); }
double t1 = usecond();
double flops = 1344 * volume * ncall;
double flops = single_site_flops * volume * ncall;
std::cout << flops/(t1-t0) << "\t\t";
}
void bench_wilson_eo (
LatticeFermion & src,
LatticeFermion & result,
WilsonFermionR & Dw,
double const volume,
int const dag )
{
int ncall = 1000;
long unsigned int single_site_flops = 8*QCD::Nc*(7+16*QCD::Nc);
double t0 = usecond();
for(int i=0; i<ncall; i++) { Dw.DhopEO(src,result,dag); }
double t1 = usecond();
double flops = (single_site_flops * volume * ncall)/2.0;
std::cout << flops/(t1-t0) << "\t\t";
}

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
EIGEN_URL='http://bitbucket.org/eigen/eigen/get/3.3.3.tar.bz2'
EIGEN_URL='http://bitbucket.org/eigen/eigen/get/3.3.5.tar.bz2'
echo "-- deploying Eigen source..."
wget ${EIGEN_URL} --no-check-certificate && ./scripts/update_eigen.sh `basename ${EIGEN_URL}` && rm `basename ${EIGEN_URL}`

View File

@@ -249,6 +249,9 @@ case ${ax_cv_cxx_compiler_vendor} in
AVX512)
AC_DEFINE([AVX512],[1],[AVX512 intrinsics])
SIMD_FLAGS='-mavx512f -mavx512pf -mavx512er -mavx512cd';;
SKL)
AC_DEFINE([AVX512],[1],[AVX512 intrinsics for SkyLake Xeon])
SIMD_FLAGS='-march=skylake-avx512';;
KNC)
AC_DEFINE([IMCI],[1],[IMCI intrinsics for Knights Corner])
SIMD_FLAGS='';;
@@ -337,7 +340,7 @@ case ${ac_PRECISION} in
esac
###################### Shared memory allocation technique under MPI3
AC_ARG_ENABLE([shm],[AC_HELP_STRING([--enable-shm=shmopen|hugetlbfs],
AC_ARG_ENABLE([shm],[AC_HELP_STRING([--enable-shm=shmopen|shmget|hugetlbfs|shmnone],
[Select SHM allocation technique])],[ac_SHM=${enable_shm}],[ac_SHM=shmopen])
case ${ac_SHM} in
@@ -346,6 +349,14 @@ case ${ac_SHM} in
AC_DEFINE([GRID_MPI3_SHMOPEN],[1],[GRID_MPI3_SHMOPEN] )
;;
shmget)
AC_DEFINE([GRID_MPI3_SHMGET],[1],[GRID_MPI3_SHMGET] )
;;
shmnone)
AC_DEFINE([GRID_MPI3_SHM_NONE],[1],[GRID_MPI3_SHM_NONE] )
;;
hugetlbfs)
AC_DEFINE([GRID_MPI3_SHMMMAP],[1],[GRID_MPI3_SHMMMAP] )
;;
@@ -359,7 +370,7 @@ esac
AC_ARG_ENABLE([shmpath],[AC_HELP_STRING([--enable-shmpath=path],
[Select SHM mmap base path for hugetlbfs])],
[ac_SHMPATH=${enable_shmpath}],
[ac_SHMPATH=/var/lib/hugetlbfs/pagesize-2MB/])
[ac_SHMPATH=/var/lib/hugetlbfs/global/pagesize-2MB/])
AC_DEFINE_UNQUOTED([GRID_SHM_PATH],["$ac_SHMPATH"],[Path to a hugetlbfs filesystem for MMAPing])
############### communication type selection
@@ -469,8 +480,8 @@ GRID_LIBS=$LIBS
GRID_SHORT_SHA=`git rev-parse --short HEAD`
GRID_SHA=`git rev-parse HEAD`
GRID_BRANCH=`git rev-parse --abbrev-ref HEAD`
AM_CXXFLAGS="-I${abs_srcdir}/include $AM_CXXFLAGS"
AM_CFLAGS="-I${abs_srcdir}/include $AM_CFLAGS"
AM_CXXFLAGS="-I${abs_srcdir}/include -I${abs_srcdir}/Eigen/ -I${abs_srcdir}/Eigen/unsupported $AM_CXXFLAGS"
AM_CFLAGS="-I${abs_srcdir}/include -I${abs_srcdir}/Eigen/ -I${abs_srcdir}/Eigen/unsupported $AM_CFLAGS"
AM_LDFLAGS="-L${cwd}/lib $AM_LDFLAGS"
AC_SUBST([AM_CFLAGS])
AC_SUBST([AM_CXXFLAGS])

View File

@@ -0,0 +1,146 @@
#ifndef A2A_Reduction_hpp_
#define A2A_Reduction_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Environment.hpp>
#include <Grid/Hadrons/Solver.hpp>
BEGIN_HADRONS_NAMESPACE
////////////////////////////////////////////
// A2A Meson Field Inner Product
////////////////////////////////////////////
template <class FermionField>
void sliceInnerProductMesonField(std::vector<std::vector<ComplexD>> &mat,
const std::vector<Lattice<FermionField>> &lhs,
const std::vector<Lattice<FermionField>> &rhs,
int orthogdim)
{
typedef typename FermionField::scalar_type scalar_type;
typedef typename FermionField::vector_type vector_type;
int Lblock = lhs.size();
int Rblock = rhs.size();
GridBase *grid = lhs[0]._grid;
const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd();
int Nt = grid->GlobalDimensions()[orthogdim];
assert(mat.size() == Lblock * Rblock);
for (int t = 0; t < mat.size(); t++)
{
assert(mat[t].size() == Nt);
}
int fd = grid->_fdimensions[orthogdim];
int ld = grid->_ldimensions[orthogdim];
int rd = grid->_rdimensions[orthogdim];
// will locally sum vectors first
// sum across these down to scalars
// splitting the SIMD
std::vector<vector_type, alignedAllocator<vector_type>> lvSum(rd * Lblock * Rblock);
for(int r=0;r<rd * Lblock * Rblock;r++)
{
lvSum[r]=zero;
}
std::vector<scalar_type> lsSum(ld * Lblock * Rblock, scalar_type(0.0));
int e1 = grid->_slice_nblock[orthogdim];
int e2 = grid->_slice_block[orthogdim];
int stride = grid->_slice_stride[orthogdim];
// std::cout << GridLogMessage << " Entering first parallel loop " << std::endl;
// Parallelising over the t-direction doesn't expose as much parallelism as needed for KNL
parallel_for(int r = 0; r < rd; r++)
{
int so = r * grid->_ostride[orthogdim]; // base offset for start of plane
for (int n = 0; n < e1; n++)
{
for (int b = 0; b < e2; b++)
{
int ss = so + n * stride + b;
for (int i = 0; i < Lblock; i++)
{
auto left = conjugate(lhs[i]._odata[ss]);
for (int j = 0; j < Rblock; j++)
{
int idx = i + Lblock * j + Lblock * Rblock * r;
auto right = rhs[j]._odata[ss];
vector_type vv = left()(0)(0) * right()(0)(0)
+ left()(0)(1) * right()(0)(1)
+ left()(0)(2) * right()(0)(2)
+ left()(1)(0) * right()(1)(0)
+ left()(1)(1) * right()(1)(1)
+ left()(1)(2) * right()(1)(2)
+ left()(2)(0) * right()(2)(0)
+ left()(2)(1) * right()(2)(1)
+ left()(2)(2) * right()(2)(2)
+ left()(3)(0) * right()(3)(0)
+ left()(3)(1) * right()(3)(1)
+ left()(3)(2) * right()(3)(2);
lvSum[idx] = lvSum[idx] + vv;
}
}
}
}
}
// std::cout << GridLogMessage << " Entering second parallel loop " << std::endl;
// Sum across simd lanes in the plane, breaking out orthog dir.
parallel_for(int rt = 0; rt < rd; rt++)
{
std::vector<int> icoor(Nd);
for (int i = 0; i < Lblock; i++)
{
for (int j = 0; j < Rblock; j++)
{
iScalar<vector_type> temp;
std::vector<iScalar<scalar_type>> extracted(Nsimd);
temp._internal = lvSum[i + Lblock * j + Lblock * Rblock * rt];
extract(temp, extracted);
for (int idx = 0; idx < Nsimd; idx++)
{
grid->iCoorFromIindex(icoor, idx);
int ldx = rt + icoor[orthogdim] * rd;
int ij_dx = i + Lblock * j + Lblock * Rblock * ldx;
lsSum[ij_dx] = lsSum[ij_dx] + extracted[idx]._internal;
}
}
}
}
// std::cout << GridLogMessage << " Entering non parallel loop " << std::endl;
for (int t = 0; t < fd; t++)
{
int pt = t/ld; // processor plane
int lt = t%ld;
for (int i = 0; i < Lblock; i++)
{
for (int j = 0; j < Rblock; j++)
{
if (pt == grid->_processor_coor[orthogdim])
{
int ij_dx = i + Lblock * j + Lblock * Rblock * lt;
mat[i + j * Lblock][t] = lsSum[ij_dx];
}
else
{
mat[i + j * Lblock][t] = scalar_type(0.0);
}
}
}
}
// std::cout << GridLogMessage << " Done " << std::endl;
// defer sum over nodes.
return;
}
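A hypothetical call site for the reduction above (the names `w`, `v`, `N` and `nt` are assumptions for illustration; `Tp` is Grid's time-direction index):

std::vector<std::vector<ComplexD>> mf(N*N, std::vector<ComplexD>(nt));
sliceInnerProductMesonField(mf, w, v, Tp);   // fills mf[i + j*N][t]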
END_HADRONS_NAMESPACE
#endif // A2A_Reduction_hpp_

View File

@@ -0,0 +1,210 @@
#ifndef A2A_Vectors_hpp_
#define A2A_Vectors_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Environment.hpp>
#include <Grid/Hadrons/Solver.hpp>
BEGIN_HADRONS_NAMESPACE
////////////////////////////////
// A2A Modes
////////////////////////////////
template <class Field, class Matrix, class Solver>
class A2AModesSchurDiagTwo
{
private:
const std::vector<Field> *evec;
const std::vector<RealD> *eval;
Matrix &action;
Solver &solver;
std::vector<Field> w_high_5d, v_high_5d, w_high_4d, v_high_4d;
const int Nl, Nh;
const bool return_5d;
public:
int getNl (void ) {return Nl;}
int getNh (void ) {return Nh;}
int getN (void ) {return Nh+Nl;}
A2AModesSchurDiagTwo(const std::vector<Field> *_evec, const std::vector<RealD> *_eval,
Matrix &_action,
Solver &_solver,
std::vector<Field> _w_high_5d, std::vector<Field> _v_high_5d,
std::vector<Field> _w_high_4d, std::vector<Field> _v_high_4d,
const int _Nl, const int _Nh,
const bool _return_5d)
: evec(_evec), eval(_eval),
action(_action),
solver(_solver),
w_high_5d(_w_high_5d), v_high_5d(_v_high_5d),
w_high_4d(_w_high_4d), v_high_4d(_v_high_4d),
Nl(_Nl), Nh(_Nh),
return_5d(_return_5d){};
void high_modes(Field &source_5d, Field &w_source_5d, Field &source_4d, int i)
{
int i5d;
LOG(Message) << "A2A high modes for i = " << i << std::endl;
i5d = 0;
if (return_5d) i5d = i;
this->high_mode_v(action, solver, source_5d, v_high_5d[i5d], v_high_4d[i]);
this->high_mode_w(w_source_5d, source_4d, w_high_5d[i5d], w_high_4d[i]);
}
void return_v(int i, Field &vout_5d, Field &vout_4d)
{
if (i < Nl)
{
this->low_mode_v(action, evec->at(i), eval->at(i), vout_5d, vout_4d);
}
else
{
vout_4d = v_high_4d[i - Nl];
if (!(return_5d)) i = Nl;
vout_5d = v_high_5d[i - Nl];
}
}
void return_w(int i, Field &wout_5d, Field &wout_4d)
{
if (i < Nl)
{
this->low_mode_w(action, evec->at(i), eval->at(i), wout_5d, wout_4d);
}
else
{
wout_4d = w_high_4d[i - Nl];
if (!(return_5d)) i = Nl;
wout_5d = w_high_5d[i - Nl];
}
}
void low_mode_v(Matrix &action, const Field &evec, const RealD &eval, Field &vout_5d, Field &vout_4d)
{
GridBase *grid = action.RedBlackGrid();
Field src_o(grid);
Field sol_e(grid);
Field sol_o(grid);
Field tmp(grid);
src_o = evec;
src_o.checkerboard = Odd;
pickCheckerboard(Even, sol_e, vout_5d);
pickCheckerboard(Odd, sol_o, vout_5d);
/////////////////////////////////////////////////////
// v_ie = -(1/eval_i) * MeeInv Meo MooInv evec_i
/////////////////////////////////////////////////////
action.MooeeInv(src_o, tmp);
assert(tmp.checkerboard == Odd);
action.Meooe(tmp, sol_e);
assert(sol_e.checkerboard == Even);
action.MooeeInv(sol_e, tmp);
assert(tmp.checkerboard == Even);
sol_e = (-1.0 / eval) * tmp;
assert(sol_e.checkerboard == Even);
/////////////////////////////////////////////////////
// v_io = (1/eval_i) * MooInv evec_i
/////////////////////////////////////////////////////
action.MooeeInv(src_o, tmp);
assert(tmp.checkerboard == Odd);
sol_o = (1.0 / eval) * tmp;
assert(sol_o.checkerboard == Odd);
setCheckerboard(vout_5d, sol_e);
assert(sol_e.checkerboard == Even);
setCheckerboard(vout_5d, sol_o);
assert(sol_o.checkerboard == Odd);
action.ExportPhysicalFermionSolution(vout_5d, vout_4d);
}
void low_mode_w(Matrix &action, const Field &evec, const RealD &eval, Field &wout_5d, Field &wout_4d)
{
GridBase *grid = action.RedBlackGrid();
SchurDiagTwoOperator<Matrix, Field> _HermOpEO(action);
Field src_o(grid);
Field sol_e(grid);
Field sol_o(grid);
Field tmp(grid);
GridBase *fgrid = action.Grid();
Field tmp_wout(fgrid);
src_o = evec;
src_o.checkerboard = Odd;
pickCheckerboard(Even, sol_e, tmp_wout);
pickCheckerboard(Odd, sol_o, tmp_wout);
/////////////////////////////////////////////////////
// w_ie = - MeeInvDag MoeDag Doo evec_i
/////////////////////////////////////////////////////
_HermOpEO.Mpc(src_o, tmp);
assert(tmp.checkerboard == Odd);
action.MeooeDag(tmp, sol_e);
assert(sol_e.checkerboard == Even);
action.MooeeInvDag(sol_e, tmp);
assert(tmp.checkerboard == Even);
sol_e = (-1.0) * tmp;
/////////////////////////////////////////////////////
// w_io = Doo evec_i
/////////////////////////////////////////////////////
_HermOpEO.Mpc(src_o, sol_o);
assert(sol_o.checkerboard == Odd);
setCheckerboard(tmp_wout, sol_e);
assert(sol_e.checkerboard == Even);
setCheckerboard(tmp_wout, sol_o);
assert(sol_o.checkerboard == Odd);
action.DminusDag(tmp_wout, wout_5d);
action.ExportPhysicalFermionSource(wout_5d, wout_4d);
}
void high_mode_v(Matrix &action, Solver &solver, const Field &source, Field &vout_5d, Field &vout_4d)
{
GridBase *fgrid = action.Grid();
solver(vout_5d, source); // Note: solver is solver(out, in)
action.ExportPhysicalFermionSolution(vout_5d, vout_4d);
}
void high_mode_w(const Field &w_source_5d, const Field &source_4d, Field &wout_5d, Field &wout_4d)
{
wout_5d = w_source_5d;
wout_4d = source_4d;
}
};
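In equation form, with \(\phi_i\) the odd-checkerboard eigenvector and \(\lambda_i\) its eigenvalue, the reconstruction coded in `low_mode_v` above is

\[
v^{e}_i = -\frac{1}{\lambda_i}\, M_{ee}^{-1} M_{eo}\, M_{oo}^{-1}\,\phi_i,
\qquad
v^{o}_i = \frac{1}{\lambda_i}\, M_{oo}^{-1}\,\phi_i,
\]

exactly the `MooeeInv` / `Meooe` / `MooeeInv` call sequence, before the physical 4d field is exported.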
// TODO: A2A for coarse eigenvectors
// template <class FineField, class CoarseField, class Matrix, class Solver>
// class A2ALMSchurDiagTwoCoarse : public A2AModesSchurDiagTwo<FineField, Matrix, Solver>
// {
// private:
// const std::vector<FineField> &subspace;
// const std::vector<CoarseField> &evec_coarse;
// const std::vector<RealD> &eval_coarse;
// Matrix &action;
// public:
// A2ALMSchurDiagTwoCoarse(const std::vector<FineField> &_subspace, const std::vector<CoarseField> &_evec_coarse, const std::vector<RealD> &_eval_coarse, Matrix &_action)
// : subspace(_subspace), evec_coarse(_evec_coarse), eval_coarse(_eval_coarse), action(_action){};
// void operator()(int i, FineField &vout, FineField &wout)
// {
// FineField prom_evec(subspace[0]._grid);
// blockPromote(evec_coarse[i], prom_evec, subspace);
// this->low_mode_v(action, prom_evec, eval_coarse[i], vout);
// this->low_mode_w(action, prom_evec, eval_coarse[i], wout);
// }
// };
END_HADRONS_NAMESPACE
#endif // A2A_Vectors_hpp_

View File

@@ -28,6 +28,7 @@ See the full license in the file "LICENSE" in the top level distribution directo
#include <Grid/Hadrons/Application.hpp>
#include <Grid/Hadrons/GeneticScheduler.hpp>
#include <Grid/Hadrons/Modules.hpp>
using namespace Grid;
using namespace QCD;
@@ -40,15 +41,12 @@ using namespace Hadrons;
* Application implementation *
******************************************************************************/
// constructors ////////////////////////////////////////////////////////////////
#define MACOUT(macro) macro << " (" << #macro << ")"
#define MACOUTS(macro) HADRONS_STR(macro) << " (" << #macro << ")"
Application::Application(void)
{
initLogger();
LOG(Message) << "Modules available:" << std::endl;
auto list = ModuleFactory::getInstance().getBuilderList();
for (auto &m: list)
{
LOG(Message) << " " << m << std::endl;
}
auto dim = GridDefaultLatt(), mpi = GridDefaultMpi(), loc(dim);
locVol_ = 1;
for (unsigned int d = 0; d < dim.size(); ++d)
@@ -56,9 +54,22 @@ Application::Application(void)
loc[d] /= mpi[d];
locVol_ *= loc[d];
}
LOG(Message) << "Global lattice: " << dim << std::endl;
LOG(Message) << "MPI partition : " << mpi << std::endl;
LOG(Message) << "Local lattice : " << loc << std::endl;
LOG(Message) << "====== HADRONS APPLICATION STARTING ======" << std::endl;
LOG(Message) << "** Dimensions" << std::endl;
LOG(Message) << "Global lattice : " << dim << std::endl;
LOG(Message) << "MPI partition : " << mpi << std::endl;
LOG(Message) << "Local lattice : " << loc << std::endl;
LOG(Message) << std::endl;
LOG(Message) << "** Default parameters (and associated C macro)" << std::endl;
LOG(Message) << "ASCII output precision : " << MACOUT(DEFAULT_ASCII_PREC) << std::endl;
LOG(Message) << "Fermion implementation : " << MACOUTS(FIMPL) << std::endl;
LOG(Message) << "z-Fermion implementation: " << MACOUTS(ZFIMPL) << std::endl;
LOG(Message) << "Scalar implementation : " << MACOUTS(SIMPL) << std::endl;
LOG(Message) << "Gauge implementation : " << MACOUTS(GIMPL) << std::endl;
LOG(Message) << "Eigenvector base size : "
<< MACOUT(HADRONS_DEFAULT_LANCZOS_NBASIS) << std::endl;
LOG(Message) << "Schur decomposition : " << MACOUTS(HADRONS_DEFAULT_SCHUR) << std::endl;
LOG(Message) << std::endl;
}
Application::Application(const Application::GlobalPar &par)
@@ -119,12 +130,12 @@ void Application::parseParameterFile(const std::string parameterFileName)
setPar(par);
if (!push(reader, "modules"))
{
HADRON_ERROR(Parsing, "Cannot open node 'modules' in parameter file '"
HADRONS_ERROR(Parsing, "Cannot open node 'modules' in parameter file '"
+ parameterFileName + "'");
}
if (!push(reader, "module"))
{
HADRON_ERROR(Parsing, "Cannot open node 'modules/module' in parameter file '"
HADRONS_ERROR(Parsing, "Cannot open node 'modules/module' in parameter file '"
+ parameterFileName + "'");
}
do
@@ -138,24 +149,27 @@ void Application::parseParameterFile(const std::string parameterFileName)
void Application::saveParameterFile(const std::string parameterFileName)
{
XmlWriter writer(parameterFileName);
ObjectId id;
const unsigned int nMod = vm().getNModule();
LOG(Message) << "Saving application to '" << parameterFileName << "'..." << std::endl;
write(writer, "parameters", getPar());
push(writer, "modules");
for (unsigned int i = 0; i < nMod; ++i)
if (env().getGrid()->IsBoss())
{
push(writer, "module");
id.name = vm().getModuleName(i);
id.type = vm().getModule(i)->getRegisteredName();
write(writer, "id", id);
vm().getModule(i)->saveParameters(writer, "options");
XmlWriter writer(parameterFileName);
ObjectId id;
const unsigned int nMod = vm().getNModule();
write(writer, "parameters", getPar());
push(writer, "modules");
for (unsigned int i = 0; i < nMod; ++i)
{
push(writer, "module");
id.name = vm().getModuleName(i);
id.type = vm().getModule(i)->getRegisteredName();
write(writer, "id", id);
vm().getModule(i)->saveParameters(writer, "options");
pop(writer);
}
pop(writer);
pop(writer);
}
pop(writer);
pop(writer);
}
// schedule computation ////////////////////////////////////////////////////////
@@ -170,20 +184,24 @@ void Application::schedule(void)
void Application::saveSchedule(const std::string filename)
{
TextWriter writer(filename);
std::vector<std::string> program;
if (!scheduled_)
{
HADRON_ERROR(Definition, "Computation not scheduled");
}
LOG(Message) << "Saving current schedule to '" << filename << "'..."
<< std::endl;
for (auto address: program_)
if (env().getGrid()->IsBoss())
{
program.push_back(vm().getModuleName(address));
TextWriter writer(filename);
std::vector<std::string> program;
if (!scheduled_)
{
HADRONS_ERROR(Definition, "Computation not scheduled");
}
for (auto address: program_)
{
program.push_back(vm().getModuleName(address));
}
write(writer, "schedule", program);
}
write(writer, "schedule", program);
}
void Application::loadSchedule(const std::string filename)
@@ -206,7 +224,7 @@ void Application::printSchedule(void)
{
if (!scheduled_)
{
HADRON_ERROR(Definition, "Computation not scheduled");
HADRONS_ERROR(Definition, "Computation not scheduled");
}
auto peak = vm().memoryNeeded(program_);
LOG(Message) << "Schedule (memory needed: " << sizeString(peak) << "):"

View File

@@ -31,7 +31,7 @@ See the full license in the file "LICENSE" in the top level distribution directo
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/VirtualMachine.hpp>
#include <Grid/Hadrons/Modules.hpp>
#include <Grid/Hadrons/Module.hpp>
BEGIN_HADRONS_NAMESPACE

View File

@@ -0,0 +1,323 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/EigenPack.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_EigenPack_hpp_
#define Hadrons_EigenPack_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/algorithms/iterative/Deflation.h>
#include <Grid/algorithms/iterative/LocalCoherenceLanczos.h>
BEGIN_HADRONS_NAMESPACE
// Lanczos type
#ifndef HADRONS_DEFAULT_LANCZOS_NBASIS
#define HADRONS_DEFAULT_LANCZOS_NBASIS 60
#endif
template <typename F>
class EigenPack
{
public:
typedef F Field;
struct PackRecord
{
std::string operatorXml, solverXml;
};
struct VecRecord: Serializable
{
GRID_SERIALIZABLE_CLASS_MEMBERS(VecRecord,
unsigned int, index,
double, eval);
VecRecord(void): index(0), eval(0.) {}
};
public:
std::vector<RealD> eval;
std::vector<F> evec;
PackRecord record;
public:
EigenPack(void) = default;
virtual ~EigenPack(void) = default;
EigenPack(const size_t size, GridBase *grid)
{
resize(size, grid);
}
void resize(const size_t size, GridBase *grid)
{
eval.resize(size);
evec.resize(size, grid);
}
virtual void read(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < evec.size(); ++k)
{
basicReadSingle(evec[k], eval[k], evecFilename(fileStem, k, traj), k);
}
}
else
{
basicRead(evec, eval, evecFilename(fileStem, -1, traj), evec.size());
}
}
virtual void write(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < evec.size(); ++k)
{
basicWriteSingle(evecFilename(fileStem, k, traj), evec[k], eval[k], k);
}
}
else
{
basicWrite(evecFilename(fileStem, -1, traj), evec, eval, evec.size());
}
}
protected:
std::string evecFilename(const std::string stem, const int vec, const int traj)
{
std::string t = (traj < 0) ? "" : ("." + std::to_string(traj));
if (vec == -1)
{
return stem + t + ".bin";
}
else
{
return stem + t + "/v" + std::to_string(vec) + ".bin";
}
}
template <typename T>
void basicRead(std::vector<T> &evec, std::vector<double> &eval,
const std::string filename, const unsigned int size)
{
ScidacReader binReader;
binReader.open(filename);
binReader.skipPastObjectRecord(SCIDAC_FILE_XML);
for(int k = 0; k < size; ++k)
{
VecRecord vecRecord;
LOG(Message) << "Reading eigenvector " << k << std::endl;
binReader.readScidacFieldRecord(evec[k], vecRecord);
if (vecRecord.index != k)
{
HADRONS_ERROR(Io, "Eigenvector " + std::to_string(k) + " has a"
+ " wrong index (expected " + std::to_string(vecRecord.index)
+ ") in file '" + filename + "'");
}
eval[k] = vecRecord.eval;
}
binReader.close();
}
template <typename T>
void basicReadSingle(T &evec, double &eval, const std::string filename,
const unsigned int index)
{
ScidacReader binReader;
VecRecord vecRecord;
binReader.open(filename);
binReader.skipPastObjectRecord(SCIDAC_FILE_XML);
LOG(Message) << "Reading eigenvector " << index << std::endl;
binReader.readScidacFieldRecord(evec, vecRecord);
if (vecRecord.index != index)
{
HADRONS_ERROR(Io, "Eigenvector " + std::to_string(index) + " has a"
+ " wrong index (expected " + std::to_string(vecRecord.index)
+ ") in file '" + filename + "'");
}
eval = vecRecord.eval;
binReader.close();
}
template <typename T>
void basicWrite(const std::string filename, std::vector<T> &evec,
const std::vector<double> &eval, const unsigned int size)
{
ScidacWriter binWriter(evec[0]._grid->IsBoss());
XmlWriter xmlWriter("", "eigenPackPar");
makeFileDir(filename, evec[0]._grid);
xmlWriter.pushXmlString(record.operatorXml);
xmlWriter.pushXmlString(record.solverXml);
binWriter.open(filename);
binWriter.writeLimeObject(1, 1, xmlWriter, "parameters", SCIDAC_FILE_XML);
for(int k = 0; k < size; ++k)
{
VecRecord vecRecord;
vecRecord.index = k;
vecRecord.eval = eval[k];
LOG(Message) << "Writing eigenvector " << k << std::endl;
binWriter.writeScidacFieldRecord(evec[k], vecRecord, DEFAULT_ASCII_PREC);
}
binWriter.close();
}
template <typename T>
void basicWriteSingle(const std::string filename, T &evec,
const double eval, const unsigned int index)
{
ScidacWriter binWriter(evec._grid->IsBoss());
XmlWriter xmlWriter("", "eigenPackPar");
VecRecord vecRecord;
makeFileDir(filename, evec._grid);
xmlWriter.pushXmlString(record.operatorXml);
xmlWriter.pushXmlString(record.solverXml);
binWriter.open(filename);
binWriter.writeLimeObject(1, 1, xmlWriter, "parameters", SCIDAC_FILE_XML);
vecRecord.index = index;
vecRecord.eval = eval;
LOG(Message) << "Writing eigenvector " << index << std::endl;
binWriter.writeScidacFieldRecord(evec, vecRecord, DEFAULT_ASCII_PREC);
binWriter.close();
}
};
template <typename FineF, typename CoarseF>
class CoarseEigenPack: public EigenPack<FineF>
{
public:
typedef CoarseF CoarseField;
public:
std::vector<RealD> evalCoarse;
std::vector<CoarseF> evecCoarse;
public:
CoarseEigenPack(void) = default;
virtual ~CoarseEigenPack(void) = default;
CoarseEigenPack(const size_t sizeFine, const size_t sizeCoarse,
GridBase *gridFine, GridBase *gridCoarse)
{
resize(sizeFine, sizeCoarse, gridFine, gridCoarse);
}
void resize(const size_t sizeFine, const size_t sizeCoarse,
GridBase *gridFine, GridBase *gridCoarse)
{
EigenPack<FineF>::resize(sizeFine, gridFine);
evalCoarse.resize(sizeCoarse);
evecCoarse.resize(sizeCoarse, gridCoarse);
}
void readFine(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < this->evec.size(); ++k)
{
this->basicReadSingle(this->evec[k], this->eval[k], this->evecFilename(fileStem + "_fine", k, traj), k);
}
}
else
{
this->basicRead(this->evec, this->eval, this->evecFilename(fileStem + "_fine", -1, traj), this->evec.size());
}
}
void readCoarse(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < evecCoarse.size(); ++k)
{
this->basicReadSingle(evecCoarse[k], evalCoarse[k], this->evecFilename(fileStem + "_coarse", k, traj), k);
}
}
else
{
this->basicRead(evecCoarse, evalCoarse, this->evecFilename(fileStem + "_coarse", -1, traj), evecCoarse.size());
}
}
virtual void read(const std::string fileStem, const bool multiFile, const int traj = -1)
{
readFine(fileStem, multiFile, traj);
readCoarse(fileStem, multiFile, traj);
}
void writeFine(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < this->evec.size(); ++k)
{
this->basicWriteSingle(this->evecFilename(fileStem + "_fine", k, traj), this->evec[k], this->eval[k], k);
}
}
else
{
this->basicWrite(this->evecFilename(fileStem + "_fine", -1, traj), this->evec, this->eval, this->evec.size());
}
}
void writeCoarse(const std::string fileStem, const bool multiFile, const int traj = -1)
{
if (multiFile)
{
for(int k = 0; k < evecCoarse.size(); ++k)
{
this->basicWriteSingle(this->evecFilename(fileStem + "_coarse", k, traj), evecCoarse[k], evalCoarse[k], k);
}
}
else
{
this->basicWrite(this->evecFilename(fileStem + "_coarse", -1, traj), evecCoarse, evalCoarse, evecCoarse.size());
}
}
virtual void write(const std::string fileStem, const bool multiFile, const int traj = -1)
{
writeFine(fileStem, multiFile, traj);
writeCoarse(fileStem, multiFile, traj);
}
};
template <typename FImpl>
using FermionEigenPack = EigenPack<typename FImpl::FermionField>;
template <typename FImpl, int nBasis>
using CoarseFermionEigenPack = CoarseEigenPack<
typename FImpl::FermionField,
typename LocalCoherenceLanczos<typename FImpl::SiteSpinor,
typename FImpl::SiteComplex,
nBasis>::CoarseField>;
END_HADRONS_NAMESPACE
#endif // Hadrons_EigenPack_hpp_
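A minimal usage sketch of the pack interface above, assuming a module context providing env() and vm(); the file stem, pack size and Ls are placeholders, not part of this diff:
// Sketch only: 100 eigenvectors on the red-black grid for some Ls.
FermionEigenPack<FIMPL> pack(100, env().getRbGrid(Ls));
pack.read("lanczos/evec", true, vm().getTrajectory());   // per-vector files lanczos/evec.<traj>/v<k>.bin
RealD lowest = pack.eval[0];                             // eigenvalues stay paired with pack.evec[k]
pack.record.operatorXml = opXml;                         // hypothetical metadata, stored in the SciDAC file XML
pack.write("lanczos/evec", false, vm().getTrajectory()); // single file lanczos/evec.<traj>.bin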

View File

@ -35,7 +35,7 @@ using namespace QCD;
using namespace Hadrons;
#define ERROR_NO_ADDRESS(address)\
HADRON_ERROR(Definition, "no object with address " + std::to_string(address));
HADRONS_ERROR(Definition, "no object with address " + std::to_string(address));
/******************************************************************************
* Environment implementation *
@ -49,11 +49,10 @@ Environment::Environment(void)
dim_, GridDefaultSimd(nd_, vComplex::Nsimd()),
GridDefaultMpi()));
gridRb4d_.reset(SpaceTimeGrid::makeFourDimRedBlackGrid(grid4d_.get()));
auto loc = getGrid()->LocalDimensions();
locVol_ = 1;
for (unsigned int d = 0; d < loc.size(); ++d)
vol_ = 1.;
for (auto d: dim_)
{
locVol_ *= loc[d];
vol_ *= d;
}
rng4d_.reset(new GridParallelRNG(grid4d_.get()));
}
@ -61,7 +60,7 @@ Environment::Environment(void)
// grids ///////////////////////////////////////////////////////////////////////
void Environment::createGrid(const unsigned int Ls)
{
if (grid5d_.find(Ls) == grid5d_.end())
if ((Ls > 1) and (grid5d_.find(Ls) == grid5d_.end()))
{
auto g = getGrid();
@ -70,6 +69,49 @@ void Environment::createGrid(const unsigned int Ls)
}
}
void Environment::createCoarseGrid(const std::vector<int> &blockSize,
const unsigned int Ls)
{
int nd = getNd();
std::vector<int> fineDim = getDim(), coarseDim;
unsigned int cLs;
auto key4d = blockSize, key5d = blockSize;
createGrid(Ls);
coarseDim.resize(nd);
for (int d = 0; d < coarseDim.size(); d++)
{
coarseDim[d] = fineDim[d]/blockSize[d];
if (coarseDim[d]*blockSize[d] != fineDim[d])
{
HADRONS_ERROR(Size, "Fine dimension " + std::to_string(d)
+ " (" + std::to_string(fineDim[d])
+ ") not divisible by coarse dimension ("
+ std::to_string(coarseDim[d]) + ")");
}
}
if (blockSize.size() > nd)
{
cLs = Ls/blockSize[nd];
if (cLs*blockSize[nd] != Ls)
{
HADRONS_ERROR(Size, "Fine Ls (" + std::to_string(Ls)
+ ") not divisible by coarse Ls ("
+ std::to_string(cLs) + ")");
}
key4d.resize(nd);
key5d.push_back(Ls);
}
gridCoarse4d_[key4d].reset(
SpaceTimeGrid::makeFourDimGrid(coarseDim,
GridDefaultSimd(nd, vComplex::Nsimd()), GridDefaultMpi()));
if (Ls > 1)
{
gridCoarse5d_[key5d].reset(
SpaceTimeGrid::makeFiveDimGrid(cLs, gridCoarse4d_[key4d].get()));
}
}
GridCartesian * Environment::getGrid(const unsigned int Ls) const
{
try
@ -85,7 +127,7 @@ GridCartesian * Environment::getGrid(const unsigned int Ls) const
}
catch(std::out_of_range &)
{
HADRON_ERROR(Definition, "no grid with Ls= " + std::to_string(Ls));
HADRONS_ERROR(Definition, "no grid with Ls= " + std::to_string(Ls));
}
}
@ -104,7 +146,31 @@ GridRedBlackCartesian * Environment::getRbGrid(const unsigned int Ls) const
}
catch(std::out_of_range &)
{
HADRON_ERROR(Definition, "no red-black 5D grid with Ls= " + std::to_string(Ls));
HADRONS_ERROR(Definition, "no red-black grid with Ls= " + std::to_string(Ls));
}
}
GridCartesian * Environment::getCoarseGrid(
const std::vector<int> &blockSize, const unsigned int Ls) const
{
auto key = blockSize;
try
{
if (Ls == 1)
{
key.resize(getNd());
return gridCoarse4d_.at(key).get();
}
else
{
key.push_back(Ls);
return gridCoarse5d_.at(key).get();
}
}
catch(std::out_of_range &)
{
HADRONS_ERROR(Definition, "no coarse grid with Ls= " + std::to_string(Ls));
}
}
@ -123,9 +189,9 @@ int Environment::getDim(const unsigned int mu) const
return dim_[mu];
}
unsigned long int Environment::getLocalVolume(void) const
double Environment::getVolume(void) const
{
return locVol_;
return vol_;
}
// random number generator /////////////////////////////////////////////////////
@ -154,7 +220,7 @@ void Environment::addObject(const std::string name, const int moduleAddress)
}
else
{
HADRON_ERROR(Definition, "object '" + name + "' already exists");
HADRONS_ERROR(Definition, "object '" + name + "' already exists");
}
}
@ -177,7 +243,7 @@ unsigned int Environment::getObjectAddress(const std::string name) const
}
else
{
HADRON_ERROR(Definition, "no object with name '" + name + "'");
HADRONS_ERROR(Definition, "no object with name '" + name + "'");
}
}
@ -270,7 +336,7 @@ int Environment::getObjectModule(const std::string name) const
unsigned int Environment::getObjectLs(const unsigned int address) const
{
if (hasObject(address))
if (hasCreatedObject(address))
{
return object_[address].Ls;
}
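To make the coarse-grid bookkeeping above concrete, a hedged example (lattice size, Ls and block sizes are placeholders):
// Sketch only: a 48^3 x 96 lattice with Ls = 24 and blockSize = {4,4,4,4,6}
// (the fifth entry blocks the s direction) gives
//   coarseDim = {12, 12, 12, 24},  cLs = 24/6 = 4,
//   key4d = {4,4,4,4},  key5d = {4,4,4,4,6,24}  (Ls appended to the key).
env().createCoarseGrid({4, 4, 4, 4, 6}, 24);
GridCartesian *gc = env().getCoarseGrid({4, 4, 4, 4, 6}, 24); // looks up key5d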

View File

@ -78,7 +78,7 @@ private:
Size size{0};
Storage storage{Storage::object};
unsigned int Ls{0};
const std::type_info *type{nullptr};
const std::type_info *type{nullptr}, *derivedType{nullptr};
std::string name;
int module{-1};
std::unique_ptr<Object> data{nullptr};
@ -86,12 +86,16 @@ private:
public:
// grids
void createGrid(const unsigned int Ls);
void createCoarseGrid(const std::vector<int> &blockSize,
const unsigned int Ls = 1);
GridCartesian * getGrid(const unsigned int Ls = 1) const;
GridRedBlackCartesian * getRbGrid(const unsigned int Ls = 1) const;
GridCartesian * getCoarseGrid(const std::vector<int> &blockSize,
const unsigned int Ls = 1) const;
std::vector<int> getDim(void) const;
int getDim(const unsigned int mu) const;
unsigned long int getLocalVolume(void) const;
unsigned int getNd(void) const;
double getVolume(void) const;
// random number generator
void setSeed(const std::vector<int> &seed);
GridParallelRNG * get4dRng(void) const;
@ -110,6 +114,10 @@ public:
Ts && ... args);
void setObjectModule(const unsigned int objAddress,
const int modAddress);
template <typename B, typename T>
T * getDerivedObject(const unsigned int address) const;
template <typename B, typename T>
T * getDerivedObject(const std::string name) const;
template <typename T>
T * getObject(const unsigned int address) const;
template <typename T>
@ -147,7 +155,7 @@ public:
void printContent(void) const;
private:
// general
unsigned long int locVol_;
double vol_;
bool protect_{true};
// grids
std::vector<int> dim_;
@ -155,6 +163,8 @@ private:
std::map<unsigned int, GridPt> grid5d_;
GridRbPt gridRb4d_;
std::map<unsigned int, GridRbPt> gridRb5d_;
std::map<std::vector<int>, GridPt> gridCoarse4d_;
std::map<std::vector<int>, GridPt> gridCoarse5d_;
unsigned int nd_;
// random number generator
RngPt rng4d_;
@ -176,7 +186,7 @@ Holder<T>::Holder(T *pt)
template <typename T>
T & Holder<T>::get(void) const
{
return &objPt_.get();
return *objPt_.get();
}
template <typename T>
@ -216,24 +226,26 @@ void Environment::createDerivedObject(const std::string name,
{
MemoryProfiler::stats = &memStats;
}
size_t initMem = MemoryProfiler::stats->currentlyAllocated;
object_[address].storage = storage;
object_[address].Ls = Ls;
size_t initMem = MemoryProfiler::stats->currentlyAllocated;
object_[address].storage = storage;
object_[address].Ls = Ls;
object_[address].data.reset(new Holder<B>(new T(std::forward<Ts>(args)...)));
object_[address].size = MemoryProfiler::stats->maxAllocated - initMem;
object_[address].type = &typeid(T);
object_[address].size = MemoryProfiler::stats->maxAllocated - initMem;
object_[address].type = &typeid(B);
object_[address].derivedType = &typeid(T);
if (MemoryProfiler::stats == &memStats)
{
MemoryProfiler::stats = nullptr;
}
}
// object already exists, no error if it is a cache, error otherwise
else if ((object_[address].storage != Storage::cache) or
(object_[address].storage != storage) or
(object_[address].name != name) or
(object_[address].type != &typeid(T)))
else if ((object_[address].storage != Storage::cache) or
(object_[address].storage != storage) or
(object_[address].name != name) or
(object_[address].type != &typeid(B)) or
(object_[address].derivedType != &typeid(T)))
{
HADRON_ERROR(Definition, "object '" + name + "' already allocated");
HADRONS_ERROR(Definition, "object '" + name + "' already allocated");
}
}
@ -246,36 +258,64 @@ void Environment::createObject(const std::string name,
createDerivedObject<T, T>(name, storage, Ls, std::forward<Ts>(args)...);
}
template <typename T>
T * Environment::getObject(const unsigned int address) const
template <typename B, typename T>
T * Environment::getDerivedObject(const unsigned int address) const
{
if (hasObject(address))
{
if (hasCreatedObject(address))
{
if (auto h = dynamic_cast<Holder<T> *>(object_[address].data.get()))
if (auto h = dynamic_cast<Holder<B> *>(object_[address].data.get()))
{
return h->getPt();
if (&typeid(T) == &typeid(B))
{
return dynamic_cast<T *>(h->getPt());
}
else
{
if (auto hder = dynamic_cast<T *>(h->getPt()))
{
return hder;
}
else
{
HADRONS_ERROR(Definition, "object with address " + std::to_string(address) +
" cannot be casted to '" + typeName(&typeid(T)) +
"' (has type '" + typeName(&typeid(h->get())) + "')");
}
}
}
else
{
HADRON_ERROR(Definition, "object with address " + std::to_string(address) +
" does not have type '" + typeName(&typeid(T)) +
HADRONS_ERROR(Definition, "object with address " + std::to_string(address) +
" does not have type '" + typeName(&typeid(B)) +
"' (has type '" + getObjectType(address) + "')");
}
}
else
{
HADRON_ERROR(Definition, "object with address " + std::to_string(address) +
HADRONS_ERROR(Definition, "object with address " + std::to_string(address) +
" is empty");
}
}
else
{
HADRON_ERROR(Definition, "no object with address " + std::to_string(address));
HADRONS_ERROR(Definition, "no object with address " + std::to_string(address));
}
}
template <typename B, typename T>
T * Environment::getDerivedObject(const std::string name) const
{
return getDerivedObject<B, T>(getObjectAddress(name));
}
template <typename T>
T * Environment::getObject(const unsigned int address) const
{
return getDerivedObject<T, T>(address);
}
template <typename T>
T * Environment::getObject(const std::string name) const
{
@ -298,7 +338,7 @@ bool Environment::isObjectOfType(const unsigned int address) const
}
else
{
HADRON_ERROR(Definition, "no object with address " + std::to_string(address));
HADRONS_ERROR(Definition, "no object with address " + std::to_string(address));
}
}
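The stored base/derived type pair is what the new envGetDerived access pattern (see the Module.hpp diff below) relies on; a hedged sketch of the round trip from inside a module's setup(), mirroring the envCreateDerived calls in the MAction diffs below:
// Sketch only: FMat is the fermion matrix base type, MobiusFermion<FIMPL> the stored type.
envCreateDerived(FMat, MobiusFermion<FIMPL>, getName(), Ls,
                 U, g5, grb5, g4, grb4, mass, M5, b, c, implParams);
FMat &base = envGet(FMat, getName());                              // access through the base type
auto &mob  = envGetDerived(FMat, MobiusFermion<FIMPL>, getName()); // recover the derived type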

View File

@ -27,6 +27,8 @@ See the full license in the file "LICENSE" in the top level distribution directo
/* END LEGAL */
#include <Grid/Hadrons/Exceptions.hpp>
#include <Grid/Hadrons/VirtualMachine.hpp>
#include <Grid/Hadrons/Module.hpp>
#ifndef ERR_SUFF
#define ERR_SUFF " (" + loc + ")"
@ -47,6 +49,7 @@ CONST_EXC(Definition, Logic("definition error: " + msg, loc))
CONST_EXC(Implementation, Logic("implementation error: " + msg, loc))
CONST_EXC(Range, Logic("range error: " + msg, loc))
CONST_EXC(Size, Logic("size error: " + msg, loc))
// runtime errors
CONST_EXC(Runtime, runtime_error(msg + ERR_SUFF))
CONST_EXC(Argument, Runtime("argument error: " + msg, loc))
@ -55,3 +58,24 @@ CONST_EXC(Memory, Runtime("memory error: " + msg, loc))
CONST_EXC(Parsing, Runtime("parsing error: " + msg, loc))
CONST_EXC(Program, Runtime("program error: " + msg, loc))
CONST_EXC(System, Runtime("system error: " + msg, loc))
// abort functions
void Grid::Hadrons::Exceptions::abort(const std::exception& e)
{
auto &vm = VirtualMachine::getInstance();
int mod = vm.getCurrentModule();
LOG(Error) << "FATAL ERROR -- Exception " << typeName(&typeid(e))
<< std::endl;
if (mod >= 0)
{
LOG(Error) << "During execution of module '"
<< vm.getModuleName(mod) << "' (address " << mod << ")"
<< std::endl;
}
LOG(Error) << e.what() << std::endl;
LOG(Error) << "Aborting program" << std::endl;
Grid_finalize();
exit(EXIT_FAILURE);
}

View File

@ -34,11 +34,10 @@ See the full license in the file "LICENSE" in the top level distribution directo
#include <Grid/Hadrons/Global.hpp>
#endif
#define SRC_LOC std::string(__FUNCTION__) + " at " + std::string(__FILE__) + ":"\
+ std::to_string(__LINE__)
#define HADRON_ERROR(exc, msg)\
LOG(Error) << msg << std::endl;\
throw(Exceptions::exc(msg, SRC_LOC));
#define HADRONS_SRC_LOC std::string(__FUNCTION__) + " at " \
+ std::string(__FILE__) + ":" + std::to_string(__LINE__)
#define HADRONS_ERROR(exc, msg)\
throw(Exceptions::exc(msg, HADRONS_SRC_LOC));
#define DECL_EXC(name, base) \
class name: public base\
@ -57,6 +56,7 @@ namespace Exceptions
DECL_EXC(Implementation, Logic);
DECL_EXC(Range, Logic);
DECL_EXC(Size, Logic);
// runtime errors
DECL_EXC(Runtime, std::runtime_error);
DECL_EXC(Argument, Runtime);
@ -65,6 +65,9 @@ namespace Exceptions
DECL_EXC(Parsing, Runtime);
DECL_EXC(Program, Runtime);
DECL_EXC(System, Runtime);
// abort functions
void abort(const std::exception& e);
}
END_HADRONS_NAMESPACE
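For illustration, the reworked macro now throws without logging first; the logging happens once, in Exceptions::abort. A hedged sketch of the intended pattern:
// HADRONS_ERROR(Size, "odd block size") expands to
//   throw(Exceptions::Size("odd block size", HADRONS_SRC_LOC));
// with HADRONS_SRC_LOC evaluating to "<function> at <file>:<line>".
try
{
    HADRONS_ERROR(Size, "odd block size");
}
catch (const std::exception &e)
{
    Exceptions::abort(e); // logs type, offending module and message, then finalizes Grid and exits
}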

View File

@ -94,7 +94,7 @@ std::unique_ptr<T> Factory<T>::create(const std::string type,
}
catch (std::out_of_range &)
{
HADRON_ERROR(Argument, "object of type '" + type + "' unknown");
HADRONS_ERROR(Argument, "object of type '" + type + "' unknown");
}
return func(name);

View File

@ -57,7 +57,9 @@ public:
virtual ~GeneticScheduler(void) = default;
// access
const Gene & getMinSchedule(void);
int getMinValue(void);
V getMinValue(void);
// reset population
void initPopulation(void);
// breed a new generation
void nextGeneration(void);
// heuristic benchmarks
@ -76,8 +78,6 @@ public:
return out;
}
private:
// evolution steps
void initPopulation(void);
void doCrossover(void);
void doMutation(void);
// genetic operators
@ -116,7 +116,7 @@ GeneticScheduler<V, T>::getMinSchedule(void)
}
template <typename V, typename T>
int GeneticScheduler<V, T>::getMinValue(void)
V GeneticScheduler<V, T>::getMinValue(void)
{
return population_.begin()->first;
}
@ -130,7 +130,7 @@ void GeneticScheduler<V, T>::nextGeneration(void)
{
initPopulation();
}
LOG(Debug) << "Starting population:\n" << *this << std::endl;
//LOG(Debug) << "Starting population:\n" << *this << std::endl;
// random mutations
//PARALLEL_FOR_LOOP
@ -138,7 +138,7 @@ void GeneticScheduler<V, T>::nextGeneration(void)
{
doMutation();
}
LOG(Debug) << "After mutations:\n" << *this << std::endl;
//LOG(Debug) << "After mutations:\n" << *this << std::endl;
// mating
//PARALLEL_FOR_LOOP
@ -146,14 +146,14 @@ void GeneticScheduler<V, T>::nextGeneration(void)
{
doCrossover();
}
LOG(Debug) << "After mating:\n" << *this << std::endl;
//LOG(Debug) << "After mating:\n" << *this << std::endl;
// grim reaper
auto it = population_.begin();
std::advance(it, par_.popSize);
population_.erase(it, population_.end());
LOG(Debug) << "After grim reaper:\n" << *this << std::endl;
//LOG(Debug) << "After grim reaper:\n" << *this << std::endl;
}
// evolution steps /////////////////////////////////////////////////////////////
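A hedged driver for the revised interface: initPopulation() is now public so callers can reset the search, and getMinValue() returns the cost type V instead of int. Constructor arguments are elided and the template arguments are placeholders:
// Sketch only: V = size_t cost, T = unsigned int module address.
GeneticScheduler<size_t, unsigned int> gen(/* graph, parameters */);
gen.initPopulation();                          // explicit reset of the population
for (unsigned int i = 0; i < maxGen; ++i)
{
    gen.nextGeneration();                      // mutate, mate, then cull back to popSize
}
size_t bestCost          = gen.getMinValue();  // V, no longer truncated to int
const auto &bestSchedule = gen.getMinSchedule();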

View File

@ -37,20 +37,38 @@ HadronsLogger Hadrons::HadronsLogWarning(1,"Warning");
HadronsLogger Hadrons::HadronsLogMessage(1,"Message");
HadronsLogger Hadrons::HadronsLogIterative(1,"Iterative");
HadronsLogger Hadrons::HadronsLogDebug(1,"Debug");
HadronsLogger Hadrons::HadronsLogIRL(1,"IRL");
void Hadrons::initLogger(void)
{
auto w = std::string("Hadrons").length();
auto w = std::string("Hadrons").length();
int cw = 8;
GridLogError.setTopWidth(w);
GridLogWarning.setTopWidth(w);
GridLogMessage.setTopWidth(w);
GridLogIterative.setTopWidth(w);
GridLogDebug.setTopWidth(w);
HadronsLogError.Active(GridLogError.isActive());
HadronsLogWarning.Active(GridLogWarning.isActive());
GridLogIRL.setTopWidth(w);
GridLogError.setChanWidth(cw);
GridLogWarning.setChanWidth(cw);
GridLogMessage.setChanWidth(cw);
GridLogIterative.setChanWidth(cw);
GridLogDebug.setChanWidth(cw);
GridLogIRL.setChanWidth(cw);
HadronsLogError.Active(true);
HadronsLogWarning.Active(true);
HadronsLogMessage.Active(GridLogMessage.isActive());
HadronsLogIterative.Active(GridLogIterative.isActive());
HadronsLogDebug.Active(GridLogDebug.isActive());
HadronsLogIRL.Active(GridLogIRL.isActive());
HadronsLogError.setChanWidth(cw);
HadronsLogWarning.setChanWidth(cw);
HadronsLogMessage.setChanWidth(cw);
HadronsLogIterative.setChanWidth(cw);
HadronsLogDebug.setChanWidth(cw);
HadronsLogIRL.setChanWidth(cw);
}
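The new IRL channel plugs into the existing LOG macro, e.g. (illustrative, residual is a placeholder):
LOG(IRL) << "Lanczos residual: " << residual << std::endl; // routed through HadronsLogIRL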
// type utilities //////////////////////////////////////////////////////////////
@ -74,3 +92,84 @@ const std::string Hadrons::resultFileExt = "h5";
#else
const std::string Hadrons::resultFileExt = "xml";
#endif
// recursive mkdir /////////////////////////////////////////////////////////////
int Hadrons::mkdir(const std::string dirName)
{
if (!dirName.empty() and access(dirName.c_str(), R_OK|W_OK|X_OK))
{
mode_t mode755;
char tmp[MAX_PATH_LENGTH];
char *p = NULL;
size_t len;
mode755 = S_IRWXU|S_IRGRP|S_IXGRP|S_IROTH|S_IXOTH;
snprintf(tmp, sizeof(tmp), "%s", dirName.c_str());
len = strlen(tmp);
if(tmp[len - 1] == '/')
{
tmp[len - 1] = 0;
}
for(p = tmp + 1; *p; p++)
{
if(*p == '/')
{
*p = 0;
::mkdir(tmp, mode755);
*p = '/';
}
}
return ::mkdir(tmp, mode755);
}
else
{
return 0;
}
}
std::string Hadrons::basename(const std::string &s)
{
constexpr char sep = '/';
size_t i = s.rfind(sep, s.length());
if (i != std::string::npos)
{
return s.substr(i+1, s.length() - i);
}
else
{
return s;
}
}
std::string Hadrons::dirname(const std::string &s)
{
constexpr char sep = '/';
size_t i = s.rfind(sep, s.length());
if (i != std::string::npos)
{
return s.substr(0, i);
}
else
{
return "";
}
}
void Hadrons::makeFileDir(const std::string filename, GridBase *g)
{
if (g->IsBoss())
{
std::string dir = dirname(filename);
int status = mkdir(dir);
if (status)
{
HADRONS_ERROR(Io, "cannot create directory '" + dir
+ "' ( " + std::strerror(errno) + ")");
}
}
}
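The path helpers above compose as follows (paths are hypothetical):
// dirname("mesons/pt_0/corr.xml")  == "mesons/pt_0"
// basename("mesons/pt_0/corr.xml") == "corr.xml"
// mkdir("mesons/pt_0")             creates each missing component with mode 755
makeFileDir("mesons/pt_0/corr.xml", g); // boss process only; throws Exceptions::Io on failure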

View File

@ -39,30 +39,40 @@ See the full license in the file "LICENSE" in the top level distribution directo
#define SITE_SIZE_TYPE size_t
#endif
#define BEGIN_HADRONS_NAMESPACE \
namespace Grid {\
using namespace QCD;\
namespace Hadrons {\
using Grid::operator<<;
#define END_HADRONS_NAMESPACE }}
#define BEGIN_MODULE_NAMESPACE(name)\
namespace name {\
using Grid::operator<<;
#define END_MODULE_NAMESPACE }
#ifndef DEFAULT_ASCII_PREC
#define DEFAULT_ASCII_PREC 16
#endif
/* the 'using Grid::operator<<;' statement prevents a very nasty compilation
* error with GCC 5 (clang & GCC 6 compile fine without it).
*/
#define BEGIN_HADRONS_NAMESPACE \
namespace Grid {\
using namespace QCD;\
namespace Hadrons {\
using Grid::operator<<;\
using Grid::operator>>;
#define END_HADRONS_NAMESPACE }}
#define BEGIN_MODULE_NAMESPACE(name)\
namespace name {\
using Grid::operator<<;\
using Grid::operator>>;
#define END_MODULE_NAMESPACE }
#ifndef FIMPL
#define FIMPL WilsonImplR
#endif
#ifndef ZFIMPL
#define ZFIMPL ZWilsonImplR
#endif
#ifndef SIMPL
#define SIMPL ScalarImplCR
#endif
#ifndef GIMPL
#define GIMPL GimplTypesR
#define GIMPL PeriodicGimplR
#endif
BEGIN_HADRONS_NAMESPACE
@ -83,17 +93,15 @@ typedef typename SImpl::Field ScalarField##suffix;\
typedef typename SImpl::Field PropagatorField##suffix;
#define SOLVER_TYPE_ALIASES(FImpl, suffix)\
typedef std::function<void(FermionField##suffix &,\
const FermionField##suffix &)> SolverFn##suffix;
typedef Solver<FImpl> Solver##suffix;
#define SINK_TYPE_ALIASES(suffix)\
typedef std::function<SlicedPropagator##suffix\
(const PropagatorField##suffix &)> SinkFn##suffix;
#define FGS_TYPE_ALIASES(FImpl, suffix)\
#define FG_TYPE_ALIASES(FImpl, suffix)\
FERM_TYPE_ALIASES(FImpl, suffix)\
GAUGE_TYPE_ALIASES(FImpl, suffix)\
SOLVER_TYPE_ALIASES(FImpl, suffix)
GAUGE_TYPE_ALIASES(FImpl, suffix)
// logger
class HadronsLogger: public Logger
@ -104,13 +112,14 @@ public:
};
#define LOG(channel) std::cout << HadronsLog##channel
#define DEBUG_VAR(var) LOG(Debug) << #var << "= " << (var) << std::endl;
#define HADRONS_DEBUG_VAR(var) LOG(Debug) << #var << "= " << (var) << std::endl;
extern HadronsLogger HadronsLogError;
extern HadronsLogger HadronsLogWarning;
extern HadronsLogger HadronsLogMessage;
extern HadronsLogger HadronsLogIterative;
extern HadronsLogger HadronsLogDebug;
extern HadronsLogger HadronsLogIRL;
void initLogger(void);
@ -180,6 +189,28 @@ typedef XmlWriter ResultWriter;
#define RESULT_FILE_NAME(name) \
name + "." + std::to_string(vm().getTrajectory()) + "." + resultFileExt
// recursive mkdir
#define MAX_PATH_LENGTH 512u
int mkdir(const std::string dirName);
std::string basename(const std::string &s);
std::string dirname(const std::string &s);
void makeFileDir(const std::string filename, GridBase *g);
// default Schur convention
#ifndef HADRONS_DEFAULT_SCHUR
#define HADRONS_DEFAULT_SCHUR DiagTwo
#endif
#define _HADRONS_SCHUR_OP_(conv) Schur##conv##Operator
#define HADRONS_SCHUR_OP(conv) _HADRONS_SCHUR_OP_(conv)
#define HADRONS_DEFAULT_SCHUR_OP HADRONS_SCHUR_OP(HADRONS_DEFAULT_SCHUR)
#define _HADRONS_SCHUR_SOLVE_(conv) SchurRedBlack##conv##Solve
#define HADRONS_SCHUR_SOLVE(conv) _HADRONS_SCHUR_SOLVE_(conv)
#define HADRONS_DEFAULT_SCHUR_SOLVE HADRONS_SCHUR_SOLVE(HADRONS_DEFAULT_SCHUR)
// stringify macro
#define _HADRONS_STR(x) #x
#define HADRONS_STR(x) _HADRONS_STR(x)
END_HADRONS_NAMESPACE
#include <Grid/Hadrons/Exceptions.hpp>
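With the default convention, the new Schur macros expand as below; the operator instantiation is a sketch with assumed template arguments:
// HADRONS_DEFAULT_SCHUR        = DiagTwo
// HADRONS_DEFAULT_SCHUR_OP     -> SchurDiagTwoOperator
// HADRONS_DEFAULT_SCHUR_SOLVE  -> SchurRedBlackDiagTwoSolve
// HADRONS_STR(HADRONS_DEFAULT_SCHUR) -> "DiagTwo"
HADRONS_DEFAULT_SCHUR_OP<FMat, FermionField> schurOp(mat);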

View File

@ -184,7 +184,7 @@ void Graph<T>::removeVertex(const T &value)
}
else
{
HADRON_ERROR(Range, "vertex does not exists");
HADRONS_ERROR(Range, "vertex does not exists");
}
// remove all edges containing the vertex
@ -213,7 +213,7 @@ void Graph<T>::removeEdge(const Edge &e)
}
else
{
HADRON_ERROR(Range, "edge does not exists");
HADRONS_ERROR(Range, "edge does not exists");
}
}
@ -259,7 +259,7 @@ void Graph<T>::mark(const T &value, const bool doMark)
}
else
{
HADRON_ERROR(Range, "vertex does not exists");
HADRONS_ERROR(Range, "vertex does not exists");
}
}
@ -297,7 +297,7 @@ bool Graph<T>::isMarked(const T &value) const
}
else
{
HADRON_ERROR(Range, "vertex does not exists");
HADRONS_ERROR(Range, "vertex does not exists");
return false;
}
@ -543,7 +543,7 @@ std::vector<T> Graph<T>::topoSort(void)
{
if (tmpMarked.at(v))
{
HADRON_ERROR(Range, "cannot topologically sort a cyclic graph");
HADRONS_ERROR(Range, "cannot topologically sort a cyclic graph");
}
if (!isMarked(v))
{
@ -602,7 +602,7 @@ std::vector<T> Graph<T>::topoSort(Gen &gen)
{
if (tmpMarked.at(v))
{
HADRON_ERROR(Range, "cannot topologically sort a cyclic graph");
HADRONS_ERROR(Range, "cannot topologically sort a cyclic graph");
}
if (!isMarked(v))
{

View File

@ -56,14 +56,21 @@ int main(int argc, char *argv[])
Grid_init(&argc, &argv);
// execution
Application application(parameterFileName);
application.parseParameterFile(parameterFileName);
if (!scheduleFileName.empty())
try
{
application.loadSchedule(scheduleFileName);
Application application(parameterFileName);
application.parseParameterFile(parameterFileName);
if (!scheduleFileName.empty())
{
application.loadSchedule(scheduleFileName);
}
application.run();
}
catch (const std::exception& e)
{
Exceptions::abort(e);
}
application.run();
// epilogue
LOG(Message) << "Grid is finalizing now" << std::endl;

View File

@ -1,5 +1,5 @@
lib_LIBRARIES = libHadrons.a
bin_PROGRAMS = HadronsXmlRun HadronsXmlSchedule
bin_PROGRAMS = HadronsXmlRun
include modules.inc
@ -14,7 +14,10 @@ libHadrons_a_SOURCES = \
libHadrons_adir = $(pkgincludedir)/Hadrons
nobase_libHadrons_a_HEADERS = \
$(modules_hpp) \
AllToAllVectors.hpp \
AllToAllReduction.hpp \
Application.hpp \
EigenPack.hpp \
Environment.hpp \
Exceptions.hpp \
Factory.hpp \
@ -24,10 +27,8 @@ nobase_libHadrons_a_HEADERS = \
Module.hpp \
Modules.hpp \
ModuleFactory.hpp \
Solver.hpp \
VirtualMachine.hpp
HadronsXmlRun_SOURCES = HadronsXmlRun.cc
HadronsXmlRun_LDADD = libHadrons.a -lGrid
HadronsXmlSchedule_SOURCES = HadronsXmlSchedule.cc
HadronsXmlSchedule_LDADD = libHadrons.a -lGrid

View File

@ -49,7 +49,7 @@ std::string ModuleBase::getName(void) const
// get factory registration name if available
std::string ModuleBase::getRegisteredName(void)
{
HADRON_ERROR(Definition, "module '" + getName() + "' has no registered type"
HADRONS_ERROR(Definition, "module '" + getName() + "' has no registered type"
+ " in the factory");
}

View File

@ -35,32 +35,7 @@ See the full license in the file "LICENSE" in the top level distribution directo
BEGIN_HADRONS_NAMESPACE
// module registration macros
#define MODULE_REGISTER(mod, base)\
class mod: public base\
{\
public:\
typedef base Base;\
using Base::Base;\
virtual std::string getRegisteredName(void)\
{\
return std::string(#mod);\
}\
};\
class mod##ModuleRegistrar\
{\
public:\
mod##ModuleRegistrar(void)\
{\
ModuleFactory &modFac = ModuleFactory::getInstance();\
modFac.registerBuilder(#mod, [&](const std::string name)\
{\
return std::unique_ptr<mod>(new mod(name));\
});\
}\
};\
static mod##ModuleRegistrar mod##ModuleRegistrarInstance;
#define MODULE_REGISTER_NS(mod, base, ns)\
#define MODULE_REGISTER(mod, base, ns)\
class mod: public base\
{\
public:\
@ -85,12 +60,19 @@ public:\
};\
static ns##mod##ModuleRegistrar ns##mod##ModuleRegistrarInstance;
#define MODULE_REGISTER_TMP(mod, base, ns)\
extern template class base;\
MODULE_REGISTER(mod, ARG(base), ns);
#define ARG(...) __VA_ARGS__
#define MACRO_REDIRECT(arg1, arg2, arg3, macro, ...) macro
#define envGet(type, name)\
*env().template getObject<type>(name)
#define envGetDerived(base, type, name)\
*env().template getDerivedObject<base, type>(name)
#define envGetTmp(type, var)\
type &var = *env().template getObject<type>(getName() + "_tmp_" + #var)
@ -137,6 +119,16 @@ envTmp(type, name, Ls, env().getGrid(Ls))
#define envTmpLat(...)\
MACRO_REDIRECT(__VA_ARGS__, envTmpLat5, envTmpLat4)(__VA_ARGS__)
#define saveResult(ioStem, name, result)\
if (env().getGrid()->IsBoss() and !ioStem.empty())\
{\
makeFileDir(ioStem, env().getGrid());\
{\
ResultWriter _writer(RESULT_FILE_NAME(ioStem));\
write(_writer, name, result);\
}\
}
/******************************************************************************
* Module class *
******************************************************************************/
@ -162,6 +154,8 @@ public:
// parse parameters
virtual void parseParameters(XmlReader &reader, const std::string name) = 0;
virtual void saveParameters(XmlWriter &writer, const std::string name) = 0;
// parameter string
virtual std::string parString(void) const = 0;
// setup
virtual void setup(void) {};
virtual void execute(void) = 0;
@ -190,9 +184,11 @@ public:
// parse parameters
virtual void parseParameters(XmlReader &reader, const std::string name);
virtual void saveParameters(XmlWriter &writer, const std::string name);
// parameter string
virtual std::string parString(void) const;
// parameter access
const P & par(void) const;
void setPar(const P &par);
const P & par(void) const;
void setPar(const P &par);
private:
P par_;
};
@ -215,6 +211,8 @@ public:
push(writer, "options");
pop(writer);
};
// parameter string (empty)
virtual std::string parString(void) const {return "";};
};
/******************************************************************************
@ -237,6 +235,16 @@ void Module<P>::saveParameters(XmlWriter &writer, const std::string name)
write(writer, name, par_);
}
template <typename P>
std::string Module<P>::parString(void) const
{
XmlWriter writer("", "");
write(writer, par_.SerialisableClassName(), par_);
return writer.string();
}
template <typename P>
const P & Module<P>::par(void) const
{
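Two of the changes above in use, as a hedged sketch: MODULE_REGISTER_TMP pairs an extern template declaration with registration (matched by an explicit 'template class ...;' in the module's .cc file, as in the MAction diffs below), and saveResult wraps the boss-only result write:
// MODULE_REGISTER_TMP(DWF, TDWF<FIMPL>, MAction) expands, in outline, to
//   extern template class TDWF<FIMPL>;
//   MODULE_REGISTER(DWF, TDWF<FIMPL>, MAction);

// Inside some execute(), with a serialisable Result type (names illustrative):
Result result;
// ... fill result ...
saveResult(par().output, "meson", result); // writes <output>.<traj>.<resultFileExt> on the boss rank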

View File

@ -1,64 +1,62 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: Lanny91 <andrew.lawson@gmail.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/Baryon.hpp>
#include <Grid/Hadrons/Modules/MContraction/Meson.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonian.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonianNonEye.hpp>
#include <Grid/Hadrons/Modules/MContraction/DiscLoop.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakNeutral4ptDisc.hpp>
#include <Grid/Hadrons/Modules/MContraction/Gamma3pt.hpp>
#include <Grid/Hadrons/Modules/MContraction/WardIdentity.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonianEye.hpp>
#include <Grid/Hadrons/Modules/MFermion/GaugeProp.hpp>
#include <Grid/Hadrons/Modules/MSource/SeqGamma.hpp>
#include <Grid/Hadrons/Modules/MSource/Point.hpp>
#include <Grid/Hadrons/Modules/MSource/Wall.hpp>
#include <Grid/Hadrons/Modules/MSource/Z2.hpp>
#include <Grid/Hadrons/Modules/MSource/SeqConserved.hpp>
#include <Grid/Hadrons/Modules/MSink/Smear.hpp>
#include <Grid/Hadrons/Modules/MSink/Point.hpp>
#include <Grid/Hadrons/Modules/MSolver/RBPrecCG.hpp>
#include <Grid/Hadrons/Modules/MGauge/Unit.hpp>
#include <Grid/Hadrons/Modules/MGauge/Random.hpp>
#include <Grid/Hadrons/Modules/MGauge/StochEm.hpp>
#include <Grid/Hadrons/Modules/MGauge/FundtoHirep.hpp>
#include <Grid/Hadrons/Modules/MUtilities/TestSeqGamma.hpp>
#include <Grid/Hadrons/Modules/MUtilities/TestSeqConserved.hpp>
#include <Grid/Hadrons/Modules/MLoop/NoiseLoop.hpp>
#include <Grid/Hadrons/Modules/MScalar/FreeProp.hpp>
#include <Grid/Hadrons/Modules/MScalar/Scalar.hpp>
#include <Grid/Hadrons/Modules/MScalar/ChargedProp.hpp>
#include <Grid/Hadrons/Modules/MAction/DWF.hpp>
#include <Grid/Hadrons/Modules/MAction/Wilson.hpp>
#include <Grid/Hadrons/Modules/MAction/WilsonClover.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TrKinetic.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TimeMomProbe.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/StochFreeField.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TwoPointNPR.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/Grad.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TransProj.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/Div.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TrMag.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/ShiftProbe.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/Utils.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/EMT.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TwoPoint.hpp>
#include <Grid/Hadrons/Modules/MScalarSUN/TrPhi.hpp>
#include <Grid/Hadrons/Modules/MIO/LoadNersc.hpp>
#include <Grid/Hadrons/Modules/MScalar/FreeProp.hpp>
#include <Grid/Hadrons/Modules/MScalar/Scalar.hpp>
#include <Grid/Hadrons/Modules/MScalar/ScalarVP.hpp>
#include <Grid/Hadrons/Modules/MScalar/ChargedProp.hpp>
#include <Grid/Hadrons/Modules/MScalar/VPCounterTerms.hpp>
#include <Grid/Hadrons/Modules/MLoop/NoiseLoop.hpp>
#include <Grid/Hadrons/Modules/MIO/LoadEigenPack.hpp>
#include <Grid/Hadrons/Modules/MIO/LoadCoarseEigenPack.hpp>
#include <Grid/Hadrons/Modules/MIO/LoadBinary.hpp>
#include <Grid/Hadrons/Modules/MIO/LoadNersc.hpp>
#include <Grid/Hadrons/Modules/MSink/Smear.hpp>
#include <Grid/Hadrons/Modules/MSink/Point.hpp>
#include <Grid/Hadrons/Modules/MFermion/FreeProp.hpp>
#include <Grid/Hadrons/Modules/MFermion/GaugeProp.hpp>
#include <Grid/Hadrons/Modules/MGauge/FundtoHirep.hpp>
#include <Grid/Hadrons/Modules/MGauge/Random.hpp>
#include <Grid/Hadrons/Modules/MGauge/StoutSmearing.hpp>
#include <Grid/Hadrons/Modules/MGauge/Unit.hpp>
#include <Grid/Hadrons/Modules/MGauge/StochEm.hpp>
#include <Grid/Hadrons/Modules/MGauge/UnitEm.hpp>
#include <Grid/Hadrons/Modules/MUtilities/TestSeqGamma.hpp>
#include <Grid/Hadrons/Modules/MUtilities/TestSeqConserved.hpp>
#include <Grid/Hadrons/Modules/MSource/SeqConserved.hpp>
#include <Grid/Hadrons/Modules/MSource/Z2.hpp>
#include <Grid/Hadrons/Modules/MSource/Wall.hpp>
#include <Grid/Hadrons/Modules/MSource/SeqGamma.hpp>
#include <Grid/Hadrons/Modules/MSource/Point.hpp>
#include <Grid/Hadrons/Modules/MContraction/MesonFieldGamma.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonianEye.hpp>
#include <Grid/Hadrons/Modules/MContraction/Baryon.hpp>
#include <Grid/Hadrons/Modules/MContraction/A2APionField.hpp>
#include <Grid/Hadrons/Modules/MContraction/Meson.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonian.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakNeutral4ptDisc.hpp>
#include <Grid/Hadrons/Modules/MContraction/Gamma3pt.hpp>
#include <Grid/Hadrons/Modules/MContraction/DiscLoop.hpp>
#include <Grid/Hadrons/Modules/MContraction/WeakHamiltonianNonEye.hpp>
#include <Grid/Hadrons/Modules/MContraction/A2AMeson.hpp>
#include <Grid/Hadrons/Modules/MContraction/WardIdentity.hpp>
#include <Grid/Hadrons/Modules/MContraction/A2AMesonField.hpp>
#include <Grid/Hadrons/Modules/MAction/WilsonClover.hpp>
#include <Grid/Hadrons/Modules/MAction/ScaledDWF.hpp>
#include <Grid/Hadrons/Modules/MAction/MobiusDWF.hpp>
#include <Grid/Hadrons/Modules/MAction/Wilson.hpp>
#include <Grid/Hadrons/Modules/MAction/DWF.hpp>
#include <Grid/Hadrons/Modules/MAction/ZMobiusDWF.hpp>
#include <Grid/Hadrons/Modules/MSolver/RBPrecCG.hpp>
#include <Grid/Hadrons/Modules/MSolver/LocalCoherenceLanczos.hpp>
#include <Grid/Hadrons/Modules/MSolver/A2AVectors.hpp>

View File

@ -2,7 +2,7 @@
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/HadronsXmlSchedule.cc
Source file: extras/Hadrons/Modules/MAction/DWF.cc
Copyright (C) 2015-2018
@ -25,41 +25,11 @@ with this program; if not, write to the Free Software Foundation, Inc.,
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Application.hpp>
#include <Grid/Hadrons/Modules/MAction/DWF.hpp>
using namespace Grid;
using namespace QCD;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TDWF<FIMPL>;
int main(int argc, char *argv[])
{
// parse command line
std::string parameterFileName, scheduleFileName;
if (argc < 3)
{
std::cerr << "usage: " << argv[0] << " <parameter file> <schedule output> [Grid options]";
std::cerr << std::endl;
std::exit(EXIT_FAILURE);
}
parameterFileName = argv[1];
scheduleFileName = argv[2];
// initialization
Grid_init(&argc, &argv);
// execution
Application application;
application.parseParameterFile(parameterFileName);
application.schedule();
application.printSchedule();
application.saveSchedule(scheduleFileName);
// epilogue
LOG(Message) << "Grid is finalizing now" << std::endl;
Grid_finalize();
return EXIT_SUCCESS;
}

View File

@ -56,12 +56,12 @@ template <typename FImpl>
class TDWF: public Module<DWFPar>
{
public:
FGS_TYPE_ALIASES(FImpl,);
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TDWF(const std::string name);
// destructor
virtual ~TDWF(void) = default;
virtual ~TDWF(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -72,7 +72,8 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(DWF, TDWF<FIMPL>, MAction);
extern template class TDWF<FIMPL>;
MODULE_REGISTER_TMP(DWF, TDWF<FIMPL>, MAction);
/******************************************************************************
* DWF template implementation *

View File

@ -0,0 +1,7 @@
#include <Grid/Hadrons/Modules/MAction/MobiusDWF.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TMobiusDWF<FIMPL>;

View File

@ -0,0 +1,109 @@
#ifndef Hadrons_MAction_MobiusDWF_hpp_
#define Hadrons_MAction_MobiusDWF_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* Mobius domain-wall fermion action *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MAction)
class MobiusDWFPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(MobiusDWFPar,
std::string , gauge,
unsigned int, Ls,
double , mass,
double , M5,
double , b,
double , c,
std::string , boundary);
};
template <typename FImpl>
class TMobiusDWF: public Module<MobiusDWFPar>
{
public:
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TMobiusDWF(const std::string name);
// destructor
virtual ~TMobiusDWF(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(MobiusDWF, TMobiusDWF<FIMPL>, MAction);
/******************************************************************************
* TMobiusDWF implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TMobiusDWF<FImpl>::TMobiusDWF(const std::string name)
: Module<MobiusDWFPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TMobiusDWF<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().gauge};
return in;
}
template <typename FImpl>
std::vector<std::string> TMobiusDWF<FImpl>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TMobiusDWF<FImpl>::setup(void)
{
LOG(Message) << "Setting up Mobius domain wall fermion matrix with m= "
<< par().mass << ", M5= " << par().M5 << ", Ls= " << par().Ls
<< ", b= " << par().b << ", c= " << par().c
<< " using gauge field '" << par().gauge << "'"
<< std::endl;
LOG(Message) << "Fermion boundary conditions: " << par().boundary
<< std::endl;
env().createGrid(par().Ls);
auto &U = envGet(LatticeGaugeField, par().gauge);
auto &g4 = *env().getGrid();
auto &grb4 = *env().getRbGrid();
auto &g5 = *env().getGrid(par().Ls);
auto &grb5 = *env().getRbGrid(par().Ls);
std::vector<Complex> boundary = strToVec<Complex>(par().boundary);
typename MobiusFermion<FImpl>::ImplParams implParams(boundary);
envCreateDerived(FMat, MobiusFermion<FImpl>, getName(), par().Ls, U, g5,
grb5, g4, grb4, par().mass, par().M5, par().b, par().c,
implParams);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TMobiusDWF<FImpl>::execute(void)
{}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MAction_MobiusDWF_hpp_
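A hedged example of wiring the module above into an application, using Application::createModule as in the Grid test programs; all values are placeholders:
MAction::MobiusDWF::Par actionPar;
actionPar.gauge    = "gauge";
actionPar.Ls       = 12;
actionPar.mass     = 0.01;
actionPar.M5       = 1.8;
actionPar.b        = 1.5;
actionPar.c        = 0.5;
actionPar.boundary = "1 1 1 -1"; // antiperiodic in time
application.createModule<MAction::MobiusDWF>("mobius_action", actionPar);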

View File

@ -0,0 +1,7 @@
#include <Grid/Hadrons/Modules/MAction/ScaledDWF.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TScaledDWF<FIMPL>;

View File

@ -0,0 +1,108 @@
#ifndef Hadrons_MAction_ScaledDWF_hpp_
#define Hadrons_MAction_ScaledDWF_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* Scaled domain wall fermion *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MAction)
class ScaledDWFPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(ScaledDWFPar,
std::string , gauge,
unsigned int, Ls,
double , mass,
double , M5,
double , scale,
std::string , boundary);
};
template <typename FImpl>
class TScaledDWF: public Module<ScaledDWFPar>
{
public:
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TScaledDWF(const std::string name);
// destructor
virtual ~TScaledDWF(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(ScaledDWF, TScaledDWF<FIMPL>, MAction);
/******************************************************************************
* TScaledDWF implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TScaledDWF<FImpl>::TScaledDWF(const std::string name)
: Module<ScaledDWFPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TScaledDWF<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().gauge};
return in;
}
template <typename FImpl>
std::vector<std::string> TScaledDWF<FImpl>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TScaledDWF<FImpl>::setup(void)
{
LOG(Message) << "Setting up scaled domain wall fermion matrix with m= "
<< par().mass << ", M5= " << par().M5 << ", Ls= " << par().Ls
<< ", scale= " << par().scale
<< " using gauge field '" << par().gauge << "'"
<< std::endl;
LOG(Message) << "Fermion boundary conditions: " << par().boundary
<< std::endl;
env().createGrid(par().Ls);
auto &U = envGet(LatticeGaugeField, par().gauge);
auto &g4 = *env().getGrid();
auto &grb4 = *env().getRbGrid();
auto &g5 = *env().getGrid(par().Ls);
auto &grb5 = *env().getRbGrid(par().Ls);
std::vector<Complex> boundary = strToVec<Complex>(par().boundary);
typename MobiusFermion<FImpl>::ImplParams implParams(boundary);
envCreateDerived(FMat, ScaledShamirFermion<FImpl>, getName(), par().Ls, U, g5,
grb5, g4, grb4, par().mass, par().M5, par().scale,
implParams);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TScaledDWF<FImpl>::execute(void)
{}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MAction_ScaledDWF_hpp_

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MAction/Wilson.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MAction/Wilson.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TWilson<FIMPL>;

View File

@ -54,12 +54,12 @@ template <typename FImpl>
class TWilson: public Module<WilsonPar>
{
public:
FGS_TYPE_ALIASES(FImpl,);
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TWilson(const std::string name);
// destructor
virtual ~TWilson(void) = default;
virtual ~TWilson(void) {};
// dependencies/products
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -70,7 +70,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Wilson, TWilson<FIMPL>, MAction);
MODULE_REGISTER_TMP(Wilson, TWilson<FIMPL>, MAction);
/******************************************************************************
* TWilson template implementation *
@ -102,7 +102,7 @@ std::vector<std::string> TWilson<FImpl>::getOutput(void)
template <typename FImpl>
void TWilson<FImpl>::setup(void)
{
LOG(Message) << "Setting up TWilson fermion matrix with m= " << par().mass
LOG(Message) << "Setting up Wilson fermion matrix with m= " << par().mass
<< " using gauge field '" << par().gauge << "'" << std::endl;
LOG(Message) << "Fermion boundary conditions: " << par().boundary
<< std::endl;

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MAction/WilsonClover.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MAction/WilsonClover.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TWilsonClover<FIMPL>;

View File

@ -2,12 +2,13 @@
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MAction/Wilson.hpp
Source file: extras/Hadrons/Modules/MAction/WilsonClover.hpp
Copyright (C) 2015
Copyright (C) 2016
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: Guido Cossu <guido.cossu@ed.ac.uk>
Author: pretidav <david.preti@csic.es>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -37,7 +38,7 @@ See the full license in the file "LICENSE" in the top level distribution directo
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* TWilson quark action *
* Wilson clover quark action *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MAction)
@ -58,12 +59,12 @@ template <typename FImpl>
class TWilsonClover: public Module<WilsonCloverPar>
{
public:
FGS_TYPE_ALIASES(FImpl,);
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TWilsonClover(const std::string name);
// destructor
virtual ~TWilsonClover(void) = default;
virtual ~TWilsonClover(void) {};
// dependencies/products
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -73,7 +74,7 @@ public:
virtual void execute(void);
};
MODULE_REGISTER_NS(WilsonClover, TWilsonClover<FIMPL>, MAction);
MODULE_REGISTER_TMP(WilsonClover, TWilsonClover<FIMPL>, MAction);
/******************************************************************************
* TWilsonClover template implementation *
@ -105,13 +106,7 @@ std::vector<std::string> TWilsonClover<FImpl>::getOutput(void)
template <typename FImpl>
void TWilsonClover<FImpl>::setup(void)
{
//unsigned int size;
// size = 2*env().template lattice4dSize<typename FImpl::DoubledGaugeField>();
// env().registerObject(getName(), size);
LOG(Message) << "Setting up TWilsonClover fermion matrix with m= " << par().mass
LOG(Message) << "Setting up Wilson clover fermion matrix with m= " << par().mass
<< " using gauge field '" << par().gauge << "'" << std::endl;
LOG(Message) << "Fermion boundary conditions: " << par().boundary
<< std::endl;
@ -128,23 +123,12 @@ void TWilsonClover<FImpl>::setup(void)
par().csw_t,
par().clover_anisotropy,
implParams);
//FMat *fMatPt = new WilsonCloverFermion<FImpl>(U, grid, gridRb, par().mass,
// par().csw_r,
// par().csw_t,
// par().clover_anisotropy,
// implParams);
//env().setObject(getName(), fMatPt);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TWilsonClover<FImpl>::execute()
{
}
{}
END_MODULE_NAMESPACE

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MAction/ZMobiusDWF.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MAction/ZMobiusDWF.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MAction;
template class Grid::Hadrons::MAction::TZMobiusDWF<ZFIMPL>;

View File

@ -0,0 +1,143 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MAction/ZMobiusDWF.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_MAction_ZMobiusDWF_hpp_
#define Hadrons_MAction_ZMobiusDWF_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* z-Mobius domain-wall fermion action *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MAction)
class ZMobiusDWFPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(ZMobiusDWFPar,
std::string , gauge,
unsigned int , Ls,
double , mass,
double , M5,
double , b,
double , c,
std::vector<std::complex<double>>, omega,
std::string , boundary);
};
template <typename FImpl>
class TZMobiusDWF: public Module<ZMobiusDWFPar>
{
public:
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TZMobiusDWF(const std::string name);
// destructor
virtual ~TZMobiusDWF(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(ZMobiusDWF, TZMobiusDWF<ZFIMPL>, MAction);
/******************************************************************************
* TZMobiusDWF implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TZMobiusDWF<FImpl>::TZMobiusDWF(const std::string name)
: Module<ZMobiusDWFPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TZMobiusDWF<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().gauge};
return in;
}
template <typename FImpl>
std::vector<std::string> TZMobiusDWF<FImpl>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TZMobiusDWF<FImpl>::setup(void)
{
LOG(Message) << "Setting up z-Mobius domain wall fermion matrix with m= "
<< par().mass << ", M5= " << par().M5 << ", Ls= " << par().Ls
<< ", b= " << par().b << ", c= " << par().c
<< " using gauge field '" << par().gauge << "'"
<< std::endl;
LOG(Message) << "Omegas: " << std::endl;
for (unsigned int i = 0; i < par().omega.size(); ++i)
{
LOG(Message) << " omega[" << i << "]= " << par().omega[i] << std::endl;
}
LOG(Message) << "Fermion boundary conditions: " << par().boundary
<< std::endl;
env().createGrid(par().Ls);
auto &U = envGet(LatticeGaugeField, par().gauge);
auto &g4 = *env().getGrid();
auto &grb4 = *env().getRbGrid();
auto &g5 = *env().getGrid(par().Ls);
auto &grb5 = *env().getRbGrid(par().Ls);
auto omega = par().omega;
std::vector<Complex> boundary = strToVec<Complex>(par().boundary);
typename ZMobiusFermion<FImpl>::ImplParams implParams(boundary);
envCreateDerived(FMat, ZMobiusFermion<FImpl>, getName(), par().Ls, U, g5,
grb5, g4, grb4, par().mass, par().M5, omega,
par().b, par().c, implParams);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TZMobiusDWF<FImpl>::execute(void)
{}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MAction_ZMobiusDWF_hpp_
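
For orientation, here is a minimal usage sketch of the module above from a Hadrons application. It is not part of this changeset; the module name and all parameter values are illustrative placeholders, and in a real run omega must carry one entry per s-slice (Ls entries).

// Hypothetical wiring of TZMobiusDWF into a Hadrons application (sketch only).
#include <Grid/Hadrons/Application.hpp>

using namespace Grid;
using namespace Hadrons;

void addZMobiusAction(Application &application)
{
    MAction::ZMobiusDWF::Par actionPar;
    actionPar.gauge    = "gauge";       // output name of a gauge-field module
    actionPar.Ls       = 2;             // placeholder; matches omega.size()
    actionPar.mass     = 0.01;
    actionPar.M5       = 1.8;
    actionPar.b        = 1.0;
    actionPar.c        = 0.0;
    actionPar.omega    = {{1.0, 0.0}, {1.0, 0.0}}; // one complex omega per s-slice
    actionPar.boundary = "1 1 1 -1";    // antiperiodic in time
    application.createModule<MAction::ZMobiusDWF>("ZMobius_action", actionPar);
}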


@@ -0,0 +1,8 @@
#include <Grid/Hadrons/Modules/MContraction/A2AMeson.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TA2AMeson<FIMPL>;
template class Grid::Hadrons::MContraction::TA2AMeson<ZFIMPL>;


@@ -0,0 +1,207 @@
#ifndef Hadrons_MContraction_A2AMeson_hpp_
#define Hadrons_MContraction_A2AMeson_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/AllToAllVectors.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* A2AMeson *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MContraction)
typedef std::pair<Gamma::Algebra, Gamma::Algebra> GammaPair;
class A2AMesonPar : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(A2AMesonPar,
int, Nl,
int, N,
std::string, A2A1,
std::string, A2A2,
std::string, gammas,
std::string, output);
};
template <typename FImpl>
class TA2AMeson : public Module<A2AMesonPar>
{
public:
FERM_TYPE_ALIASES(FImpl, );
SOLVER_TYPE_ALIASES(FImpl, );
typedef A2AModesSchurDiagTwo<typename FImpl::FermionField, FMat, Solver> A2ABase;
class Result : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(Result,
Gamma::Algebra, gamma_snk,
Gamma::Algebra, gamma_src,
std::vector<Complex>, corr);
};
public:
// constructor
TA2AMeson(const std::string name);
// destructor
virtual ~TA2AMeson(void){};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
virtual void parseGammaString(std::vector<GammaPair> &gammaList);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER(A2AMeson, ARG(TA2AMeson<FIMPL>), MContraction);
MODULE_REGISTER(ZA2AMeson, ARG(TA2AMeson<ZFIMPL>), MContraction);
/******************************************************************************
* TA2AMeson implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TA2AMeson<FImpl>::TA2AMeson(const std::string name)
: Module<A2AMesonPar>(name)
{
}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TA2AMeson<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().A2A1 + "_class", par().A2A2 + "_class"};
in.push_back(par().A2A1 + "_w_high_4d");
in.push_back(par().A2A2 + "_v_high_4d");
return in;
}
template <typename FImpl>
std::vector<std::string> TA2AMeson<FImpl>::getOutput(void)
{
std::vector<std::string> out = {};
return out;
}
template <typename FImpl>
void TA2AMeson<FImpl>::parseGammaString(std::vector<GammaPair> &gammaList)
{
gammaList.clear();
// Parse individual contractions from input string.
gammaList = strToVec<GammaPair>(par().gammas);
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2AMeson<FImpl>::setup(void)
{
int nt = env().getDim(Tp);
int N = par().N;
int Ls_ = env().getObjectLs(par().A2A1 + "_class");
envTmp(std::vector<FermionField>, "w1", 1, N, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "v1", 1, N, FermionField(env().getGrid(1)));
envTmpLat(FermionField, "tmpv_5d", Ls_);
envTmpLat(FermionField, "tmpw_5d", Ls_);
envTmp(std::vector<ComplexD>, "MF_x", 1, nt);
envTmp(std::vector<ComplexD>, "MF_y", 1, nt);
envTmp(std::vector<ComplexD>, "tmp", 1, nt);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2AMeson<FImpl>::execute(void)
{
LOG(Message) << "Computing A2A meson contractions" << std::endl;
Result result;
Gamma g5(Gamma::Algebra::Gamma5);
std::vector<GammaPair> gammaList;
int nt = env().getDim(Tp);
parseGammaString(gammaList);
result.gamma_snk = gammaList[0].first;
result.gamma_src = gammaList[0].second;
result.corr.resize(nt);
int Nl = par().Nl;
int N = par().N;
LOG(Message) << "N for A2A cont: " << N << std::endl;
envGetTmp(std::vector<ComplexD>, MF_x);
envGetTmp(std::vector<ComplexD>, MF_y);
envGetTmp(std::vector<ComplexD>, tmp);
for (unsigned int t = 0; t < nt; ++t)
{
    tmp[t] = ComplexD(0.0); // zero the accumulator directly
}
Gamma gSnk(gammaList[0].first);
Gamma gSrc(gammaList[0].second);
auto &a2a1_fn = envGet(A2ABase, par().A2A1 + "_class");
envGetTmp(std::vector<FermionField>, w1);
envGetTmp(std::vector<FermionField>, v1);
envGetTmp(FermionField, tmpv_5d);
envGetTmp(FermionField, tmpw_5d);
LOG(Message) << "Finding v and w vectors for N = " << N << std::endl;
for (int i = 0; i < N; i++)
{
a2a1_fn.return_v(i, tmpv_5d, v1[i]);
a2a1_fn.return_w(i, tmpw_5d, w1[i]);
}
LOG(Message) << "Found v and w vectors for N = " << N << std::endl;
for (unsigned int i = 0; i < N; i++)
{
v1[i] = gSnk * v1[i];
}
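// Note (editorial): both MF_x and MF_y below are built from v1, which already
// carries gSnk, so the source insertion gSrc is effectively taken equal to the
// sink one; this is fine for the Gamma5-Gamma5 pion case.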
int ty;
for (unsigned int i = 0; i < N; i++)
{
for (unsigned int j = 0; j < N; j++)
{
mySliceInnerProductVector(MF_x, w1[i], v1[j], Tp);
mySliceInnerProductVector(MF_y, w1[j], v1[i], Tp);
for (unsigned int t = 0; t < nt; ++t)
{
for (unsigned int tx = 0; tx < nt; tx++)
{
ty = (tx + t) % nt;
tmp[t] += TensorRemove((MF_x[tx]) * (MF_y[ty]));
}
}
}
if (i % 10 == 0)
{
LOG(Message) << "MF for i = " << i << " of " << N << std::endl;
}
}
double NTinv = 1.0 / static_cast<double>(nt);
for (unsigned int t = 0; t < nt; ++t)
{
result.corr[t] = NTinv * tmp[t];
}
saveResult(par().output, "meson", result);
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MContraction_A2AMeson_hpp_
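
Schematically, the double loop in execute() above assembles the two-end pion correlator; in our notation (not taken from this changeset),

\[
C(t) = \frac{1}{T}\sum_{t_x=0}^{T-1}\,\sum_{i,j=0}^{N-1} \Pi_{ij}(t_x)\,\Pi_{ji}(t_x+t),
\qquad
\Pi_{ij}(t) = \sum_{\vec{x}} w_i^{\dagger}(\vec{x},t)\,\Gamma\, v_j(\vec{x},t),
\]

where \(T\) is the temporal extent and \(\Gamma\) the gamma insertion (see the note in the code on the source insertion).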


@@ -0,0 +1,8 @@
#include <Grid/Hadrons/Modules/MContraction/A2AMesonField.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TA2AMesonField<FIMPL>;
template class Grid::Hadrons::MContraction::TA2AMesonField<ZFIMPL>;


@@ -0,0 +1,279 @@
#ifndef Hadrons_MContraction_A2AMesonField_hpp_
#define Hadrons_MContraction_A2AMesonField_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/AllToAllVectors.hpp>
#include <Grid/Hadrons/Modules/MContraction/A2Autils.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* A2AMesonField *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MContraction)
typedef std::pair<Gamma::Algebra, Gamma::Algebra> GammaPair;
class A2AMesonFieldPar : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(A2AMesonFieldPar,
int, cacheBlock,
int, schurBlock,
int, Nmom,
int, N,
int, Nl,
std::string, A2A,
std::string, output);
};
template <typename FImpl>
class TA2AMesonField : public Module<A2AMesonFieldPar>
{
public:
FERM_TYPE_ALIASES(FImpl, );
SOLVER_TYPE_ALIASES(FImpl, );
typedef A2AModesSchurDiagTwo<typename FImpl::FermionField, FMat, Solver> A2ABase;
public:
// constructor
TA2AMesonField(const std::string name);
// destructor
virtual ~TA2AMesonField(void){};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER(A2AMesonField, ARG(TA2AMesonField<FIMPL>), MContraction);
MODULE_REGISTER(ZA2AMesonField, ARG(TA2AMesonField<ZFIMPL>), MContraction);
/******************************************************************************
* TA2AMesonField implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TA2AMesonField<FImpl>::TA2AMesonField(const std::string name)
: Module<A2AMesonFieldPar>(name)
{
}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TA2AMesonField<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().A2A + "_class"};
in.push_back(par().A2A + "_w_high_4d");
in.push_back(par().A2A + "_v_high_4d");
return in;
}
template <typename FImpl>
std::vector<std::string> TA2AMesonField<FImpl>::getOutput(void)
{
std::vector<std::string> out = {};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2AMesonField<FImpl>::setup(void)
{
auto &a2a = envGet(A2ABase, par().A2A + "_class");
int nt = env().getDim(Tp);
int Nl = par().Nl;
int N = par().N;
int Ls_ = env().getObjectLs(par().A2A + "_class");
// Four D fields
envTmp(std::vector<FermionField>, "w", 1, par().schurBlock, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "v", 1, par().schurBlock, FermionField(env().getGrid(1)));
// 5D tmp
envTmpLat(FermionField, "tmp_5d", Ls_);
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2AMesonField<FImpl>::execute(void)
{
LOG(Message) << "Computing A2A meson field" << std::endl;
auto &a2a = envGet(A2ABase, par().A2A + "_class");
// 2+4+4+6 = 16 gammas
// Ordering defined here
std::vector<Gamma::Algebra> gammas ( {
Gamma::Algebra::Gamma5,
Gamma::Algebra::Identity,
Gamma::Algebra::GammaX,
Gamma::Algebra::GammaY,
Gamma::Algebra::GammaZ,
Gamma::Algebra::GammaT,
Gamma::Algebra::GammaXGamma5,
Gamma::Algebra::GammaYGamma5,
Gamma::Algebra::GammaZGamma5,
Gamma::Algebra::GammaTGamma5,
Gamma::Algebra::SigmaXY,
Gamma::Algebra::SigmaXZ,
Gamma::Algebra::SigmaXT,
Gamma::Algebra::SigmaYZ,
Gamma::Algebra::SigmaYT,
Gamma::Algebra::SigmaZT
});
///////////////////////////////////////////////
// Square assumption for now Nl = Nr = N
///////////////////////////////////////////////
int nt = env().getDim(Tp);
int nx = env().getDim(Xp);
int ny = env().getDim(Yp);
int nz = env().getDim(Zp);
int N = par().N;
int Nl = par().Nl;
int ngamma = gammas.size();
int schurBlock = par().schurBlock;
int cacheBlock = par().cacheBlock;
int nmom = par().Nmom;
///////////////////////////////////////////////
// Momentum setup
///////////////////////////////////////////////
GridBase *grid = env().getGrid(1);
std::vector<LatticeComplex> phases(nmom,grid);
for(int m=0;m<nmom;m++){
phases[m] = Complex(1.0); // All zero momentum for now
}
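// (Sketch only, not from this changeset: a nonzero momentum p would be built
//  from the lattice coordinates, along the lines of
//      LatticeComplex coor(grid);
//      LatticeCoordinate(coor, mu);
//      phases[m] = phases[m]*exp(Complex(0.,1.)*(2.*M_PI*p[mu]/L[mu])*coor);
//  accumulated over the spatial directions mu, with L[mu] the lattice extent.)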
Eigen::Tensor<ComplexD,5> mesonField (nmom,ngamma,nt,N,N);
LOG(Message) << "N = Nh+Nl for A2A MesonField is " << N << std::endl;
envGetTmp(std::vector<FermionField>, w);
envGetTmp(std::vector<FermionField>, v);
envGetTmp(FermionField, tmp_5d);
LOG(Message) << "Finding v and w vectors for N = " << N << std::endl;
//////////////////////////////////////////////////////////////////////////
// i,j is the first loop, over schurBlock factors, reusing 5D matrices
// ii,jj is the second loop, over cacheBlock factors, for the high-performance contraction
// iii,jjj are the loops within a cacheBlock
// The total index is the sum of these: i+ii+iii etc...
//////////////////////////////////////////////////////////////////////////
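// Illustration (hypothetical numbers): with N = 100, schurBlock = 40 and
// cacheBlock = 8, the flat index n = 57 decomposes as i = 40 (schur block
// start), ii = 16 (cache block start within it), iii = 1, so n = i+ii+iii.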
double flops = 0.0;
double bytes = 0.0;
double vol = nx*ny*nz*nt;
double t_schur=0;
double t_contr=0;
double t0 = usecond();
int N_i = N;
int N_j = N;
for(int i=0;i<N_i;i+=schurBlock){ //loop over SchurBlocking to suppress 5D matrix overhead
for(int j=0;j<N_j;j+=schurBlock){
///////////////////////////////////////////////////////////////
// Get the W and V vectors for this schurBlock^2 set of terms
///////////////////////////////////////////////////////////////
int N_ii = MIN(N_i-i,schurBlock);
int N_jj = MIN(N_j-j,schurBlock);
t_schur-=usecond();
for(int ii =0;ii < N_ii;ii++) a2a.return_w(i+ii, tmp_5d, w[ii]);
for(int jj =0;jj < N_jj;jj++) a2a.return_v(j+jj, tmp_5d, v[jj]);
t_schur+=usecond();
LOG(Message) << "Found w vectors " << i <<" .. " << i+N_ii-1 << std::endl;
LOG(Message) << "Found v vectors " << j <<" .. " << j+N_jj-1 << std::endl;
///////////////////////////////////////////////////////////////
// Series of cache blocked chunks of the contractions within this SchurBlock
///////////////////////////////////////////////////////////////
for(int ii=0;ii<N_ii;ii+=cacheBlock){
for(int jj=0;jj<N_jj;jj+=cacheBlock){
int N_iii = MIN(N_ii-ii,cacheBlock);
int N_jjj = MIN(N_jj-jj,cacheBlock);
Eigen::Tensor<ComplexD,5> mesonFieldBlocked(nmom,ngamma,nt,N_iii,N_jjj);
t_contr-=usecond();
A2Autils<FImpl>::MesonField(mesonFieldBlocked,
&w[ii],
&v[jj], gammas, phases,Tp);
t_contr+=usecond();
flops += vol * ( 2 * 8.0 + 6.0 + 8.0*nmom) * N_iii*N_jjj*ngamma;
bytes += vol * (12.0 * sizeof(Complex) ) * N_iii*N_jjj
+ vol * ( 2.0 * sizeof(Complex) *nmom ) * N_iii*N_jjj* ngamma;
///////////////////////////////////////////////////////////////
// Copy back to full meson field tensor
///////////////////////////////////////////////////////////////
parallel_for_nest2(int iii=0;iii< N_iii;iii++) {
for(int jjj=0;jjj< N_jjj;jjj++) {
for(int m =0;m< nmom;m++) {
for(int g =0;g< ngamma;g++) {
for(int t =0;t< nt;t++) {
mesonField(m,g,t,i+ii+iii,j+jj+jjj) = mesonFieldBlocked(m,g,t,iii,jjj);
}}}
}}
}}
}}
double nodes=grid->NodeCount();
double t1 = usecond();
LOG(Message) << " Contraction of MesonFields took "<<(t1-t0)/1.0e6<< " seconds " << std::endl;
LOG(Message) << " Schur "<<(t_schur)/1.0e6<< " seconds " << std::endl;
LOG(Message) << " Contr "<<(t_contr)/1.0e6<< " seconds " << std::endl;
/////////////////////////////////////////////////////////////////////////
// Test: Build the pion correlator (two end)
// < PI_ij(t0) PI_ji (t0+t) >
/////////////////////////////////////////////////////////////////////////
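// i.e. (schematically) corr[t] = (1/T) sum_{i,j,t0} M(m,g,t0,i,j)*M(m,g,t0+t,j,i),
// evaluated below for the first momentum (m=0) and g=0 (Gamma5, the pion).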
std::vector<ComplexD> corr(nt,ComplexD(0.0));
for(int i=0;i<N;i++){
for(int j=0;j<N;j++){
int m=0; // first momentum
int g=0; // first gamma in above ordering is gamma5 for pion
for(int t0=0;t0<nt;t0++){
for(int t=0;t<nt;t++){
int tt = (t0+t)%nt;
corr[t] += mesonField(m,g,t0,i,j)* mesonField(m,g,tt,j,i);
}}
}}
for(int t=0;t<nt;t++) corr[t] = corr[t]/ (double)nt;
for(int t=0;t<nt;t++) LOG(Message) << " " << t << " " << corr[t]<<std::endl;
// saveResult(par().output, "meson", result);
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MContraction_A2AMesonField_hpp_


@@ -0,0 +1,8 @@
#include <Grid/Hadrons/Modules/MContraction/A2APionField.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TA2APionField<FIMPL>;
template class Grid::Hadrons::MContraction::TA2APionField<ZFIMPL>;


@@ -0,0 +1,502 @@
#ifndef Hadrons_MContraction_A2APionField_hpp_
#define Hadrons_MContraction_A2APionField_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/AllToAllVectors.hpp>
#include <Grid/Hadrons/Modules/MContraction/A2Autils.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* A2APionField *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MContraction)
typedef std::pair<Gamma::Algebra, Gamma::Algebra> GammaPair;
class A2APionFieldPar : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(A2APionFieldPar,
int, cacheBlock,
int, schurBlock,
int, Nmom,
std::string, A2A_i,
std::string, A2A_j,
std::string, output);
};
template <typename FImpl>
class TA2APionField : public Module<A2APionFieldPar>
{
public:
FERM_TYPE_ALIASES(FImpl, );
SOLVER_TYPE_ALIASES(FImpl, );
typedef typename FImpl::SiteSpinor vobj;
typedef typename vobj::scalar_object sobj;
typedef typename vobj::scalar_type scalar_type;
typedef typename vobj::vector_type vector_type;
typedef iSpinMatrix<vector_type> SpinMatrix_v;
typedef iSpinMatrix<scalar_type> SpinMatrix_s;
typedef iSinglet<vector_type> Scalar_v;
typedef iSinglet<scalar_type> Scalar_s;
typedef A2AModesSchurDiagTwo<typename FImpl::FermionField, FMat, Solver> A2ABase;
public:
// constructor
TA2APionField(const std::string name);
// destructor
virtual ~TA2APionField(void){};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER(A2APionField, ARG(TA2APionField<FIMPL>), MContraction);
MODULE_REGISTER(ZA2APionField, ARG(TA2APionField<ZFIMPL>), MContraction);
/******************************************************************************
* TA2APionField implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TA2APionField<FImpl>::TA2APionField(const std::string name)
: Module<A2APionFieldPar>(name)
{
}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TA2APionField<FImpl>::getInput(void)
{
std::vector<std::string> in;
in.push_back(par().A2A_i + "_class");
in.push_back(par().A2A_i + "_w_high_4d");
in.push_back(par().A2A_i + "_v_high_4d");
in.push_back(par().A2A_j + "_class");
in.push_back(par().A2A_j + "_w_high_4d");
in.push_back(par().A2A_j + "_v_high_4d");
return in;
}
template <typename FImpl>
std::vector<std::string> TA2APionField<FImpl>::getOutput(void)
{
std::vector<std::string> out = {};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2APionField<FImpl>::setup(void)
{
// Four D fields
envTmp(std::vector<FermionField>, "wi", 1, par().schurBlock, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "vi", 1, par().schurBlock, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "wj", 1, par().schurBlock, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "vj", 1, par().schurBlock, FermionField(env().getGrid(1)));
// 5D tmp
int Ls_i = env().getObjectLs(par().A2A_i + "_class");
envTmpLat(FermionField, "tmp_5d", Ls_i);
int Ls_j= env().getObjectLs(par().A2A_j + "_class");
assert ( Ls_i == Ls_j );
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TA2APionField<FImpl>::execute(void)
{
LOG(Message) << "Computing A2A Pion fields" << std::endl;
auto &a2a_i = envGet(A2ABase, par().A2A_i + "_class");
auto &a2a_j = envGet(A2ABase, par().A2A_j + "_class");
///////////////////////////////////////////////
// Square assumption for now Nl = Nr = N
///////////////////////////////////////////////
int nt = env().getDim(Tp);
int nx = env().getDim(Xp);
int ny = env().getDim(Yp);
int nz = env().getDim(Zp);
// int N_i = a2a_i.par().N;
// int N_j = a2a_j.par().N;
int N_i = a2a_i.getN();
int N_j = a2a_j.getN();
int nmom=par().Nmom;
int schurBlock = par().schurBlock;
int cacheBlock = par().cacheBlock;
///////////////////////////////////////////////
// Momentum setup
///////////////////////////////////////////////
GridBase *grid = env().getGrid(1);
std::vector<LatticeComplex> phases(nmom,grid);
for(int m=0;m<nmom;m++){
phases[m] = Complex(1.0); // All zero momentum for now
}
///////////////////////////////////////////////////////////////////////
// i and j represent different flavours or hits, and may have different ranks
// in the general non-square case.
///////////////////////////////////////////////////////////////////////
Eigen::Tensor<ComplexD,4> pionFieldWVmom_ij (nmom,nt,N_i,N_j);
Eigen::Tensor<ComplexD,3> pionFieldWV_ij (nt,N_i,N_j);
Eigen::Tensor<ComplexD,4> pionFieldWVmom_ji (nmom,nt,N_j,N_i);
Eigen::Tensor<ComplexD,3> pionFieldWV_ji (nt,N_j,N_i);
LOG(Message) << "Rank for A2A PionField is " << N_i << " x "<<N_j << std::endl;
envGetTmp(std::vector<FermionField>, wi);
envGetTmp(std::vector<FermionField>, vi);
envGetTmp(std::vector<FermionField>, wj);
envGetTmp(std::vector<FermionField>, vj);
envGetTmp(FermionField, tmp_5d);
LOG(Message) << "Finding v and w vectors " << std::endl;
//////////////////////////////////////////////////////////////////////////
// i,j is the first loop, over schurBlock factors, reusing 5D matrices
// ii,jj is the second loop, over cacheBlock factors, for the high-performance contraction
// iii,jjj are the loops within a cacheBlock
// The total index is the sum of these: i+ii+iii etc...
//////////////////////////////////////////////////////////////////////////
double flops = 0.0;
double bytes = 0.0;
double vol = nx*ny*nz*nt;
double vol3 = nx*ny*nz;
double t_schur=0;
double t_contr_vwm=0;
double t_contr_vw=0;
double tt0 = usecond();
for(int i=0;i<N_i;i+=schurBlock){ //loop over SchurBlocking to suppress 5D matrix overhead
for(int j=0;j<N_j;j+=schurBlock){
///////////////////////////////////////////////////////////////
// Get the W and V vectors for this schurBlock^2 set of terms
///////////////////////////////////////////////////////////////
int N_ii = MIN(N_i-i,schurBlock);
int N_jj = MIN(N_j-j,schurBlock);
t_schur-=usecond();
for(int ii =0;ii < N_ii;ii++) a2a_i.return_w(i+ii, tmp_5d, wi[ii]);
for(int jj =0;jj < N_jj;jj++) a2a_j.return_w(j+jj, tmp_5d, wj[jj]);
for(int ii =0;ii < N_ii;ii++) a2a_i.return_v(i+ii, tmp_5d, vi[ii]);
for(int jj =0;jj < N_jj;jj++) a2a_j.return_v(j+jj, tmp_5d, vj[jj]);
t_schur+=usecond();
LOG(Message) << "Found i w&v vectors " << i <<" .. " << i+N_ii-1 << std::endl;
LOG(Message) << "Found j w&v vectors " << j <<" .. " << j+N_jj-1 << std::endl;
///////////////////////////////////////////////////////////////
// Series of cache blocked chunks of the contractions within this SchurBlock
///////////////////////////////////////////////////////////////
for(int ii=0;ii<N_ii;ii+=cacheBlock){
for(int jj=0;jj<N_jj;jj+=cacheBlock){
int N_iii = MIN(N_ii-ii,cacheBlock);
int N_jjj = MIN(N_jj-jj,cacheBlock);
Eigen::Tensor<ComplexD,4> pionFieldWVmomB_ij(nmom,nt,N_iii,N_jjj);
Eigen::Tensor<ComplexD,4> pionFieldWVmomB_ji(nmom,nt,N_jjj,N_iii);
Eigen::Tensor<ComplexD,3> pionFieldWVB_ij(nt,N_iii,N_jjj);
Eigen::Tensor<ComplexD,3> pionFieldWVB_ji(nt,N_jjj,N_iii);
t_contr_vwm-=usecond();
A2Autils<FImpl>::PionFieldWVmom(pionFieldWVmomB_ij, &wi[ii], &vj[jj], phases,Tp);
A2Autils<FImpl>::PionFieldWVmom(pionFieldWVmomB_ji, &wj[jj], &vi[ii], phases,Tp);
t_contr_vwm+=usecond();
t_contr_vw-=usecond();
A2Autils<FImpl>::PionFieldWV(pionFieldWVB_ij, &wi[ii], &vj[jj],Tp);
A2Autils<FImpl>::PionFieldWV(pionFieldWVB_ji, &wj[jj], &vi[ii],Tp);
t_contr_vw+=usecond();
flops += vol * ( 2 * 8.0 + 6.0 + 8.0*nmom) * N_iii*N_jjj;
bytes += vol * (12.0 * sizeof(Complex) ) * N_iii*N_jjj
+ vol * ( 2.0 * sizeof(Complex) *nmom ) * N_iii*N_jjj;
///////////////////////////////////////////////////////////////
// Copy back to full meson field tensor
///////////////////////////////////////////////////////////////
parallel_for_nest2(int iii=0;iii< N_iii;iii++) {
for(int jjj=0;jjj< N_jjj;jjj++) {
for(int m =0;m< nmom;m++) {
for(int t =0;t< nt;t++) {
pionFieldWVmom_ij(m,t,i+ii+iii,j+jj+jjj) = pionFieldWVmomB_ij(m,t,iii,jjj);
pionFieldWVmom_ji(m,t,j+jj+jjj,i+ii+iii) = pionFieldWVmomB_ji(m,t,jjj,iii);
}}
for(int t =0;t< nt;t++) {
pionFieldWV_ij(t,i+ii+iii,j+jj+jjj) = pionFieldWVB_ij(t,iii,jjj);
pionFieldWV_ji(t,j+jj+jjj,i+ii+iii) = pionFieldWVB_ji(t,jjj,iii);
}
}}
}}
}}
double nodes=grid->NodeCount();
double tt1 = usecond();
LOG(Message) << " Contraction of PionFields took "<<(tt1-tt0)/1.0e6<< " seconds " << std::endl;
LOG(Message) << " Schur "<<(t_schur)/1.0e6<< " seconds " << std::endl;
LOG(Message) << " Contr WVmom "<<(t_contr_vwm)/1.0e6<< " seconds " << std::endl;
LOG(Message) << " Contr WV "<<(t_contr_vw)/1.0e6<< " seconds " << std::endl;
double t_kernel = t_contr_vwm;
LOG(Message) << " Arith "<<flops/(t_kernel)/1.0e3/nodes<< " Gflop/s / node " << std::endl;
LOG(Message) << " Arith "<<bytes/(t_kernel)/1.0e3/nodes<< " GB/s /node " << std::endl;
/////////////////////////////////////////////////////////////////////////
// Test: Build the pion correlator (two end)
// < PI_ij(t0) PI_ji (t0+t) >
/////////////////////////////////////////////////////////////////////////
std::vector<ComplexD> corrMom(nt,ComplexD(0.0));
for(int i=0;i<N_i;i++){
for(int j=0;j<N_j;j++){
int m=0; // first momentum
for(int t0=0;t0<nt;t0++){
for(int t=0;t<nt;t++){
int tt = (t0+t)%nt;
corrMom[t] += pionFieldWVmom_ij(m,t0,i,j)* pionFieldWVmom_ji(m,tt,j,i);
}}
}}
for(int t=0;t<nt;t++) corrMom[t] = corrMom[t]/ (double)nt;
for(int t=0;t<nt;t++) LOG(Message) << " C_vwm " << t << " " << corrMom[t]<<std::endl;
/////////////////////////////////////////////////////////////////////////
// Test: Build the pion correlator (two end) from zero mom contraction
// < PI_ij(t0) PI_ji (t0+t) >
/////////////////////////////////////////////////////////////////////////
std::vector<ComplexD> corr(nt,ComplexD(0.0));
for(int i=0;i<N_i;i++){
for(int j=0;j<N_j;j++){
for(int t0=0;t0<nt;t0++){
for(int t=0;t<nt;t++){
int tt = (t0+t)%nt;
corr[t] += pionFieldWV_ij(t0,i,j)* pionFieldWV_ji(tt,j,i);
}}
}}
for(int t=0;t<nt;t++) corr[t] = corr[t]/ (double)nt;
for(int t=0;t<nt;t++) LOG(Message) << " C_vw " << t << " " << corr[t]<<std::endl;
/////////////////////////////////////////////////////////////////////////
// Test: Build the pion correlator from zero mom contraction with reversed
// charge flow
/////////////////////////////////////////////////////////////////////////
std::vector<ComplexD> corr_wwvv(nt,ComplexD(0.0));
wi.resize(N_i,grid);
vi.resize(N_i,grid);
wj.resize(N_j,grid);
vj.resize(N_j,grid);
for(int i =0;i < N_i;i++) a2a_i.return_v(i, tmp_5d, vi[i]);
for(int i =0;i < N_i;i++) a2a_i.return_w(i, tmp_5d, wi[i]);
for(int j =0;j < N_j;j++) a2a_j.return_v(j, tmp_5d, vj[j]);
for(int j =0;j < N_j;j++) a2a_j.return_w(j, tmp_5d, wj[j]);
Eigen::Tensor<ComplexD,3> pionFieldWW_ij (nt,N_i,N_j);
Eigen::Tensor<ComplexD,3> pionFieldVV_ji (nt,N_j,N_i);
Eigen::Tensor<ComplexD,3> pionFieldWW_ji (nt,N_j,N_i);
Eigen::Tensor<ComplexD,3> pionFieldVV_ij (nt,N_i,N_j);
A2Autils<FImpl>::PionFieldWW(pionFieldWW_ij, &wi[0], &wj[0],Tp);
A2Autils<FImpl>::PionFieldVV(pionFieldVV_ji, &vj[0], &vi[0],Tp);
A2Autils<FImpl>::PionFieldWW(pionFieldWW_ji, &wj[0], &wi[0],Tp);
A2Autils<FImpl>::PionFieldVV(pionFieldVV_ij, &vi[0], &vj[0],Tp);
for(int i=0;i<N_i;i++){
for(int j=0;j<N_j;j++){
for(int t0=0;t0<nt;t0++){
for(int t=0;t<nt;t++){
int tt = (t0+t)%nt;
corr_wwvv[t] += pionFieldWW_ij(t0,i,j)* pionFieldVV_ji(tt,j,i);
corr_wwvv[t] += pionFieldWW_ji(t0,j,i)* pionFieldVV_ij(tt,i,j);
}}
}}
for(int t=0;t<nt;t++) corr_wwvv[t] = corr_wwvv[t] / vol / 2.0; // (ij and ji noise contributions if i != j)
for(int t=0;t<nt;t++) LOG(Message) << " C_wwvv " << t << " " << corr_wwvv[t]<<std::endl;
/////////////////////////////////////////////////////////////////////////
// This is only correct if there are NO low modes
// Use the "ii" case to construct possible Z wall source one end trick
/////////////////////////////////////////////////////////////////////////
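// (One reading, not from the source: with Nl == 0 the w's are exactly the
//  original wall sources, so the diagonal <W_i W_i><V_i V_i> product reduces
//  to the standard one-end-trick estimator; low modes would mix into w and
//  spoil this identification.)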
std::vector<ComplexD> corr_z2(nt,ComplexD(0.0));
Eigen::Tensor<ComplexD,3> pionFieldWW (nt,N_i,N_i);
Eigen::Tensor<ComplexD,3> pionFieldVV (nt,N_i,N_i);
A2Autils<FImpl>::PionFieldWW(pionFieldWW, &wi[0], &wi[0],Tp);
A2Autils<FImpl>::PionFieldVV(pionFieldVV, &vi[0], &vi[0],Tp);
for(int i=0;i<N_i;i++){
for(int t0=0;t0<nt;t0++){
for(int t=0;t<nt;t++){
int tt = (t0+t)%nt;
corr_z2[t] += pionFieldWW(t0,i,i) * pionFieldVV(tt,i,i) /vol ;
}}
}
LOG(Message) << " C_z2 WARNING only correct if Nl == 0 "<<std::endl;
for(int t=0;t<nt;t++) LOG(Message) << " C_z2 " << t << " " << corr_z2[t]<<std::endl;
/////////////////////////////////////////////////////////////////////////
// Test: Build a bag contraction
/////////////////////////////////////////////////////////////////////////
Eigen::Tensor<ComplexD,2> DeltaF2_fig8 (nt,16);
Eigen::Tensor<ComplexD,2> DeltaF2_trtr (nt,16);
Eigen::Tensor<ComplexD,1> denom0 (nt);
Eigen::Tensor<ComplexD,1> denom1 (nt);
const int dT=16;
A2Autils<FImpl>::DeltaFeq2 (dT,dT,DeltaF2_fig8,DeltaF2_trtr,
denom0,denom1,
pionFieldWW_ij,&vi[0],&vj[0],Tp);
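// The ratio logged below is the usual bag normalisation (schematically)
//   B(t) = ( fig8(t) + trtr(t) ) / ( (8/3) * denom0(t) * denom1(t) ),
// with denom0/denom1 the two denominator factors returned by DeltaFeq2.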
{
int g=0; // O_{VV+AA}
for(int t=0;t<nt;t++)
LOG(Message) << " Bag [" << t << ","<<g<<"] "
<< (DeltaF2_fig8(t,g)+DeltaF2_trtr(t,g))
/ ( 8.0/3.0 * denom0[t]*denom1[t])
<<std::endl;
}
/////////////////////////////////////////////////////////////////////////
// Test: Build a bag contraction the Z2 way
// Build a wall bag comparison assuming no low modes
/////////////////////////////////////////////////////////////////////////
LOG(Message) << " Bag_z2 WARNING only correct if Nl == 0 "<<std::endl;
int t0=0;
int t1=dT;
int Nl=0;
LatticePropagator Qd0(grid);
LatticePropagator Qd1(grid);
LatticePropagator Qs0(grid);
LatticePropagator Qs1(grid);
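// High-mode v vectors are assumed packed as 12 spin-colour components per
// source timeslice: component (s,c) of the wall source at time t sits at
// index Nl + 12*t + 3*s + c (no low modes here, Nl = 0), as unpacked below.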
for(int s=0;s<4;s++){
for(int c=0;c<3;c++){
int idx0 = Nl+t0*12+s*3+c;
int idx1 = Nl+t1*12+s*3+c;
FermToProp<FImpl>(Qd0, vi[idx0], s, c);
FermToProp<FImpl>(Qd1, vi[idx1], s, c);
FermToProp<FImpl>(Qs0, vj[idx0], s, c);
FermToProp<FImpl>(Qs1, vj[idx1], s, c);
}
}
std::vector<Gamma::Algebra> gammas ( {
Gamma::Algebra::GammaX,
Gamma::Algebra::GammaY,
Gamma::Algebra::GammaZ,
Gamma::Algebra::GammaT,
Gamma::Algebra::GammaXGamma5,
Gamma::Algebra::GammaYGamma5,
Gamma::Algebra::GammaZGamma5,
Gamma::Algebra::GammaTGamma5,
Gamma::Algebra::Identity,
Gamma::Algebra::Gamma5,
Gamma::Algebra::SigmaXY,
Gamma::Algebra::SigmaXZ,
Gamma::Algebra::SigmaXT,
Gamma::Algebra::SigmaYZ,
Gamma::Algebra::SigmaYT,
Gamma::Algebra::SigmaZT
});
auto G5 = Gamma::Algebra::Gamma5;
LatticePropagator anti_d0 = adj( Gamma(G5) * Qd0 * Gamma(G5));
LatticePropagator anti_d1 = adj( Gamma(G5) * Qd1 * Gamma(G5));
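// Antiquark propagators via gamma5-hermiticity: Sbar(x,y) = g5 S(y,x)^dag g5.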
LatticeComplex TR1(grid);
LatticeComplex TR2(grid);
LatticeComplex Wick1(grid);
LatticeComplex Wick2(grid);
LatticePropagator PR1(grid);
LatticePropagator PR2(grid);
PR1 = Qs0 * Gamma(G5) * anti_d0;
PR2 = Qs1 * Gamma(G5) * anti_d1;
for(int g=0;g<Nd*Nd;g++){
auto g1 = gammas[g];
Gamma G1 (g1);
TR1 = trace( PR1 * G1 );
TR2 = trace( PR2 * G1 );
Wick1 = TR1*TR2;
Wick2 = trace( PR1* G1 * PR2 * G1 );
std::vector<TComplex> C1;
std::vector<TComplex> C2;
std::vector<TComplex> C3;
sliceSum(Wick1,C1, Tp);
sliceSum(Wick2,C2, Tp);
sliceSum(TR1 ,C3, Tp);
/*
if(g<5){
for(int t=0;t<C1.size();t++){
LOG(Message) << " Wick1["<<g<<","<<t<< "] "<< C1[t]<<std::endl;
}
for(int t=0;t<C2.size();t++){
LOG(Message) << " Wick2["<<g<<","<<t<< "] "<< C2[t]<<std::endl;
}
}
if( (g==9) || (g==7) ){ // P and At in above ordering
for(int t=0;t<C3.size();t++){
LOG(Message) << " <G|P>["<<g<<","<<t<< "] "<< C3[t]<<std::endl;
}
}
*/
}
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MContraction_A2APionField_hpp_

File diff suppressed because it is too large


@@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MContraction/Baryon.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/Baryon.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TBaryon<FIMPL,FIMPL,FIMPL>;


@@ -68,7 +68,7 @@ public:
// constructor
TBaryon(const std::string name);
// destructor
virtual ~TBaryon(void) = default;
virtual ~TBaryon(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@@ -79,7 +79,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Baryon, ARG(TBaryon<FIMPL, FIMPL, FIMPL>), MContraction);
MODULE_REGISTER_TMP(Baryon, ARG(TBaryon<FIMPL, FIMPL, FIMPL>), MContraction);
/******************************************************************************
* TBaryon implementation *
@@ -122,7 +122,6 @@ void TBaryon<FImpl1, FImpl2, FImpl3>::execute(void)
<< " quarks '" << par().q1 << "', '" << par().q2 << "', and '"
<< par().q3 << "'" << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q1 = envGet(PropagatorField1, par().q1);
auto &q2 = envGet(PropagatorField2, par().q2);
auto &q3 = envGet(PropagatorField3, par().q2);
@@ -131,7 +130,7 @@ void TBaryon<FImpl1, FImpl2, FImpl3>::execute(void)
// FIXME: do contractions
// write(writer, "meson", result);
// saveResult(par().output, "meson", result);
}
END_MODULE_NAMESPACE


@@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MContraction/DiscLoop.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/DiscLoop.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TDiscLoop<FIMPL>;


@@ -65,7 +65,7 @@ public:
// constructor
TDiscLoop(const std::string name);
// destructor
virtual ~TDiscLoop(void) = default;
virtual ~TDiscLoop(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@@ -76,7 +76,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(DiscLoop, TDiscLoop<FIMPL>, MContraction);
MODULE_REGISTER_TMP(DiscLoop, TDiscLoop<FIMPL>, MContraction);
/******************************************************************************
* TDiscLoop implementation *
@@ -119,7 +119,6 @@ void TDiscLoop<FImpl>::execute(void)
<< "' using '" << par().q_loop << "' with " << par().gamma
<< " insertion." << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q_loop = envGet(PropagatorField, par().q_loop);
Gamma gamma(par().gamma);
std::vector<TComplex> buf;
@@ -128,15 +127,13 @@ void TDiscLoop<FImpl>::execute(void)
envGetTmp(LatticeComplex, c);
c = trace(gamma*q_loop);
sliceSum(c, buf, Tp);
result.gamma = par().gamma;
result.corr.resize(buf.size());
for (unsigned int t = 0; t < buf.size(); ++t)
{
result.corr[t] = TensorRemove(buf[t]);
}
write(writer, "disc", result);
saveResult(par().output, "disc", result);
}
END_MODULE_NAMESPACE


@@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MContraction/Gamma3pt.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/Gamma3pt.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TGamma3pt<FIMPL,FIMPL,FIMPL>;


@@ -96,7 +96,7 @@ public:
// constructor
TGamma3pt(const std::string name);
// destructor
virtual ~TGamma3pt(void) = default;
virtual ~TGamma3pt(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@@ -107,7 +107,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Gamma3pt, ARG(TGamma3pt<FIMPL, FIMPL, FIMPL>), MContraction);
MODULE_REGISTER_TMP(Gamma3pt, ARG(TGamma3pt<FIMPL, FIMPL, FIMPL>), MContraction);
/******************************************************************************
* TGamma3pt implementation *
@@ -153,7 +153,6 @@ void TGamma3pt<FImpl1, FImpl2, FImpl3>::execute(void)
// Initialise variables. q2 and q3 are normal propagators, q1 may be
// sink smeared.
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q1 = envGet(SlicedPropagator1, par().q1);
auto &q2 = envGet(PropagatorField2, par().q2);
auto &q3 = envGet(PropagatorField2, par().q3);
@@ -175,8 +174,7 @@
{
result.corr[t] = TensorRemove(buf[t]);
}
write(writer, "gamma3pt", result);
saveResult(par().output, "gamma3pt", result);
}
END_MODULE_NAMESPACE


@@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MContraction/Meson.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/Meson.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TMeson<FIMPL,FIMPL>;


@@ -45,8 +45,8 @@ BEGIN_HADRONS_NAMESPACE
- q1: input propagator 1 (string)
- q2: input propagator 2 (string)
- gammas: gamma products to insert at sink & source, pairs of gamma matrices
(space-separated strings) in angled brackets (i.e. <g_sink g_src>),
in a sequence (e.g. "<Gamma5 Gamma5><Gamma5 GammaT>").
(space-separated strings) in round brackets (i.e. (g_sink g_src)),
in a sequence (e.g. "(Gamma5 Gamma5)(Gamma5 GammaT)").
Special values: "all" - perform all possible contractions.
- sink: module to compute the sink to use in contraction (string).
@@ -90,7 +90,7 @@ public:
// constructor
TMeson(const std::string name);
// destructor
virtual ~TMeson(void) = default;
virtual ~TMeson(void) {};
// dependencies/products
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@@ -102,7 +102,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Meson, ARG(TMeson<FIMPL, FIMPL>), MContraction);
MODULE_REGISTER_TMP(Meson, ARG(TMeson<FIMPL, FIMPL>), MContraction);
/******************************************************************************
* TMeson implementation *
@@ -171,8 +171,7 @@ void TMeson<FImpl1, FImpl2>::execute(void)
LOG(Message) << "Computing meson contractions '" << getName() << "' using"
<< " quarks '" << par().q1 << "' and '" << par().q2 << "'"
<< std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
std::vector<TComplex> buf;
std::vector<Result> result;
Gamma g5(Gamma::Algebra::Gamma5);
@@ -239,7 +238,7 @@
}
}
}
write(writer, "meson", result);
saveResult(par().output, "meson", result);
}
END_MODULE_NAMESPACE


@@ -0,0 +1,8 @@
#include <Grid/Hadrons/Modules/MContraction/MesonFieldGamma.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TMesonFieldGamma<FIMPL>;
template class Grid::Hadrons::MContraction::TMesonFieldGamma<ZFIMPL>;


@@ -0,0 +1,269 @@
#ifndef Hadrons_MContraction_MesonFieldGamma_hpp_
#define Hadrons_MContraction_MesonFieldGamma_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/AllToAllVectors.hpp>
#include <Grid/Hadrons/AllToAllReduction.hpp>
#include <Grid/Grid_Eigen_Dense.h>
#include <fstream>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* MesonFieldGamma *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MContraction)
class MesonFieldPar : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(MesonFieldPar,
int, Nl,
int, N,
int, Nblock,
std::string, A2A1,
std::string, A2A2,
std::string, gammas,
std::string, output);
};
template <typename FImpl>
class TMesonFieldGamma : public Module<MesonFieldPar>
{
public:
FERM_TYPE_ALIASES(FImpl, );
SOLVER_TYPE_ALIASES(FImpl, );
typedef A2AModesSchurDiagTwo<typename FImpl::FermionField, FMat, Solver> A2ABase;
class Result : Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(Result,
Gamma::Algebra, gamma,
std::vector<std::vector<std::vector<ComplexD>>>, MesonField);
};
public:
// constructor
TMesonFieldGamma(const std::string name);
// destructor
virtual ~TMesonFieldGamma(void){};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
virtual void parseGammaString(std::vector<Gamma::Algebra> &gammaList);
virtual void vectorOfWs(std::vector<FermionField> &w, int i, int Nblock, FermionField &tmpw_5d, std::vector<FermionField> &vec_w);
virtual void vectorOfVs(std::vector<FermionField> &v, int j, int Nblock, FermionField &tmpv_5d, std::vector<FermionField> &vec_v);
virtual void gammaMult(std::vector<FermionField> &v, Gamma gamma);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER(MesonFieldGamma, ARG(TMesonFieldGamma<FIMPL>), MContraction);
MODULE_REGISTER(ZMesonFieldGamma, ARG(TMesonFieldGamma<ZFIMPL>), MContraction);
/******************************************************************************
* TMesonFieldGamma implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TMesonFieldGamma<FImpl>::TMesonFieldGamma(const std::string name)
: Module<MesonFieldPar>(name)
{
}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TMesonFieldGamma<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().A2A1 + "_class", par().A2A2 + "_class"};
in.push_back(par().A2A1 + "_w_high_4d");
in.push_back(par().A2A2 + "_v_high_4d");
return in;
}
template <typename FImpl>
std::vector<std::string> TMesonFieldGamma<FImpl>::getOutput(void)
{
std::vector<std::string> out = {};
return out;
}
template <typename FImpl>
void TMesonFieldGamma<FImpl>::parseGammaString(std::vector<Gamma::Algebra> &gammaList)
{
gammaList.clear();
// Determine gamma matrices to insert at source/sink.
if (par().gammas.compare("all") == 0)
{
// Do all contractions.
for (unsigned int i = 1; i < Gamma::nGamma; i += 2)
{
gammaList.push_back(((Gamma::Algebra)i));
}
}
else
{
// Parse individual contractions from input string.
gammaList = strToVec<Gamma::Algebra>(par().gammas);
}
}
template <typename FImpl>
void TMesonFieldGamma<FImpl>::vectorOfWs(std::vector<FermionField> &w, int i, int Nblock, FermionField &tmpw_5d, std::vector<FermionField> &vec_w)
{
for (unsigned int ni = 0; ni < Nblock; ni++)
{
vec_w[ni] = w[i + ni];
}
}
template <typename FImpl>
void TMesonFieldGamma<FImpl>::vectorOfVs(std::vector<FermionField> &v, int j, int Nblock, FermionField &tmpv_5d, std::vector<FermionField> &vec_v)
{
for (unsigned int nj = 0; nj < Nblock; nj++)
{
vec_v[nj] = v[j+nj];
}
}
template <typename FImpl>
void TMesonFieldGamma<FImpl>::gammaMult(std::vector<FermionField> &v, Gamma gamma)
{
int Nblock = v.size();
for (unsigned int nj = 0; nj < Nblock; nj++)
{
v[nj] = gamma * v[nj];
}
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TMesonFieldGamma<FImpl>::setup(void)
{
int nt = env().getDim(Tp);
int N = par().N;
int Nblock = par().Nblock;
int Ls_ = env().getObjectLs(par().A2A1 + "_class");
envTmpLat(FermionField, "tmpv_5d", Ls_);
envTmpLat(FermionField, "tmpw_5d", Ls_);
envTmp(std::vector<FermionField>, "w", 1, N, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "v", 1, N, FermionField(env().getGrid(1)));
envTmp(Eigen::MatrixXcd, "MF", 1, Eigen::MatrixXcd::Zero(nt, N * N));
envTmp(std::vector<FermionField>, "w_block", 1, Nblock, FermionField(env().getGrid(1)));
envTmp(std::vector<FermionField>, "v_block", 1, Nblock, FermionField(env().getGrid(1)));
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TMesonFieldGamma<FImpl>::execute(void)
{
LOG(Message) << "Computing A2A meson field for gamma = " << par().gammas << ", taking w from " << par().A2A1 << " and v from " << par().A2A2 << std::endl;
int N = par().N;
int nt = env().getDim(Tp);
int Nblock = par().Nblock;
std::vector<Result> result;
std::vector<Gamma::Algebra> gammaResultList;
std::vector<Gamma> gammaList;
parseGammaString(gammaResultList);
result.resize(gammaResultList.size());
Gamma g5(Gamma::Algebra::Gamma5);
gammaList.resize(gammaResultList.size(), g5);
for (unsigned int i = 0; i < result.size(); ++i)
{
result[i].gamma = gammaResultList[i];
result[i].MesonField.resize(N, std::vector<std::vector<ComplexD>>(N, std::vector<ComplexD>(nt)));
Gamma gamma(gammaResultList[i]);
gammaList[i] = gamma;
}
auto &a2a1 = envGet(A2ABase, par().A2A1 + "_class");
auto &a2a2 = envGet(A2ABase, par().A2A2 + "_class");
envGetTmp(FermionField, tmpv_5d);
envGetTmp(FermionField, tmpw_5d);
envGetTmp(std::vector<FermionField>, v);
envGetTmp(std::vector<FermionField>, w);
LOG(Message) << "Finding v and w vectors for N = " << N << std::endl;
for (int i = 0; i < N; i++)
{
a2a2.return_v(i, tmpv_5d, v[i]);
a2a1.return_w(i, tmpw_5d, w[i]);
}
LOG(Message) << "Found v and w vectors for N = " << N << std::endl;
std::vector<std::vector<ComplexD>> MesonField_ij;
LOG(Message) << "Before blocked MFs, Nblock = " << Nblock << std::endl;
envGetTmp(std::vector<FermionField>, v_block);
envGetTmp(std::vector<FermionField>, w_block);
MesonField_ij.resize(Nblock * Nblock, std::vector<ComplexD>(nt));
envGetTmp(Eigen::MatrixXcd, MF);
LOG(Message) << "Before blocked MFs, Nblock = " << Nblock << std::endl;
for (unsigned int i = 0; i < N; i += Nblock)
{
vectorOfWs(w, i, Nblock, tmpw_5d, w_block);
for (unsigned int j = 0; j < N; j += Nblock)
{
vectorOfVs(v, j, Nblock, tmpv_5d, v_block);
for (unsigned int k = 0; k < result.size(); k++)
{
gammaMult(v_block, gammaList[k]);
sliceInnerProductMesonField(MesonField_ij, w_block, v_block, Tp);
for (unsigned int nj = 0; nj < Nblock; nj++)
{
for (unsigned int ni = 0; ni < Nblock; ni++)
{
MF.col((i + ni) + (j + nj) * N) = Eigen::VectorXcd::Map(&MesonField_ij[nj * Nblock + ni][0], MesonField_ij[nj * Nblock + ni].size());
}
}
}
}
if (i % 10 == 0)
{
LOG(Message) << "MF for i = " << i << " of " << N << std::endl;
}
}
LOG(Message) << "Before Global sum, Nblock = " << Nblock << std::endl;
v_block[0]._grid->GlobalSumVector(MF.data(), MF.size());
LOG(Message) << "After Global sum, Nblock = " << Nblock << std::endl;
for (unsigned int i = 0; i < N; i++)
{
for (unsigned int j = 0; j < N; j++)
{
for (unsigned int k = 0; k < result.size(); k++)
{
for (unsigned int t = 0; t < nt; t++)
{
result[k].MesonField[i][j][t] = MF.col(i + N * j)[t];
}
}
}
}
saveResult(par().output, "meson", result);
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MContraction_MesonFieldGamma_hpp_


@@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MContraction/WardIdentity.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MContraction/WardIdentity.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MContraction;
template class Grid::Hadrons::MContraction::TWardIdentity<FIMPL>;


@@ -71,7 +71,7 @@ public:
// constructor
TWardIdentity(const std::string name);
// destructor
virtual ~TWardIdentity(void) = default;
virtual ~TWardIdentity(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@@ -84,7 +84,7 @@ private:
unsigned int Ls_;
};
MODULE_REGISTER_NS(WardIdentity, TWardIdentity<FIMPL>, MContraction);
MODULE_REGISTER_TMP(WardIdentity, TWardIdentity<FIMPL>, MContraction);
/******************************************************************************
* TWardIdentity implementation *
@@ -119,7 +119,7 @@ void TWardIdentity<FImpl>::setup(void)
Ls_ = env().getObjectLs(par().q);
if (Ls_ != env().getObjectLs(par().action))
{
HADRON_ERROR(Size, "Ls mismatch between quark action and propagator");
HADRONS_ERROR(Size, "Ls mismatch between quark action and propagator");
}
envTmpLat(PropagatorField, "tmp");
envTmpLat(PropagatorField, "vector_WI");


@@ -97,7 +97,7 @@ public:\
/* constructor */ \
T##modname(const std::string name);\
/* destructor */ \
virtual ~T##modname(void) = default;\
virtual ~T##modname(void) {};\
/* dependency relation */ \
virtual std::vector<std::string> getInput(void);\
virtual std::vector<std::string> getOutput(void);\
@@ -109,7 +109,7 @@ protected:\
/* execution */ \
virtual void execute(void);\
};\
MODULE_REGISTER_NS(modname, T##modname, MContraction);
MODULE_REGISTER(modname, T##modname, MContraction);
END_MODULE_NAMESPACE


@@ -104,7 +104,6 @@ void TWeakHamiltonianEye::execute(void)
<< par().q2 << ", '" << par().q3 << "' and '" << par().q4
<< "'." << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q1 = envGet(SlicedPropagator, par().q1);
auto &q2 = envGet(PropagatorField, par().q2);
auto &q3 = envGet(PropagatorField, par().q3);
@@ -147,5 +146,6 @@ void TWeakHamiltonianEye::execute(void)
SUM_MU(expbuf, E_body[mu]*E_loop[mu])
MAKE_DIAG(expbuf, corrbuf, result[E_diag], "HW_E")
write(writer, "HW_Eye", result);
// IO
saveResult(par().output, "HW_Eye", result);
}


@@ -104,7 +104,6 @@ void TWeakHamiltonianNonEye::execute(void)
<< par().q2 << ", '" << par().q3 << "' and '" << par().q4
<< "'." << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q1 = envGet(PropagatorField, par().q1);
auto &q2 = envGet(PropagatorField, par().q2);
auto &q3 = envGet(PropagatorField, par().q3);
@@ -144,5 +143,6 @@ void TWeakHamiltonianNonEye::execute(void)
SUM_MU(expbuf, W_i_side_loop[mu]*W_f_side_loop[mu])
MAKE_DIAG(expbuf, corrbuf, result[W_diag], "HW_W")
write(writer, "HW_NonEye", result);
// IO
saveResult(par().output, "HW_NonEye", result);
}


@@ -104,7 +104,6 @@ void TWeakNeutral4ptDisc::execute(void)
<< par().q2 << ", '" << par().q3 << "' and '" << par().q4
<< "'." << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
auto &q1 = envGet(PropagatorField, par().q1);
auto &q2 = envGet(PropagatorField, par().q2);
auto &q3 = envGet(PropagatorField, par().q3);
@@ -138,5 +137,6 @@ void TWeakNeutral4ptDisc::execute(void)
expbuf *= curr;
MAKE_DIAG(expbuf, corrbuf, result[neut_disc_2_diag], "HW_disc0_2")
write(writer, "HW_disc0", result);
// IO
saveResult(par().output, "HW_disc0", result);
}


@@ -0,0 +1,36 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MFermion/FreeProp.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: Vera Guelpers <V.M.Guelpers@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MFermion/FreeProp.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MFermion;
template class Grid::Hadrons::MFermion::TFreeProp<FIMPL>;

View File

@ -0,0 +1,187 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MFermion/FreeProp.hpp
Copyright (C) 2015-2018
Author: Vera Guelpers <V.M.Guelpers@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_MFermion_FreeProp_hpp_
#define Hadrons_MFermion_FreeProp_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* FreeProp *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MFermion)
class FreePropPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(FreePropPar,
std::string, source,
std::string, action,
double, mass,
std::string, twist);
};
template <typename FImpl>
class TFreeProp: public Module<FreePropPar>
{
public:
FG_TYPE_ALIASES(FImpl,);
public:
// constructor
TFreeProp(const std::string name);
// destructor
virtual ~TFreeProp(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
protected:
// setup
virtual void setup(void);
// execution
virtual void execute(void);
private:
unsigned int Ls_;
};
MODULE_REGISTER_TMP(FreeProp, TFreeProp<FIMPL>, MFermion);
/******************************************************************************
* TFreeProp implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename FImpl>
TFreeProp<FImpl>::TFreeProp(const std::string name)
: Module<FreePropPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename FImpl>
std::vector<std::string> TFreeProp<FImpl>::getInput(void)
{
std::vector<std::string> in = {par().source, par().action};
return in;
}
template <typename FImpl>
std::vector<std::string> TFreeProp<FImpl>::getOutput(void)
{
std::vector<std::string> out = {getName(), getName() + "_5d"};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename FImpl>
void TFreeProp<FImpl>::setup(void)
{
Ls_ = env().getObjectLs(par().action);
envCreateLat(PropagatorField, getName());
envTmpLat(FermionField, "source", Ls_);
envTmpLat(FermionField, "sol", Ls_);
envTmpLat(FermionField, "tmp");
if (Ls_ > 1)
{
envCreateLat(PropagatorField, getName() + "_5d", Ls_);
}
}
// execution ///////////////////////////////////////////////////////////////////
template <typename FImpl>
void TFreeProp<FImpl>::execute(void)
{
LOG(Message) << "Computing free fermion propagator '" << getName() << "'"
<< std::endl;
std::string propName = (Ls_ == 1) ? getName() : (getName() + "_5d");
auto &prop = envGet(PropagatorField, propName);
auto &fullSrc = envGet(PropagatorField, par().source);
auto &mat = envGet(FMat, par().action);
RealD mass = par().mass;
envGetTmp(FermionField, source);
envGetTmp(FermionField, sol);
envGetTmp(FermionField, tmp);
LOG(Message) << "Calculating a free Propagator with mass " << mass
<< " using the action '" << par().action
<< "' on source '" << par().source << "'" << std::endl;
for (unsigned int s = 0; s < Ns; ++s)
for (unsigned int c = 0; c < FImpl::Dimension; ++c)
{
LOG(Message) << "Calculation for spin= " << s << ", color= " << c
<< std::endl;
// source conversion for 4D sources
if (!env().isObject5d(par().source))
{
if (Ls_ == 1)
{
PropToFerm<FImpl>(source, fullSrc, s, c);
}
else
{
PropToFerm<FImpl>(tmp, fullSrc, s, c);
mat.ImportPhysicalFermionSource(tmp, source);
}
}
// source conversion for 5D sources
else
{
if (Ls_ != env().getObjectLs(par().source))
{
HADRONS_ERROR(Size, "Ls mismatch between quark action and source");
}
else
{
PropToFerm<FImpl>(source, fullSrc, s, c);
}
}
sol = zero;
std::vector<Real> twist = strToVec<Real>(par().twist);
if(twist.size() != Nd) HADRONS_ERROR(Size, "number of twist angles does not match number of dimensions");
mat.FreePropagator(source,sol,mass,twist);
FermToProp<FImpl>(prop, sol, s, c);
// create 4D propagators from 5D one if necessary
if (Ls_ > 1)
{
PropagatorField &p4d = envGet(PropagatorField, getName());
mat.ExportPhysicalFermionSolution(sol, tmp);
FermToProp<FImpl>(p4d, tmp, s, c);
}
}
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MFermion_FreeProp_hpp_
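A hypothetical wiring of this module, following the usual Hadrons test-program pattern; the application object and the "pt"/"FreeDWF" module names are placeholders, not part of this diff:

// sketch: create a free propagator from a point source and a free action
MFermion::FreeProp::Par fpPar;
fpPar.source = "pt";              // some source module created earlier
fpPar.action = "FreeDWF";         // a fermion action module
fpPar.mass   = 0.1;
fpPar.twist  = "0. 0. 0. 0.5";    // Nd angles, parsed by strToVec<Real>
application.createModule<MFermion::FreeProp>("Qfree", fpPar);

Conventionally, a twist \(\theta_j\) imposes \(\psi(x + L_j\hat\jmath) = e^{2\pi i\theta_j}\psi(x)\) and shifts the allowed momenta to \(p_j = 2\pi(n_j + \theta_j)/L_j\), which is why the module insists on exactly Nd angles.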

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MFermion/GaugeProp.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MFermion/GaugeProp.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MFermion;
template class Grid::Hadrons::MFermion::TGaugeProp<FIMPL>;
template class Grid::Hadrons::MFermion::TGaugeProp<ZFIMPL>;

View File

@ -7,7 +7,9 @@ Source file: extras/Hadrons/Modules/MFermion/GaugeProp.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: Guido Cossu <guido.cossu@ed.ac.uk>
Author: Lanny91 <andrew.lawson@gmail.com>
Author: pretidav <david.preti@csic.es>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -33,30 +35,10 @@ See the full license in the file "LICENSE" in the top level distribution directo
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/Solver.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* 5D -> 4D and 4D -> 5D conversions. *
******************************************************************************/
template<class vobj> // Note that 5D object is modified.
inline void make_4D(Lattice<vobj> &in_5d, Lattice<vobj> &out_4d, int Ls)
{
axpby_ssp_pminus(in_5d, 0., in_5d, 1., in_5d, 0, 0);
axpby_ssp_pplus(in_5d, 1., in_5d, 1., in_5d, 0, Ls-1);
ExtractSlice(out_4d, in_5d, 0, 0);
}
template<class vobj>
inline void make_5D(Lattice<vobj> &in_4d, Lattice<vobj> &out_5d, int Ls)
{
out_5d = zero;
InsertSlice(in_4d, out_5d, 0, 0);
InsertSlice(in_4d, out_5d, Ls-1, 0);
axpby_ssp_pplus(out_5d, 0., out_5d, 1., out_5d, 0, 0);
axpby_ssp_pminus(out_5d, 0., out_5d, 1., out_5d, Ls-1, Ls-1);
}
/******************************************************************************
* GaugeProp *
******************************************************************************/
@ -74,12 +56,13 @@ template <typename FImpl>
class TGaugeProp: public Module<GaugePropPar>
{
public:
FGS_TYPE_ALIASES(FImpl,);
FG_TYPE_ALIASES(FImpl,);
SOLVER_TYPE_ALIASES(FImpl,);
public:
// constructor
TGaugeProp(const std::string name);
// destructor
virtual ~TGaugeProp(void) = default;
virtual ~TGaugeProp(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -90,10 +73,12 @@ protected:
virtual void execute(void);
private:
unsigned int Ls_;
SolverFn *solver_{nullptr};
Solver *solver_{nullptr};
};
MODULE_REGISTER_NS(GaugeProp, TGaugeProp<FIMPL>, MFermion);
MODULE_REGISTER_TMP(GaugeProp, TGaugeProp<FIMPL>, MFermion);
MODULE_REGISTER_TMP(ZGaugeProp, TGaugeProp<ZFIMPL>, MFermion);
/******************************************************************************
* TGaugeProp implementation *
******************************************************************************/
@ -145,7 +130,8 @@ void TGaugeProp<FImpl>::execute(void)
std::string propName = (Ls_ == 1) ? getName() : (getName() + "_5d");
auto &prop = envGet(PropagatorField, propName);
auto &fullSrc = envGet(PropagatorField, par().source);
auto &solver = envGet(SolverFn, par().solver);
auto &solver = envGet(Solver, par().solver);
auto &mat = solver.getFMat();
envGetTmp(FermionField, source);
envGetTmp(FermionField, sol);
@ -153,11 +139,12 @@ void TGaugeProp<FImpl>::execute(void)
LOG(Message) << "Inverting using solver '" << par().solver
<< "' on source '" << par().source << "'" << std::endl;
for (unsigned int s = 0; s < Ns; ++s)
for (unsigned int c = 0; c < FImpl::Dimension; ++c)
for (unsigned int c = 0; c < FImpl::Dimension; ++c)
{
LOG(Message) << "Inversion for spin= " << s << ", color= " << c
<< std::endl;
// source conversion for 4D sources
LOG(Message) << "Import source" << std::endl;
if (!env().isObject5d(par().source))
{
if (Ls_ == 1)
@ -167,7 +154,7 @@ void TGaugeProp<FImpl>::execute(void)
else
{
PropToFerm<FImpl>(tmp, fullSrc, s, c);
make_5D(tmp, source, Ls_);
mat.ImportPhysicalFermionSource(tmp, source);
}
}
// source conversion for 5D sources
@ -175,21 +162,23 @@ void TGaugeProp<FImpl>::execute(void)
{
if (Ls_ != env().getObjectLs(par().source))
{
HADRON_ERROR(Size, "Ls mismatch between quark action and source");
HADRONS_ERROR(Size, "Ls mismatch between quark action and source");
}
else
{
PropToFerm<FImpl>(source, fullSrc, s, c);
}
}
LOG(Message) << "Solve" << std::endl;
sol = zero;
solver(sol, source);
LOG(Message) << "Export solution" << std::endl;
FermToProp<FImpl>(prop, sol, s, c);
// create 4D propagators from 5D one if necessary
if (Ls_ > 1)
{
PropagatorField &p4d = envGet(PropagatorField, getName());
make_4D(sol, tmp, Ls_);
mat.ExportPhysicalFermionSolution(sol, tmp);
FermToProp<FImpl>(p4d, tmp, s, c);
}
}
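The deleted make_4D/make_5D helpers and their replacements implement the same domain-wall surface projections; with \(P_\pm = \tfrac{1}{2}(1 \pm \gamma_5)\), the axpby_ssp_pplus/pminus calls in the removed block amount to

\[
q^{(5)}_{s=0} = P_+\, q^{(4)}, \quad q^{(5)}_{s=L_s-1} = P_-\, q^{(4)}
\quad\text{(source import)},
\qquad
q^{(4)} = P_-\, q^{(5)}_{s=0} + P_+\, q^{(5)}_{s=L_s-1}
\quad\text{(solution export)},
\]

now delegated to ImportPhysicalFermionSource/ExportPhysicalFermionSolution so that each action supplies its own boundary conventions.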

View File

@ -4,9 +4,11 @@ Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MGauge/FundtoHirep.cc
Copyright (C) 2015
Copyright (C) 2016
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: Guido Cossu <guido.cossu@ed.ac.uk>
Author: pretidav <david.preti@csic.es>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -42,7 +44,8 @@ TFundtoHirep<Rep>::TFundtoHirep(const std::string name)
template <class Rep>
std::vector<std::string> TFundtoHirep<Rep>::getInput(void)
{
std::vector<std::string> in;
std::vector<std::string> in = {par().gaugeconf};
return in;
}
@ -50,6 +53,7 @@ template <class Rep>
std::vector<std::string> TFundtoHirep<Rep>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
@ -57,19 +61,19 @@ std::vector<std::string> TFundtoHirep<Rep>::getOutput(void)
template <typename Rep>
void TFundtoHirep<Rep>::setup(void)
{
env().template registerLattice<typename Rep::LatticeField>(getName());
envCreateLat(Rep::LatticeField, getName());
}
// execution ///////////////////////////////////////////////////////////////////
template <class Rep>
void TFundtoHirep<Rep>::execute(void)
{
auto &U = *env().template getObject<LatticeGaugeField>(par().gaugeconf);
LOG(Message) << "Transforming Representation" << std::endl;
auto &U = envGet(LatticeGaugeField, par().gaugeconf);
auto &URep = envGet(Rep::LatticeField, getName());
Rep TargetRepresentation(U._grid);
TargetRepresentation.update_representation(U);
typename Rep::LatticeField &URep = *env().template createLattice<typename Rep::LatticeField>(getName());
URep = TargetRepresentation.U;
}

View File

@ -4,11 +4,10 @@ Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MGauge/FundtoHirep.hpp
Copyright (C) 2015
Copyright (C) 2016
Copyright (C) 2015-2018
Author: David Preti <david.preti@to.infn.it>
Guido Cossu <guido.cossu@ed.ac.uk>
Author: Antonin Portelli <antonin.portelli@me.com>
Author: pretidav <david.preti@csic.es>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -56,7 +55,7 @@ public:
// constructor
TFundtoHirep(const std::string name);
// destructor
virtual ~TFundtoHirep(void) = default;
virtual ~TFundtoHirep(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -66,9 +65,9 @@ public:
void execute(void);
};
//MODULE_REGISTER_NS(FundtoAdjoint, TFundtoHirep<AdjointRepresentation>, MGauge);
//MODULE_REGISTER_NS(FundtoTwoIndexSym, TFundtoHirep<TwoIndexSymmetricRepresentation>, MGauge);
//MODULE_REGISTER_NS(FundtoTwoIndexAsym, TFundtoHirep<TwoIndexAntiSymmetricRepresentation>, MGauge);
//MODULE_REGISTER_TMP(FundtoAdjoint, TFundtoHirep<AdjointRepresentation>, MGauge);
//MODULE_REGISTER_TMP(FundtoTwoIndexSym, TFundtoHirep<TwoIndexSymmetricRepresentation>, MGauge);
//MODULE_REGISTER_TMP(FundtoTwoIndexAsym, TFundtoHirep<TwoIndexAntiSymmetricRepresentation>, MGauge);
END_MODULE_NAMESPACE

View File

@ -46,7 +46,7 @@ public:
// constructor
TRandom(const std::string name);
// destructor
virtual ~TRandom(void) = default;
virtual ~TRandom(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -57,7 +57,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Random, TRandom, MGauge);
MODULE_REGISTER(Random, TRandom, MGauge);
END_MODULE_NAMESPACE

View File

@ -7,6 +7,8 @@ Source file: extras/Hadrons/Modules/MGauge/StochEm.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: James Harrison <j.harrison@soton.ac.uk>
Author: Vera Guelpers <vmg1n14@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -57,25 +59,24 @@ std::vector<std::string> TStochEm::getOutput(void)
// setup ///////////////////////////////////////////////////////////////////////
void TStochEm::setup(void)
{
if (!env().hasCreatedObject("_" + getName() + "_weight"))
{
envCacheLat(EmComp, "_" + getName() + "_weight");
}
weightDone_ = env().hasCreatedObject("_" + getName() + "_weight");
envCacheLat(EmComp, "_" + getName() + "_weight");
envCreateLat(EmField, getName());
}
// execution ///////////////////////////////////////////////////////////////////
void TStochEm::execute(void)
{
LOG(Message) << "Generating stochatic EM potential..." << std::endl;
LOG(Message) << "Generating stochastic EM potential..." << std::endl;
PhotonR photon(par().gauge, par().zmScheme);
std::vector<Real> improvements = strToVec<Real>(par().improvement);
PhotonR photon(par().gauge, par().zmScheme, improvements, par().G0_qedInf);
auto &a = envGet(EmField, getName());
auto &w = envGet(EmComp, "_" + getName() + "_weight");
if (!env().hasCreatedObject("_" + getName() + "_weight"))
if (!weightDone_)
{
LOG(Message) << "Caching stochatic EM potential weight (gauge: "
LOG(Message) << "Caching stochastic EM potential weight (gauge: "
<< par().gauge << ", zero-mode scheme: "
<< par().zmScheme << ")..." << std::endl;
photon.StochasticWeight(w);

View File

@ -7,6 +7,8 @@ Source file: extras/Hadrons/Modules/MGauge/StochEm.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: James Harrison <j.harrison@soton.ac.uk>
Author: Vera Guelpers <vmg1n14@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -44,7 +46,9 @@ class StochEmPar: Serializable
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(StochEmPar,
PhotonR::Gauge, gauge,
PhotonR::ZmScheme, zmScheme);
PhotonR::ZmScheme, zmScheme,
std::string, improvement,
Real, G0_qedInf);
};
class TStochEm: public Module<StochEmPar>
@ -56,7 +60,7 @@ public:
// constructor
TStochEm(const std::string name);
// destructor
virtual ~TStochEm(void) = default;
virtual ~TStochEm(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -65,9 +69,11 @@ protected:
virtual void setup(void);
// execution
virtual void execute(void);
private:
bool weightDone_;
};
MODULE_REGISTER_NS(StochEm, TStochEm, MGauge);
MODULE_REGISTER(StochEm, TStochEm, MGauge);
END_MODULE_NAMESPACE

View File

@ -0,0 +1,7 @@
#include <Grid/Hadrons/Modules/MGauge/StoutSmearing.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MGauge;
template class Grid::Hadrons::MGauge::TStoutSmearing<GIMPL>;

View File

@ -0,0 +1,104 @@
#ifndef Hadrons_MGauge_StoutSmearing_hpp_
#define Hadrons_MGauge_StoutSmearing_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* Stout smearing *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MGauge)
class StoutSmearingPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(StoutSmearingPar,
std::string, gauge,
unsigned int, steps,
double, rho);
};
template <typename GImpl>
class TStoutSmearing: public Module<StoutSmearingPar>
{
public:
typedef typename GImpl::Field GaugeField;
public:
// constructor
TStoutSmearing(const std::string name);
// destructor
virtual ~TStoutSmearing(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(StoutSmearing, TStoutSmearing<GIMPL>, MGauge);
/******************************************************************************
* TStoutSmearing implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename GImpl>
TStoutSmearing<GImpl>::TStoutSmearing(const std::string name)
: Module<StoutSmearingPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename GImpl>
std::vector<std::string> TStoutSmearing<GImpl>::getInput(void)
{
std::vector<std::string> in = {par().gauge};
return in;
}
template <typename GImpl>
std::vector<std::string> TStoutSmearing<GImpl>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename GImpl>
void TStoutSmearing<GImpl>::setup(void)
{
envCreateLat(GaugeField, getName());
envTmpLat(GaugeField, "buf");
}
// execution ///////////////////////////////////////////////////////////////////
template <typename GImpl>
void TStoutSmearing<GImpl>::execute(void)
{
LOG(Message) << "Smearing '" << par().gauge << "' with " << par().steps
<< " step" << ((par().steps > 1) ? "s" : "")
<< " of stout smearing and rho= " << par().rho << std::endl;
Smear_Stout<GImpl> smearer(par().rho);
auto &U = envGet(GaugeField, par().gauge);
auto &Usmr = envGet(GaugeField, getName());
envGetTmp(GaugeField, buf);
buf = U;
for (unsigned int n = 0; n < par().steps; ++n)
{
smearer.smear(Usmr, buf);
buf = Usmr;
}
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MGauge_StoutSmearing_hpp_
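Each iteration of the buf/Usmr ping-pong above applies one stout step; schematically, this is the standard Morningstar-Peardon map (stated here for orientation; Grid's Smear_Stout implements the details):

\[
U_\mu'(x) = e^{\, i Q_\mu(x)}\, U_\mu(x),
\qquad
Q_\mu = \frac{i}{2}\big(\Omega_\mu^\dagger - \Omega_\mu\big)
      - \frac{i}{2N_c}\,\mathrm{tr}\big(\Omega_\mu^\dagger - \Omega_\mu\big),
\qquad
\Omega_\mu(x) = C_\mu(x)\, U_\mu^\dagger(x),
\]

with \(C_\mu\) the \(\rho\)-weighted sum of perpendicular staples.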

View File

@ -46,7 +46,7 @@ public:
// constructor
TUnit(const std::string name);
// destructor
virtual ~TUnit(void) = default;
virtual ~TUnit(void) {};
// dependencies/products
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -57,7 +57,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(Unit, TUnit, MGauge);
MODULE_REGISTER(Unit, TUnit, MGauge);
END_MODULE_NAMESPACE

View File

@ -0,0 +1,69 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MGauge/UnitEm.cc
Copyright (C) 2015
Copyright (C) 2016
Author: James Harrison <j.harrison@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MGauge/UnitEm.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MGauge;
/******************************************************************************
* TUnitEm implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
TUnitEm::TUnitEm(const std::string name)
: Module<NoPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
std::vector<std::string> TUnitEm::getInput(void)
{
return std::vector<std::string>();
}
std::vector<std::string> TUnitEm::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
void TUnitEm::setup(void)
{
envCreateLat(EmField, getName());
}
// execution ///////////////////////////////////////////////////////////////////
void TUnitEm::execute(void)
{
PhotonR photon(0, 0); // arbitrary input values chosen here
auto &a = envGet(EmField, getName());
LOG(Message) << "Generating unit EM potential..." << std::endl;
photon.UnitField(a);
}

View File

@ -0,0 +1,69 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MGauge/UnitEm.hpp
Copyright (C) 2015
Copyright (C) 2016
Author: James Harrison <j.harrison@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_MGauge_UnitEm_hpp_
#define Hadrons_MGauge_UnitEm_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* UnitEm *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MGauge)
class TUnitEm: public Module<NoPar>
{
public:
typedef PhotonR::GaugeField EmField;
typedef PhotonR::GaugeLinkField EmComp;
public:
// constructor
TUnitEm(const std::string name);
// destructor
virtual ~TUnitEm(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
protected:
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER(UnitEm, TUnitEm, MGauge);
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MGauge_UnitEm_hpp_

View File

@ -0,0 +1,40 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MIO/LoadBinary.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MIO/LoadBinary.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MIO;
template class Grid::Hadrons::MIO::TLoadBinary<GIMPL>;
template class Grid::Hadrons::MIO::TLoadBinary<ScalarNxNAdjImplR<2>>;
template class Grid::Hadrons::MIO::TLoadBinary<ScalarNxNAdjImplR<3>>;
template class Grid::Hadrons::MIO::TLoadBinary<ScalarNxNAdjImplR<4>>;
template class Grid::Hadrons::MIO::TLoadBinary<ScalarNxNAdjImplR<5>>;
template class Grid::Hadrons::MIO::TLoadBinary<ScalarNxNAdjImplR<6>>;

View File

@ -61,7 +61,7 @@ public:
// constructor
TLoadBinary(const std::string name);
// destructor
virtual ~TLoadBinary(void) = default;
virtual ~TLoadBinary(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -71,12 +71,12 @@ public:
virtual void execute(void);
};
MODULE_REGISTER_NS(LoadBinary, TLoadBinary<GIMPL>, MIO);
MODULE_REGISTER_NS(LoadBinaryScalarSU2, TLoadBinary<ScalarNxNAdjImplR<2>>, MIO);
MODULE_REGISTER_NS(LoadBinaryScalarSU3, TLoadBinary<ScalarNxNAdjImplR<3>>, MIO);
MODULE_REGISTER_NS(LoadBinaryScalarSU4, TLoadBinary<ScalarNxNAdjImplR<4>>, MIO);
MODULE_REGISTER_NS(LoadBinaryScalarSU5, TLoadBinary<ScalarNxNAdjImplR<5>>, MIO);
MODULE_REGISTER_NS(LoadBinaryScalarSU6, TLoadBinary<ScalarNxNAdjImplR<6>>, MIO);
MODULE_REGISTER_TMP(LoadBinary, TLoadBinary<GIMPL>, MIO);
MODULE_REGISTER_TMP(LoadBinaryScalarSU2, TLoadBinary<ScalarNxNAdjImplR<2>>, MIO);
MODULE_REGISTER_TMP(LoadBinaryScalarSU3, TLoadBinary<ScalarNxNAdjImplR<3>>, MIO);
MODULE_REGISTER_TMP(LoadBinaryScalarSU4, TLoadBinary<ScalarNxNAdjImplR<4>>, MIO);
MODULE_REGISTER_TMP(LoadBinaryScalarSU5, TLoadBinary<ScalarNxNAdjImplR<5>>, MIO);
MODULE_REGISTER_TMP(LoadBinaryScalarSU6, TLoadBinary<ScalarNxNAdjImplR<6>>, MIO);
/******************************************************************************
* TLoadBinary implementation *

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MIO/LoadCoarseEigenPack.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MIO/LoadCoarseEigenPack.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MIO;
template class Grid::Hadrons::MIO::TLoadCoarseEigenPack<CoarseFermionEigenPack<FIMPL,HADRONS_DEFAULT_LANCZOS_NBASIS>>;

View File

@ -0,0 +1,135 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MIO/LoadCoarseEigenPack.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_MIO_LoadCoarseEigenPack_hpp_
#define Hadrons_MIO_LoadCoarseEigenPack_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/EigenPack.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* Load local coherence eigen vectors/values package *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MIO)
class LoadCoarseEigenPackPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(LoadCoarseEigenPackPar,
std::string, filestem,
bool, multiFile,
unsigned int, sizeFine,
unsigned int, sizeCoarse,
unsigned int, Ls,
std::vector<int>, blockSize);
};
template <typename Pack>
class TLoadCoarseEigenPack: public Module<LoadCoarseEigenPackPar>
{
public:
typedef CoarseEigenPack<typename Pack::Field, typename Pack::CoarseField> BasePack;
template <typename vtype>
using iImplScalar = iScalar<iScalar<iScalar<vtype>>>;
typedef iImplScalar<typename Pack::Field::vector_type> SiteComplex;
public:
// constructor
TLoadCoarseEigenPack(const std::string name);
// destructor
virtual ~TLoadCoarseEigenPack(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(LoadCoarseFermionEigenPack, ARG(TLoadCoarseEigenPack<CoarseFermionEigenPack<FIMPL, HADRONS_DEFAULT_LANCZOS_NBASIS>>), MIO);
/******************************************************************************
* TLoadCoarseEigenPack implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename Pack>
TLoadCoarseEigenPack<Pack>::TLoadCoarseEigenPack(const std::string name)
: Module<LoadCoarseEigenPackPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename Pack>
std::vector<std::string> TLoadCoarseEigenPack<Pack>::getInput(void)
{
std::vector<std::string> in;
return in;
}
template <typename Pack>
std::vector<std::string> TLoadCoarseEigenPack<Pack>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename Pack>
void TLoadCoarseEigenPack<Pack>::setup(void)
{
env().createGrid(par().Ls);
env().createCoarseGrid(par().blockSize, par().Ls);
envCreateDerived(BasePack, Pack, getName(), par().Ls, par().sizeFine,
par().sizeCoarse, env().getRbGrid(par().Ls),
env().getCoarseGrid(par().blockSize, par().Ls));
}
// execution ///////////////////////////////////////////////////////////////////
template <typename Pack>
void TLoadCoarseEigenPack<Pack>::execute(void)
{
auto cg = env().getCoarseGrid(par().blockSize, par().Ls);
auto &epack = envGetDerived(BasePack, Pack, getName());
Lattice<SiteComplex> dummy(cg);
epack.read(par().filestem, par().multiFile, vm().getTrajectory());
LOG(Message) << "Block Gramm-Schmidt pass 1"<< std::endl;
blockOrthogonalise(dummy, epack.evec);
LOG(Message) << "Block Gramm-Schmidt pass 2"<< std::endl;
blockOrthogonalise(dummy, epack.evec);
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MIO_LoadCoarseEigenPack_hpp_
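The two blockOrthogonalise passes are a block-wise Gram-Schmidt: within every coarsening block \(b\), the fine vectors are orthonormalised under the block-restricted inner product \(\langle\cdot,\cdot\rangle_b\),

\[
v_i \;\to\; \Big( v_i - \sum_{j<i} \langle v_j, v_i\rangle_b\, v_j \Big) \big/ \|\cdot\|_b
\quad \text{independently in each block } b,
\]

and the sweep is run twice because a single classical Gram-Schmidt pass leaves \(O(\epsilon)\) residual non-orthogonality in finite precision ("twice is enough").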

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MIO/LoadEigenPack.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MIO/LoadEigenPack.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MIO;
template class Grid::Hadrons::MIO::TLoadEigenPack<FermionEigenPack<FIMPL>>;

View File

@ -0,0 +1,123 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MIO/LoadEigenPack.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifndef Hadrons_MIO_LoadEigenPack_hpp_
#define Hadrons_MIO_LoadEigenPack_hpp_
#include <Grid/Hadrons/Global.hpp>
#include <Grid/Hadrons/Module.hpp>
#include <Grid/Hadrons/ModuleFactory.hpp>
#include <Grid/Hadrons/EigenPack.hpp>
BEGIN_HADRONS_NAMESPACE
/******************************************************************************
* Load eigen vectors/values package *
******************************************************************************/
BEGIN_MODULE_NAMESPACE(MIO)
class LoadEigenPackPar: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(LoadEigenPackPar,
std::string, filestem,
bool, multiFile,
unsigned int, size,
unsigned int, Ls);
};
template <typename Pack>
class TLoadEigenPack: public Module<LoadEigenPackPar>
{
public:
typedef EigenPack<typename Pack::Field> BasePack;
public:
// constructor
TLoadEigenPack(const std::string name);
// destructor
virtual ~TLoadEigenPack(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
// setup
virtual void setup(void);
// execution
virtual void execute(void);
};
MODULE_REGISTER_TMP(LoadFermionEigenPack, TLoadEigenPack<FermionEigenPack<FIMPL>>, MIO);
/******************************************************************************
* TLoadEigenPack implementation *
******************************************************************************/
// constructor /////////////////////////////////////////////////////////////////
template <typename Pack>
TLoadEigenPack<Pack>::TLoadEigenPack(const std::string name)
: Module<LoadEigenPackPar>(name)
{}
// dependencies/products ///////////////////////////////////////////////////////
template <typename Pack>
std::vector<std::string> TLoadEigenPack<Pack>::getInput(void)
{
std::vector<std::string> in;
return in;
}
template <typename Pack>
std::vector<std::string> TLoadEigenPack<Pack>::getOutput(void)
{
std::vector<std::string> out = {getName()};
return out;
}
// setup ///////////////////////////////////////////////////////////////////////
template <typename Pack>
void TLoadEigenPack<Pack>::setup(void)
{
env().createGrid(par().Ls);
envCreateDerived(BasePack, Pack, getName(), par().Ls, par().size,
env().getRbGrid(par().Ls));
}
// execution ///////////////////////////////////////////////////////////////////
template <typename Pack>
void TLoadEigenPack<Pack>::execute(void)
{
auto &epack = envGetDerived(BasePack, Pack, getName());
epack.read(par().filestem, par().multiFile, vm().getTrajectory());
epack.eval.resize(par().size);
}
END_MODULE_NAMESPACE
END_HADRONS_NAMESPACE
#endif // Hadrons_MIO_LoadEigenPack_hpp_
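A hypothetical instantiation, following the same test-program pattern as the other modules in this diff (the file stem and sizes below are placeholders):

// sketch: load 600 Ls=16 eigenvectors for later use, e.g. by a deflated solver
MIO::LoadFermionEigenPack::Par epPar;
epPar.filestem  = "lanczos/epack";   // stem handed to EigenPack::read
epPar.multiFile = false;             // single-file format
epPar.size      = 600;               // number of vectors/values to read
epPar.Ls        = 16;                // the pack lives on the Ls red-black grid
application.createModule<MIO::LoadFermionEigenPack>("epack", epPar);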

View File

@ -71,6 +71,4 @@ void TLoadNersc::execute(void)
auto &U = envGet(LatticeGaugeField, getName());
NerscIO::readConfiguration(U, header, fileName);
LOG(Message) << "NERSC header:" << std::endl;
dump_meta_data(header, LOG(Message));
}

View File

@ -52,7 +52,7 @@ public:
// constructor
TLoadNersc(const std::string name);
// destructor
virtual ~TLoadNersc(void) = default;
virtual ~TLoadNersc(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -62,7 +62,7 @@ public:
virtual void execute(void);
};
MODULE_REGISTER_NS(LoadNersc, TLoadNersc, MIO);
MODULE_REGISTER(LoadNersc, TLoadNersc, MIO);
END_MODULE_NAMESPACE

View File

@ -0,0 +1,35 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: extras/Hadrons/Modules/MLoop/NoiseLoop.cc
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Hadrons/Modules/MLoop/NoiseLoop.hpp>
using namespace Grid;
using namespace Hadrons;
using namespace MLoop;
template class Grid::Hadrons::MLoop::TNoiseLoop<FIMPL>;

View File

@ -71,7 +71,7 @@ public:
// constructor
TNoiseLoop(const std::string name);
// destructor
virtual ~TNoiseLoop(void) = default;
virtual ~TNoiseLoop(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -82,7 +82,7 @@ protected:
virtual void execute(void);
};
MODULE_REGISTER_NS(NoiseLoop, TNoiseLoop<FIMPL>, MLoop);
MODULE_REGISTER_TMP(NoiseLoop, TNoiseLoop<FIMPL>, MLoop);
/******************************************************************************
* TNoiseLoop implementation *

View File

@ -51,7 +51,8 @@ std::vector<std::string> TChargedProp::getInput(void)
std::vector<std::string> TChargedProp::getOutput(void)
{
std::vector<std::string> out = {getName()};
std::vector<std::string> out = {getName(), getName()+"_0", getName()+"_Q",
getName()+"_Sun", getName()+"_Tad"};
return out;
}
@ -66,18 +67,27 @@ void TChargedProp::setup(void)
phaseName_.push_back("_shiftphase_" + std::to_string(mu));
}
GFSrcName_ = getName() + "_DinvSrc";
prop0Name_ = getName() + "_0";
propQName_ = getName() + "_Q";
propSunName_ = getName() + "_Sun";
propTadName_ = getName() + "_Tad";
fftName_ = getName() + "_fft";
freeMomPropDone_ = env().hasCreatedObject(freeMomPropName_);
GFSrcDone_ = env().hasCreatedObject(GFSrcName_);
phasesDone_ = env().hasCreatedObject(phaseName_[0]);
prop0Done_ = env().hasCreatedObject(prop0Name_);
envCacheLat(ScalarField, freeMomPropName_);
for (unsigned int mu = 0; mu < env().getNd(); ++mu)
{
envCacheLat(ScalarField, phaseName_[mu]);
}
envCacheLat(ScalarField, GFSrcName_);
envCacheLat(ScalarField, prop0Name_);
envCreateLat(ScalarField, getName());
envCreateLat(ScalarField, propQName_);
envCreateLat(ScalarField, propSunName_);
envCreateLat(ScalarField, propTadName_);
envTmpLat(ScalarField, "buf");
envTmpLat(ScalarField, "result");
envTmpLat(ScalarField, "Amu");
@ -95,80 +105,125 @@ void TChargedProp::execute(void)
<< " (mass= " << par().mass
<< ", charge= " << par().charge << ")..." << std::endl;
auto &prop = envGet(ScalarField, getName());
auto &GFSrc = envGet(ScalarField, GFSrcName_);
auto &G = envGet(ScalarField, freeMomPropName_);
auto &fft = envGet(FFT, fftName_);
double q = par().charge;
envGetTmp(ScalarField, result);
envGetTmp(ScalarField, buf);
auto &prop = envGet(ScalarField, getName());
auto &prop0 = envGet(ScalarField, prop0Name_);
auto &propQ = envGet(ScalarField, propQName_);
auto &propSun = envGet(ScalarField, propSunName_);
auto &propTad = envGet(ScalarField, propTadName_);
auto &GFSrc = envGet(ScalarField, GFSrcName_);
auto &G = envGet(ScalarField, freeMomPropName_);
auto &fft = envGet(FFT, fftName_);
double q = par().charge;
envGetTmp(ScalarField, buf);
// G*F*Src
prop = GFSrc;
// -G*momD1*G*F*Src (momD1 = F*D1*Finv)
propQ = GFSrc;
momD1(propQ, fft);
propQ = -G*propQ;
propSun = -propQ;
fft.FFT_dim(propQ, propQ, env().getNd()-1, FFT::backward);
// - q*G*momD1*G*F*Src (momD1 = F*D1*Finv)
buf = GFSrc;
momD1(buf, fft);
buf = G*buf;
prop = prop - q*buf;
// G*momD1*G*momD1*G*F*Src (here buf = G*momD1*G*F*Src)
momD1(propSun, fft);
propSun = G*propSun;
fft.FFT_dim(propSun, propSun, env().getNd()-1, FFT::backward);
// + q^2*G*momD1*G*momD1*G*F*Src (here buf = G*momD1*G*F*Src)
momD1(buf, fft);
prop = prop + q*q*G*buf;
// - q^2*G*momD2*G*F*Src (momD2 = F*D2*Finv)
buf = GFSrc;
momD2(buf, fft);
prop = prop - q*q*G*buf;
// final FT
fft.FFT_all_dim(prop, prop, FFT::backward);
// -G*momD2*G*F*Src (momD2 = F*D2*Finv)
propTad = GFSrc;
momD2(propTad, fft);
propTad = -G*propTad;
fft.FFT_dim(propTad, propTad, env().getNd()-1, FFT::backward);
// full charged scalar propagator
fft.FFT_dim(buf, GFSrc, env().getNd()-1, FFT::backward);
prop = buf + q*propQ + q*q*propSun + q*q*propTad;
// OUTPUT IF NECESSARY
if (!par().output.empty())
{
std::string filename = par().output + "." +
std::to_string(vm().getTrajectory());
LOG(Message) << "Saving zero-momentum projection to '"
<< filename << "'..." << std::endl;
ResultWriter writer(RESULT_FILE_NAME(par().output));
std::vector<TComplex> vecBuf;
std::vector<Complex> result;
sliceSum(prop, vecBuf, Tp);
result.resize(vecBuf.size());
for (unsigned int t = 0; t < vecBuf.size(); ++t)
Result result;
TComplex site;
std::vector<int> siteCoor;
LOG(Message) << "Saving momentum-projected propagator to '"
<< RESULT_FILE_NAME(par().output) << "'..."
<< std::endl;
result.projection.resize(par().outputMom.size());
result.lattice_size = env().getGrid()->_fdimensions;
result.mass = par().mass;
result.charge = q;
siteCoor.resize(env().getNd());
for (unsigned int i_p = 0; i_p < par().outputMom.size(); ++i_p)
{
result[t] = TensorRemove(vecBuf[t]);
result.projection[i_p].momentum = strToVec<int>(par().outputMom[i_p]);
LOG(Message) << "Calculating (" << par().outputMom[i_p]
<< ") momentum projection" << std::endl;
result.projection[i_p].corr_0.resize(env().getGrid()->_fdimensions[env().getNd()-1]);
result.projection[i_p].corr.resize(env().getGrid()->_fdimensions[env().getNd()-1]);
result.projection[i_p].corr_Q.resize(env().getGrid()->_fdimensions[env().getNd()-1]);
result.projection[i_p].corr_Sun.resize(env().getGrid()->_fdimensions[env().getNd()-1]);
result.projection[i_p].corr_Tad.resize(env().getGrid()->_fdimensions[env().getNd()-1]);
for (unsigned int j = 0; j < env().getNd()-1; ++j)
{
siteCoor[j] = result.projection[i_p].momentum[j];
}
for (unsigned int t = 0; t < result.projection[i_p].corr.size(); ++t)
{
siteCoor[env().getNd()-1] = t;
peekSite(site, prop, siteCoor);
result.projection[i_p].corr[t]=TensorRemove(site);
peekSite(site, buf, siteCoor);
result.projection[i_p].corr_0[t]=TensorRemove(site);
peekSite(site, propQ, siteCoor);
result.projection[i_p].corr_Q[t]=TensorRemove(site);
peekSite(site, propSun, siteCoor);
result.projection[i_p].corr_Sun[t]=TensorRemove(site);
peekSite(site, propTad, siteCoor);
result.projection[i_p].corr_Tad[t]=TensorRemove(site);
}
}
write(writer, "charge", q);
write(writer, "prop", result);
saveResult(par().output, "prop", result);
}
std::vector<int> mask(env().getNd(),1);
mask[env().getNd()-1] = 0;
fft.FFT_dim_mask(prop, prop, mask, FFT::backward);
fft.FFT_dim_mask(propQ, propQ, mask, FFT::backward);
fft.FFT_dim_mask(propSun, propSun, mask, FFT::backward);
fft.FFT_dim_mask(propTad, propTad, mask, FFT::backward);
}
void TChargedProp::makeCaches(void)
{
auto &freeMomProp = envGet(ScalarField, freeMomPropName_);
auto &GFSrc = envGet(ScalarField, GFSrcName_);
auto &prop0 = envGet(ScalarField, prop0Name_);
auto &fft = envGet(FFT, fftName_);
if (!freeMomPropDone_)
{
LOG(Message) << "Caching momentum space free scalar propagator"
LOG(Message) << "Caching momentum-space free scalar propagator"
<< " (mass= " << par().mass << ")..." << std::endl;
SIMPL::MomentumSpacePropagator(freeMomProp, par().mass);
}
if (!GFSrcDone_)
{
FFT fft(env().getGrid());
auto &source = envGet(ScalarField, par().source);
LOG(Message) << "Caching G*F*src..." << std::endl;
fft.FFT_all_dim(GFSrc, source, FFT::forward);
GFSrc = freeMomProp*GFSrc;
}
if (!prop0Done_)
{
LOG(Message) << "Caching position-space free scalar propagator..."
<< std::endl;
fft.FFT_all_dim(prop0, GFSrc, FFT::backward);
}
if (!phasesDone_)
{
std::vector<int> &l = env().getGrid()->_fdimensions;
@ -185,6 +240,14 @@ void TChargedProp::makeCaches(void)
phase_.push_back(&phmu);
}
}
else
{
phase_.clear();
for (unsigned int mu = 0; mu < env().getNd(); ++mu)
{
phase_.push_back(env().getObject<ScalarField>(phaseName_[mu]));
}
}
}
void TChargedProp::momD1(ScalarField &s, FFT &fft)

View File

@ -7,6 +7,7 @@ Source file: extras/Hadrons/Modules/MScalar/ChargedProp.hpp
Copyright (C) 2015-2018
Author: Antonin Portelli <antonin.portelli@me.com>
Author: James Harrison <j.harrison@soton.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@ -47,7 +48,8 @@ public:
std::string, source,
double, mass,
double, charge,
std::string, output);
std::string, output,
std::vector<std::string>, outputMom);
};
class TChargedProp: public Module<ChargedPropPar>
@ -56,11 +58,31 @@ public:
SCALAR_TYPE_ALIASES(SIMPL,);
typedef PhotonR::GaugeField EmField;
typedef PhotonR::GaugeLinkField EmComp;
class Result: Serializable
{
public:
class Projection: Serializable
{
public:
GRID_SERIALIZABLE_CLASS_MEMBERS(Projection,
std::vector<int>, momentum,
std::vector<Complex>, corr,
std::vector<Complex>, corr_0,
std::vector<Complex>, corr_Q,
std::vector<Complex>, corr_Sun,
std::vector<Complex>, corr_Tad);
};
GRID_SERIALIZABLE_CLASS_MEMBERS(Result,
std::vector<int>, lattice_size,
double, mass,
double, charge,
std::vector<Projection>, projection);
};
public:
// constructor
TChargedProp(const std::string name);
// destructor
virtual ~TChargedProp(void) = default;
virtual ~TChargedProp(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -74,13 +96,15 @@ private:
void momD1(ScalarField &s, FFT &fft);
void momD2(ScalarField &s, FFT &fft);
private:
bool freeMomPropDone_, GFSrcDone_, phasesDone_;
std::string freeMomPropName_, GFSrcName_, fftName_;
bool freeMomPropDone_, GFSrcDone_, prop0Done_,
phasesDone_;
std::string freeMomPropName_, GFSrcName_, prop0Name_,
propQName_, propSunName_, propTadName_, fftName_;
std::vector<std::string> phaseName_;
std::vector<ScalarField *> phase_;
};
MODULE_REGISTER_NS(ChargedProp, TChargedProp, MScalar);
MODULE_REGISTER(ChargedProp, TChargedProp, MScalar);
END_MODULE_NAMESPACE

View File

@ -83,8 +83,6 @@ void TFreeProp::execute(void)
if (!par().output.empty())
{
TextWriter writer(par().output + "." +
std::to_string(vm().getTrajectory()));
std::vector<TComplex> buf;
std::vector<Complex> result;
@ -94,6 +92,6 @@ void TFreeProp::execute(void)
{
result[t] = TensorRemove(buf[t]);
}
write(writer, "prop", result);
saveResult(par().output, "freeprop", result);
}
}

View File

@ -56,7 +56,7 @@ public:
// constructor
TFreeProp(const std::string name);
// destructor
virtual ~TFreeProp(void) = default;
virtual ~TFreeProp(void) {};
// dependency relation
virtual std::vector<std::string> getInput(void);
virtual std::vector<std::string> getOutput(void);
@ -70,7 +70,7 @@ private:
bool freePropDone_;
};
MODULE_REGISTER_NS(FreeProp, TFreeProp, MScalar);
MODULE_REGISTER(FreeProp, TFreeProp, MScalar);
END_MODULE_NAMESPACE

Some files were not shown because too many files have changed in this diff.