mirror of https://github.com/paboyle/Grid.git synced 2025-06-17 15:27:06 +01:00

Compare commits


242 Commits

Author SHA1 Message Date
3064c9a6e2 Improve the matching of stencil coarsening 2020-09-08 15:36:58 -04:00
729882827c Improve the coarse matrix calc 2020-09-08 15:36:33 -04:00
baa668d3ac Merge branch 'develop' into feature/hw-multigrid
Conflicts:
	Grid/allocator/MemoryManager.h
2020-09-03 22:16:50 -04:00
3c82d16ed8 4D multigrid 2020-09-03 22:11:17 -04:00
5c8c0c2d7c BiCG 2020-09-03 22:10:40 -04:00
e5a100846c Useful thing to preserve 2020-09-03 22:09:57 -04:00
a74e2dc12e Printing mem info 2020-09-03 22:08:44 -04:00
595f512a6e G5 for coarse space too 2020-09-03 22:07:40 -04:00
a6499b22ff Stats printing 2020-09-03 22:00:46 -04:00
b4e42a59c6 Stats tracking improvement 2020-09-03 22:00:14 -04:00
8c913e0edd Clearer UVM treatment 2020-09-03 21:59:05 -04:00
fd3f93d8d3 Zero changes 2020-09-03 21:57:11 -04:00
e9543cdacd Time deflation 2020-09-03 21:56:02 -04:00
98f7b3d298 Pcg 2020-09-03 21:55:05 -04:00
b7b164ea24 Test operator and debug code 2020-09-03 21:54:20 -04:00
77124d99d5 Merge branch 'develop' into feature/hw-multigrid 2020-09-03 21:52:04 -04:00
65b724bb5f 2 level hddcr 2020-09-03 21:46:43 -04:00
6dbd117aa5 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2020-09-03 20:30:49 -04:00
198b29f618 Merge branch 'develop' of https://github.com/paboyle/Grid into develop 2020-09-03 20:29:54 -04:00
a8309638d4 UVM check in MPI calls 2020-09-03 20:29:26 -04:00
f98a4e880e Merge pull request #310 from kostrzewa/accelerator_vector_stream_op_no_backspace
do not use backspace in AcceleratorVector (Coordinate) output stream operator
2020-09-03 20:24:59 -04:00
8244caff25 Remove the asynchronous non-Stencil calls. 2020-09-03 18:52:55 -04:00
bcd7895362 Include cuda.h 2020-09-03 15:49:13 -04:00
85b1c5df39 A never-hit case that we are not 100% confident about is asserted for safety 2020-09-03 15:48:16 -04:00
b4255140d6 Stale data member eliminated 2020-09-03 15:47:46 -04:00
0c3095e173 Comms buffers to device memory 2020-09-03 15:45:35 -04:00
d3ce60713d UVM, Device and Lattice/aligned allocators 2020-09-03 15:44:13 -04:00
eac1f08b7b Close expressions passed as an argument 2020-09-01 15:30:33 -04:00
1654c4f3c0 Closure improved 2020-09-01 15:29:45 -04:00
8807d998bc closure improved 2020-09-01 15:29:11 -04:00
5791021dcd Speed up Cshift more with coalesced 2020-09-01 15:28:15 -04:00
c273fb051c Peek poke lattice 2020-09-01 15:27:59 -04:00
c545530170 Little worry: large Nbasis doesn't compile on GPU 2020-09-01 00:14:33 -04:00
d982a5b6d5 Fix coarsened 2020-09-01 00:14:04 -04:00
15ca8637f3 No norms in HermOp 2020-09-01 00:13:32 -04:00
cbc995b74c Made better interface 2020-09-01 00:12:54 -04:00
8b74174d74 Eigen tensor serialisation happy under GPU. Regret agreeing to let us couple Eigen types to Grid IO 2020-09-01 00:03:26 -04:00
e21fef17df real and imag part not in ET 2020-08-31 23:56:26 -04:00
3d27708f07 Basic where test 2020-08-31 23:55:49 -04:00
b918744184 Prettification 2020-08-31 23:54:46 -04:00
7d14a3c086 Where working 2020-08-31 23:53:46 -04:00
e14a84317d GPU math unary calls 2020-08-31 23:50:49 -04:00
6c31b99f1f I knew coupling Eigen Tensor to Grid serialisation was a bad idea.
Now that the complex type is different on GPU it creates problems
2020-08-31 23:49:19 -04:00
9522dcd611 Remove dead commented-out code 2020-08-31 23:40:29 -04:00
ed469898dc coalesced ET expressions 2020-08-31 23:38:40 -04:00
1eee94a809 Sorting real/im in read coalesced GPU ET 2020-08-31 23:36:49 -04:00
54523369a3 do not use backspace in Coordinate output stream operator 2020-08-31 19:39:36 +02:00
a98c91c2a5 Merge pull request #309 from kostrzewa/format_benchmark_wilson_sweep
Format benchmark wilson sweep
2020-08-31 12:43:46 -04:00
a9b92867a8 use tabulator 2020-08-31 18:41:17 +02:00
65920faeba correct formatting of Benchmark_wilson_sweep output 2020-08-31 18:39:27 +02:00
3448b7387c Almost there to coalesced ET 2020-08-26 17:04:49 -04:00
47b89d2739 Pragma protection improvement 2020-08-26 17:04:27 -04:00
1efe30d6cc Slurm: stop nodes using same GPU 2020-08-21 02:02:53 +02:00
0b787e9fe0 Avoid namespace collision to make gcc happy 2020-08-20 22:23:29 +02:00
37ec4b241c Default thread count sensible 2020-08-20 22:12:31 +02:00
90ea7dfa99 Accelerator loops for device resident comms buf 2020-08-19 22:40:44 +02:00
f866d7c33e Merge pull request #307 from lehner/feature/gpt
Merged Nils's A64FX and minor fixes (MemoryManager::InitMessage, Tensor_index zeroit, ...)
2020-08-18 23:27:21 -04:00
542bdef198 cleanup comments 2020-08-14 18:39:44 +02:00
06007db3d9 true shm_none implementation with GPUs that disables the use of device shared memory for the stencils 2020-08-14 18:37:00 +02:00
12e6059a70 Merge branch 'feature/gpt' of https://github.com/lehner/Grid into feature/gpt 2020-08-13 16:16:52 +02:00
dbaa24ebf6 further GPU memory access fixes (with this GPT passes all single-rank tests on non-summit GPUs) 2020-08-13 16:14:15 +02:00
3276aa67dc Update 2020-08-12 14:15:53 -04:00
3b30b9f0c0 Merge branch 'feature/gpt' of https://github.com/lehner/Grid into feature/gpt 2020-08-06 16:59:17 +02:00
69db4816f7 fix variable capture in Scatter_plane_merge on accelerators 2020-08-06 16:57:16 +02:00
3abe09025a when using SHM_NONE allow multiple ranks per node but without using shared memory 2020-08-06 14:42:38 +02:00
e33878e0de Trigger re-run of CI 2020-08-06 11:50:24 +02:00
27b4fbf3f0 assert for forbidden code path and fix check for faster CPU codepath in basisRotate 2020-08-03 07:57:33 -04:00
968a90633a Zero -> zeroit in Tensor_index 2020-07-31 02:07:17 -04:00
6365a89ba3 create separate InitMessage for MemoryManager that can be called after communicator setup 2020-07-30 07:25:05 -04:00
ddbb008694 Merge pull request #10 from lehner/feature/gpt-sycl
Feature/gpt sycl
2020-07-30 13:12:09 +02:00
7997e0a449 Merge branch 'feature/gpt' into feature/gpt-sycl 2020-07-30 13:11:31 +02:00
197612bc7a fast cpu basisRotate and other small cleanups 2020-07-30 07:08:54 -04:00
0e88bf4bff remove Nils's default pragma 2020-07-29 10:24:35 -04:00
3e64d78469 include versions.h again and add back asserts in Test_simd 2020-07-29 10:18:05 -04:00
2004611def Merge pull request #9 from nmeyer-ur/feature/a64fx-2
Feature/a64fx 2
2020-07-29 14:54:20 +02:00
a2868c96a4 Merge pull request #8 from paboyle/develop
Doc recompile
2020-07-29 14:10:07 +02:00
ea7f8fda5e fix typo 2020-07-22 09:34:05 +02:00
906b78811b exit in Init when using --comms-overlap 2020-07-22 08:57:01 +02:00
e1327e7ea0 Optional bounds check debug code 2020-07-16 16:57:46 -04:00
569f78c2cf Stencil improvement 2020-07-16 16:57:13 -04:00
488c79d5a1 Bound improvement 2020-07-15 19:58:08 -04:00
97703b181b Merge pull request #7 from paboyle/develop
Merge current develop
2020-07-12 16:24:53 +02:00
d9474c6cb6 compiler-independent build using --enable-simd=A64FX 2020-07-09 10:07:02 +02:00
bbd145382b enable --enable-simd=A64FX in configure 2020-07-08 12:43:51 +02:00
1b08cb7300 Merge branch 'develop' into feature/a64fx-2 2020-07-08 08:18:18 +02:00
337d9dc043 move barrier in Benchmark_wilson 2020-07-08 08:13:40 +02:00
8726e94ea7 merge upstream develop 2020-07-07 20:26:47 +02:00
67db4993c2 reset head, update SVE readme 2020-07-07 19:54:52 +02:00
dc6b0f20b2 Fixed array bounds 2020-07-02 12:20:20 -04:00
c0badc3e16 Summit bounce back to git 2020-07-02 10:48:39 -04:00
fd3c8b0e85 correct build instructions qp4 2020-07-01 09:00:38 +02:00
58f6529b55 Slowly piecing together 2020-06-30 16:42:03 -04:00
e3f056dfbb Hw multigrid operator 2020-06-30 16:10:16 -04:00
da0ffa7a79 Two spin update defer commit to repository 2020-06-30 16:09:48 -04:00
fcc7640b9c Detect a coarsened vector 2020-06-30 14:17:45 -04:00
0cbe2859e0 Making progress on Hw based 5d coarse matrix 2020-06-30 14:17:20 -04:00
1635c263ee disable TOFU by default 2020-06-30 19:27:08 +02:00
5b117865b2 Merge pull request #6 from paboyle/sycl
Sycl
2020-06-20 09:44:44 +02:00
05bbc49a99 Edge case in GetShmDim check 2020-06-19 12:01:23 -04:00
81a8209749 ConvertType for blockInnerProduct 2020-06-18 11:53:21 -04:00
a87e45ba25 SVE readme update 2020-06-18 11:23:08 +02:00
465856331a switch back to serialized; wrong results on single too 2020-06-15 15:39:39 +02:00
cc958aa9ed switch back to standard MPI_init due to wrong results in Benchmark_wilson using comms-overlap 2020-06-15 14:21:38 +02:00
a25e4b3d0c pred 32/64 for float/double instead of 8 in VLA patch 2020-06-13 14:44:37 +02:00
d1210ca12a switch to double/float instead of float64_t/float32_t in VLA patch 2020-06-13 13:59:32 +02:00
36ea0e222a type traits for ComplexF/D in VLA patch; cosmetics in VLS intrinsics 2020-06-13 13:42:35 +02:00
92281ec22d add 3 op Mult for VLA 2020-06-12 18:49:05 +02:00
87266ce099 comment out fcmla in vector types: need also MultAddReal 2020-06-12 18:37:19 +02:00
2a23f133e8 reenable fcmla for VLA 2020-06-12 17:30:38 +02:00
8dbf790f62 correct tbl2 for sp 2020-06-12 17:12:34 +02:00
2402b4940e vec_imm in float 2020-06-12 15:17:38 +02:00
2111052fbe apply VLA patch for memcpy reduction suggested by Arm, CAS-162542-D6W7Z7 2020-06-12 14:49:19 +02:00
433766ac62 revert Add/SubTimesI and prefetching in stencil
This reverts commit 9b2699226c.
2020-06-08 12:02:53 +02:00
93a37c8f68 test prefetch to L2 in stencil 2020-06-08 09:39:50 +02:00
9872c76825 introduce AddTimesI and SubTimesI; slight benefit in operators, but < 1%; breaks all other impls 2020-06-03 15:20:13 +02:00
5ee3ea2144 round-up after testing of prefetches in stencil close 2020-06-03 11:58:20 +02:00
5050833b42 revert changes due to performance penalty in Wilson using MPI 2020-06-02 13:08:57 +02:00
7bee4ebb54 correct predication for svcadd 2020-06-02 10:51:39 +02:00
71cf9851e7 correct type for vecd in TimesI and TimesMinusI 2020-06-02 10:44:15 +02:00
b4735c9904 correct zero in svcadd 2020-06-02 10:38:05 +02:00
9b2699226c use fcadd in TimesI and TimesMinusI instead of tbl and neg 2020-06-02 10:32:44 +02:00
5f52804907 update calculation of data 2020-05-30 10:55:17 +02:00
936071773e correct throughput in wilson and dwf 2020-05-29 22:15:59 +02:00
1732f9319e more mods; counters seem to work correctly 2020-05-29 18:44:00 +02:00
91c81cab30 some corrections; compiles on my laptop; untested 2020-05-29 18:19:22 +02:00
38164f8480 include counters in WilsonFermionImplementation.h 2020-05-29 17:59:26 +02:00
f013979791 add counter support in WilsonFermion.h 2020-05-29 17:13:59 +02:00
e947b563ea add space in stencil output 2020-05-29 17:11:17 +02:00
5cb3530c34 enable counters in Benchmark_wilson 2020-05-29 15:44:52 +02:00
250008372f update SVE readme 2020-05-29 15:44:25 +02:00
4fedd8d29f switch to MPI_THREAD_SERIALIZED instead of SINGLE 2020-05-27 14:08:34 +02:00
6ddcef1bca fix build error enabling fcmla/mac in vector types for VLA 2020-05-21 21:21:03 +02:00
8c5a5fdfce disable fcmla in vector type building for VLA 2020-05-21 19:41:42 +02:00
046b1cbbc0 enable fcmla in tensor arithmetics; fixed-size works, VLA does not compile 2020-05-21 19:39:07 +02:00
a65ce237c1 clean up; Exch1 VLA sp+dp integrate, tested, working 2020-05-21 09:48:06 +02:00
cd27f1005d clean up; Exch1 sp integrate, tested, working 2020-05-21 08:45:43 +02:00
f8c0a59221 clean up; Exch1 dp integrate, tested, working 2020-05-21 02:48:14 +02:00
832485699f save some cycles in HtoD and DtoH by direct instead of multi-pass conversion 2020-05-20 23:04:35 +02:00
81484a4760 symmetrize Mult and MultAddComplex 2020-05-20 22:36:45 +02:00
9a86059761 symmetrize VLA and fixed size build messages 2020-05-20 20:05:42 +02:00
b780b7b7a0 guard prevents multiple TOFU messages 2020-05-20 19:20:59 +02:00
9e085bd04e guard prevents multiple A64FX build messages 2020-05-20 19:16:30 +02:00
6b6bf537d3 comment out mac in vector types 2020-05-18 20:36:16 +02:00
323a651c71 correct typo 2020-05-18 19:58:27 +02:00
9f212679f1 support fcmla in vector_types, untested 2020-05-18 19:55:18 +02:00
032f7dde1a update SVE readme, asm generator 2020-05-18 19:10:36 +02:00
50b1db1e8b implemented correct _m form (using 3 operands instead of 2) 2020-05-15 10:01:05 +02:00
015d8bb38a introduced assertions in Benchmark_wilson, removed data output from Benchmark_dwf 2020-05-15 09:15:50 +02:00
10a34312dc some fixed-size code clean up 2020-05-14 23:20:16 +02:00
db8c0e7584 replaced _x form with _m form when using even/odd predication 2020-05-14 23:17:35 +02:00
d15ccad8a7 switched to vec* in Reduce 2020-05-12 20:41:14 +02:00
0009b5cee8 updated SVE_README 2020-05-12 19:02:33 +02:00
20d1941a45 enabled asm kernels for fixed-size A64FXFIXEDSIZE 2020-05-12 19:01:12 +02:00
b7c76ede29 Removed some assertions in Test_simd and removed exit() in Reduce 2020-05-11 22:43:00 +02:00
05edf803bd corrected typo 2020-05-12 03:59:59 +09:00
78b8e40f83 switched to gcc's internal data types 2020-05-11 18:11:23 +02:00
fc2e9850d3 temporarily enable TOFU by default when using A64FX or A64FXFIXEDSIZE 2020-05-11 13:25:02 +02:00
ffaaed679e MPI_THREAD_SINGLE hack for Fugaku, enabled by -DTOFU 2020-05-11 13:21:39 +02:00
b2fd8b993a fixed-size clean up 2020-05-09 22:53:42 +02:00
291ee8c3d0 updated fixed-size implementation; only Exch1 and prefetches missing 2020-05-09 22:18:02 +02:00
e1a5b3ea49 unions for tables eliminate explicit loads, gcc does not complain 2020-05-09 21:21:57 +02:00
55a55660cb reverted changes 2020-05-09 12:48:42 +02:00
ceb8b374da API change v3 2020-05-08 15:04:44 +02:00
4bc2ad2894 API change v2 2020-05-08 15:00:25 +02:00
798af3e68f retry changing StoD API 2020-05-08 14:34:59 +02:00
b0ef2367f3 testing alternate call to PrecisionChange 2020-05-08 14:22:44 +02:00
71a7350a85 changed 2nd argument in Reduce to native vector type 2020-05-08 12:26:51 +02:00
6f79369955 trying to get rid of macro definition error 2020-05-08 12:19:24 +02:00
f9cb6b979f corrected more typos 2020-05-08 12:11:01 +02:00
ed4d9d17f8 corrected type 2020-05-08 12:09:22 +02:00
fbed02690d some changes in breaking out A64FX: use -DA64FXFIXEDSIZE for fixed size, but also define GEN 2020-05-08 12:05:31 +02:00
39f3ae5b1d corrected more types 2020-05-08 11:07:14 +02:00
e64bec8c8e pulled SVE typedefs out of Optimization 2020-05-08 11:04:21 +02:00
0893b4e552 fixed typos in PrecisionChange 2020-05-08 10:59:07 +02:00
92f0f29670 fixed double overloading vecf in Div, corrected typos 2020-05-08 10:57:23 +02:00
48a340a9d1 GEN seems to be defined by default -> some fixes applied 2020-05-08 10:47:49 +02:00
f45621109b placed typedefs in Optimization 2020-05-08 10:41:52 +02:00
32d1a0bbea added even more debug output 2020-05-08 10:39:26 +02:00
267cce66a1 added more debug output 2020-05-08 10:29:28 +02:00
3417147b11 added real fma, corrected typos in tbls; integrated, must supply A64FXGCC with GEN in configure 2020-05-08 10:20:19 +02:00
b338719bc8 first transition to fixed-size done, excl. Exch; next step: integration 2020-05-07 22:33:28 +02:00
2b81cbe2c2 first attempt to introduce tables using fixed-size; still incomplete 2020-05-07 22:01:19 +02:00
acff9d6ed2 transition to fixed size data types almost done; still incomplete 2020-05-07 21:24:07 +02:00
a306a49788 first mods for fixed size; still incomplete 2020-05-07 19:07:49 +02:00
7ef03c5368 updated SVE readme 2020-05-06 16:30:37 +02:00
5abec5b8a9 SVE_readme update, update Grid_vector_types.h 2020-04-25 13:48:26 +02:00
499edc0636 updated SVE_README.txt; defined ARMCLANGCOMPAT macro 2020-04-25 13:41:24 +02:00
d990e61be3 armclang 20.1 settings in SVE readme 2020-04-25 12:11:43 +02:00
3edb2dc2da removed -static from gcc CXXFLAGS 2020-04-24 13:04:34 +02:00
345721220e resolved merge conflict 2020-04-24 10:14:21 +02:00
6db68d6ecb added SVE configure for armclang and gcc 2020-04-24 10:10:47 +02:00
09f0963d1f changes in configure.ac ; to be verified 2020-04-23 11:27:03 +02:00
6f44e3c192 reverted changes in configure.ac ; included SVE configure readme 2020-04-23 11:18:50 +02:00
5893888f87 removed default no-strict-aliasing for gcc-10.0.1 exclusively 2020-04-22 19:29:55 +02:00
39b448affb Merge remote-tracking branch 'origin/develop' into feature/a64fx-2 2020-04-22 17:34:12 +02:00
e54a8f05a9 Exchange1 with generic version for now, should use svtbl2 in final version 2020-04-20 22:45:27 +02:00
64b72fc17f testing gcc 10.0.1: build errors in Exchange1 using -DA64FX and in Lattice_base.h building Dslash only 2020-04-19 01:25:40 +02:00
6fdce60492 revised BodyA64FX; 990 GiB/s Wilson, 687 GiB/s DW using intrinsics (armclang 20.0) 2020-04-16 22:43:32 +02:00
852db4626a re-introduced HOTFIX cause Grid binaries give wrong results otherwise; checked in good gridverter.py 2020-04-15 18:22:19 +02:00
6504a098cc 999 GiB/s Wilson; 694 GiB/s DW (DP) 2020-04-15 15:06:52 +02:00
79a385faca disabled armclang hotfix cause armclang 20.0 performance gets a little 2020-04-15 11:46:55 +02:00
c12a67030a 980 GiB/s Wilson; 680 GiB/s DW (DP) 2020-04-15 10:55:06 +02:00
581392f2f2 now with pf, best results so far using intrinsics+pf 2020-04-12 22:06:14 +02:00
113f277b6a enable dslash asm using -DA64FXASM, additionaly -DDSLASHINTRIN for intrinsics impl 2020-04-11 04:55:01 +02:00
974586bedc Dslash finally works; cleaned up; uses MOVPRFX in assembly 2020-04-10 22:26:40 +02:00
160f78c1e4 changed debug output to variable direct 3 2020-04-10 12:23:07 +02:00
7e4e1bbbc2 changed debug output to variable direct 2 2020-04-10 12:22:04 +02:00
e699b7e9f9 changed debug output to variable direct 2020-04-10 12:18:30 +02:00
a28bc0de90 debug register address test in WilsonHand 2020-04-10 12:07:45 +02:00
14d0fe4d6c added predication in WilsonHand 2020-04-10 12:04:00 +02:00
0ad2e0815c debug output in WilsonHand 2020-04-10 11:56:29 +02:00
1c8ca05e16 Merge branch 'feature/a64fx-2' of https://github.com/nmeyer-ur/Grid into feature/a64fx-2 2020-04-09 23:32:19 +02:00
dc9c8340bb switched to DSLASHINTRIN for A64FX Dslash intrinsics 2020-04-09 23:30:23 +02:00
19eef97503 specialized A64FX Dslash kernels 2020-04-09 23:25:25 +02:00
635246ce50 corrected typo 2020-04-09 21:42:50 +02:00
5cdbb7e71e fixed A64FX Dslash; compiles, but does not specialize -> assertion 2020-04-09 21:23:39 +02:00
8123590a1b changes 2020-04-09 16:45:47 +02:00
86c9c4da8b changes 2020-04-09 16:40:06 +02:00
cd1efee866 changes 2020-04-09 16:35:13 +02:00
bd310932f7 changes 2020-04-09 16:32:31 +02:00
304762e7ac changes 2020-04-09 16:26:01 +02:00
d79ab03a6c changes 2020-04-09 16:19:25 +02:00
d5708e0eb2 more changes 2020-04-09 15:43:34 +02:00
123f6b7a61 more changes 2020-04-09 15:17:19 +02:00
2b6457dd9a added xp/xm recon accum 2020-04-09 15:13:19 +02:00
b367cbd422 defined ADD_RESULT 2020-04-09 15:08:45 +02:00
e252c1aca3 addressing 2020-04-09 15:03:12 +02:00
b140c6a4f9 addressing 2020-04-09 15:01:15 +02:00
326de36467 revised sU addressing scheme 2020-04-09 14:44:25 +02:00
9f224a1647 fixed typo in single 2020-04-09 14:30:21 +02:00
bb46ba9b5f fixed array size in single 2020-04-09 14:28:45 +02:00
dd5a22b36b revised declarations 2020-04-09 14:21:27 +02:00
1ea85b9972 Disabled build message 2020-04-09 13:47:21 +02:00
8fb63f1c25 added A64FX Wilson kernels single precision 2020-04-09 13:41:04 +02:00
77fa586f6c introduced A64FX Wilson kernels 2020-04-09 13:30:06 +02:00
15238e8d5e reduce acle works, clean up 2020-04-03 20:40:44 +02:00
b27e31957a reduce acle revised 2020-04-03 19:46:15 +02:00
46927771e3 reduce acle still needs overhaul 2020-04-03 19:30:48 +02:00
d8cea77707 define simd width in header 2020-04-03 19:22:25 +02:00
5f8a76d490 clean up, reduction in acle 2020-04-03 19:18:24 +02:00
28d49a3b60 build problem resolved 2020-04-03 16:52:48 +02:00
b4c624ece6 added A64FX support 2020-04-03 15:43:23 +02:00
101 changed files with 12800 additions and 1775 deletions

View File

@@ -42,7 +42,7 @@
 #ifdef __NVCC__REDEFINE__
 #pragma pop_macro("__CUDACC__")
 #pragma pop_macro("__NVCC__")
-#pragma pop_macro("GRID_SIMT")
+#pragma pop_macro("__CUDA_ARCH__")
 #pragma pop
 #endif
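For readers unfamiliar with the idiom, a minimal sketch of the push/pop pattern this hunk relies on (the include is hypothetical, not Grid's actual header): pop_macro must name exactly the macro that was pushed, which is why the stale "GRID_SIMT" pop is corrected to "__CUDA_ARCH__".

    // Minimal sketch, assuming a third-party header that misbehaves when it
    // sees the device-compile macro: save it, hide it, include, restore it.
    #pragma push_macro("__CUDA_ARCH__")   // save the current definition
    #undef  __CUDA_ARCH__                 // present a host-only view
    #include "third_party_header.h"       // hypothetical include
    #pragma pop_macro("__CUDA_ARCH__")    // restore; name must match the push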

View File

@@ -49,13 +49,11 @@ inline void blockMaskedInnerProduct(Lattice<CComplex> &CoarseInner,
   Lattice<dotp> fine_inner_msk(fine);
   // Multiply could be fused with innerProduct
-  // Single block sum kernel could do both masks.
   fine_inner = localInnerProduct(fineX,fineY);
   mult(fine_inner_msk, fine_inner,FineMask);
   blockSum(CoarseInner,fine_inner_msk);
 }
 class Geometry {
 public:
   int npoint;
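In equation form, blockMaskedInnerProduct computes per coarse block B (a sketch inferred from the calls above, with m the FineMask):

$$ \mathrm{CoarseInner}(B) \;=\; \sum_{x \in B} m(x)\,\langle \mathrm{fineX}(x), \mathrm{fineY}(x) \rangle $$

localInnerProduct supplies the site-wise inner product, mult applies the mask, and blockSum accumulates it over each block.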
@@ -80,8 +78,12 @@
   }
   directions [2*_d]=0;
   displacements[2*_d]=0;
 }
+std::cout <<GridLogMessage << "Geometry "<<std::endl;
+for(int p=0;p<npoint;p++){
+  std::cout <<GridLogMessage << "point " <<p<<" dir "<<directions[p]<<" delta " <<displacements[p]<<std::endl;
+}
 };
 template<class Fobj,class CComplex,int nbasis>
@@ -102,8 +104,8 @@
 Aggregation(GridBase *_CoarseGrid,GridBase *_FineGrid,int _checkerboard) :
   CoarseGrid(_CoarseGrid),
   FineGrid(_FineGrid),
-  subspace(nbasis,_FineGrid),
-  checkerboard(_checkerboard)
+  checkerboard(_checkerboard),
+  subspace(nbasis,_FineGrid)
 {
 };
@@ -285,6 +287,8 @@
 ///////////////////////
 GridBase * Grid(void) { return _grid; };   // this is all the linalg routines need to know
+virtual std::vector<int> Directions(void)   { return geom.directions; };
+virtual std::vector<int> Displacements(void){ return geom.displacements; };
 void M (const CoarseVector &in, CoarseVector &out)
 {
   conformable(_grid,in.Grid());
@@ -308,6 +312,9 @@
 int osites=Grid()->oSites();
+autoView(st,Stencil,AcceleratorRead);
+siteVector *CBp=Stencil.CommBuf();
 accelerator_for(sss, Grid()->oSites()*nbasis, Nsimd, {
   int ss = sss/nbasis;
   int b = sss%nbasis;
@@ -318,12 +325,12 @@
 for(int point=0;point<geom.npoint;point++){
-  SE=Stencil.GetEntry(ptype,point,ss);
+  SE=st.GetEntry(ptype,point,ss);
   if(SE->_is_local) {
     nbr = coalescedReadPermute(in_v[SE->_offset],ptype,SE->_permute);
   } else {
-    nbr = coalescedRead(Stencil.CommBuf()[SE->_offset]);
+    nbr = coalescedRead(CBp[SE->_offset]);
   }
   acceleratorSynchronise();
@@ -409,38 +416,23 @@
     MdirCalc(in,out[p],p);
   }
 };
-void Mdir(const CoarseVector &in, CoarseVector &out, int dir, int disp){
+void Mdir(const CoarseVector &in, CoarseVector &out, int dir, int disp)
+{
   this->MdirComms(in);
   int ndim = in.Grid()->Nd();
-  //////////////
-  // 4D action like wilson
-  // 0+ => 0
-  // 0- => 1
-  // 1+ => 2
-  // 1- => 3
-  // etc..
-  //////////////
-  // 5D action like DWF
-  // 1+ => 0
-  // 1- => 1
-  // 2+ => 2
-  // 2- => 3
-  // etc..
-  auto point = [dir, disp, ndim](){
-    if(dir == 0 and disp == 0)
-      return 8;
-    else if ( ndim==4 ) {
-      return (4 * dir + 1 - disp) / 2;
-    } else {
-      return (4 * (dir-1) + 1 - disp) / 2;
-    }
-  }();
+  int point=-1;
+  for(int p=0;p<geom.npoint;p++){
+    if( (dir==geom.directions[p])&&(disp==geom.displacements[p])) point=p;
+  }
+  assert(point!=-1);// Must find
+  std::cout <<GridLogMessage << "Mdir point "<<point<<" dir "<<dir<<" disp "<<disp <<std::endl;
+  for(int p=0;p<geom.npoint;p++){
+    std::cout <<GridLogMessage << "point " <<p<<" dir "<<geom.directions[p]<<" delta " <<geom.displacements[p]<<std::endl;
+  }
   MdirCalc(in,out,point);
 };
 void Mdiag(const CoarseVector &in, CoarseVector &out)
@@ -460,6 +452,54 @@
 {
 };
+void Test(Aggregation<Fobj,CComplex,nbasis> &_Aggregates,GridBase *FineGrid,LinearOperatorBase<Lattice<Fobj> > &linop)
+{
+  typedef Lattice<Fobj> FineField;
+  CoarseVector Cin(_grid);
+  CoarseVector Cout(_grid);
+  CoarseVector CFout(_grid);
+  FineField Fin(FineGrid);
+  FineField Fout(FineGrid);
+  std::vector<int> seeds({1,2,3,4,5});
+  GridParallelRNG RNG(_grid); RNG.SeedFixedIntegers(seeds);
+  gaussian(RNG,Cin);
+  _Aggregates.PromoteFromSubspace(Cin,Fin);
+  _Aggregates.ProjectToSubspace(Cin,Fin);
+  std::cout << GridLogMessage<< "************ "<<std::endl;
+  std::cout << GridLogMessage<< " Testing M "<<std::endl;
+  std::cout << GridLogMessage<< "************ "<<std::endl;
+  // Coarse operator
+  this->M(Cin,Cout);
+  // Fine projected operator
+  _Aggregates.PromoteFromSubspace(Cin,Fin);
+  linop.Op(Fin,Fout);
+  _Aggregates.ProjectToSubspace(CFout,Fout);
+  CFout = CFout-Cout;
+  RealD diff = norm2(CFout);
+  std::cout << GridLogMessage<< " diff "<<diff<<std::endl;
+  assert(diff<1.0e-5);
+  std::cout << GridLogMessage<< "************ "<<std::endl;
+  std::cout << GridLogMessage<< " Testing Mdag "<<std::endl;
+  std::cout << GridLogMessage<< "************ "<<std::endl;
+  // Coarse operator
+  Mdag(Cin,Cout);
+  // Fine operator
+  linop.AdjOp(Fin,Fout);
+  _Aggregates.ProjectToSubspace(CFout,Fout);
+  CFout = CFout-Cout;
+  diff = norm2(CFout);
+  std::cout << GridLogMessage<< " diff "<<diff<<std::endl;
+  assert(diff<1.0e-5);
+}
 void CoarsenOperator(GridBase *FineGrid,LinearOperatorBase<Lattice<Fobj> > &linop,
                      Aggregation<Fobj,CComplex,nbasis> & Subspace)
 {
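The new Test routine checks the Galerkin property of the coarse operator. Schematically, with P the prolongator (PromoteFromSubspace), R the restrictor (ProjectToSubspace) and A the fine operator linop, the assertions demand, for a Gaussian random coarse vector c (norm2 being Grid's squared norm):

$$ \| R\,A\,P\,c - M_c\, c \|^2 < 10^{-5}, \qquad \| R\,A^{\dagger} P\,c - M_c^{\dagger} c \|^2 < 10^{-5} . $$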
@@ -496,8 +536,19 @@
 CoarseScalar InnerProd(Grid());
+std::cout << GridLogMessage<< "CoarsenMatrix Orthog " << std::endl;
 // Orthogonalise the subblocks over the basis
 blockOrthogonalise(InnerProd,Subspace.subspace);
+std::cout << GridLogMessage<< "CoarsenMatrix Orthog done " << std::endl;
+auto OpDirections    = linop.Directions();
+auto OpDisplacements = linop.Displacements();
+std::cout<<" Coarsening an operator with "<< OpDirections.size()<<" terms "<<std::endl;
+for(int p=0;p<OpDirections.size();p++) {
+  assert(OpDirections[p]==geom.directions[p]);
+  assert(OpDisplacements[p]==geom.displacements[p]);
+}
 // Compute the matrix elements of linop between this orthonormal
 // set of vectors.
@@ -533,13 +584,27 @@
 evenmask = where(mod(bcb,2)==(Integer)0,one,zero);
 oddmask  = one-evenmask;
+/*
+{
+  phi=Subspace.subspace[0];
+  linop.OpDirAll(phi,Mphi_p);
+  for(int p=0;p<geom.npoint-1;p++){
+    int dir=geom.directions[p];
+    int disp=geom.displacements[p];
+    linop.OpDir(phi,Mphi,dir,disp);
+    Mphi=Mphi-Mphi_p[p];
+    std::cout << GridLogMessage <<" Direction mapping check " <<norm2(Mphi)<<std::endl;
+  }
+}
+*/
 assert(self_stencil!=-1);
+int lhermitian=hermitian;
 for(int i=0;i<nbasis;i++){
   phi=Subspace.subspace[i];
-  // std::cout << GridLogMessage<< "CoarsenMatrix vector "<<i << std::endl;
+  std::cout << GridLogMessage<< "CoarsenMatrix vector "<<i << std::endl;
   linop.OpDirAll(phi,Mphi_p);
   linop.OpDiag (phi,Mphi_p[geom.npoint-1]);
@@ -550,7 +615,7 @@
 int dir  = geom.directions[p];
 int disp = geom.displacements[p];
-if ( (disp==-1) || (!hermitian ) ) {
+if ( (disp==-1) || (!lhermitian ) ) {
 ////////////////////////////////////////////////////////////////////////
 // Pick out contributions coming from this cell and neighbour cell
@@ -568,11 +633,23 @@
 autoView( A_self , A[self_stencil], AcceleratorWrite);
 accelerator_for(ss, Grid()->oSites(), Fobj::Nsimd(),{ coalescedWrite(A_p[ss](j,i),oZProj_v(ss)); });
+if ( lhermitian && (disp==-1) ) {
+  for(int pp=0;pp<geom.npoint;pp++){// Find the opposite link and set <j|A|i> = <i|A|j>*
+    int dirp  = geom.directions[pp];
+    int dispp = geom.displacements[pp];
+    if ( (dirp==dir) && (dispp==1) ){
+      auto sft = conjugate(Cshift(oZProj,dir,1));
+      autoView( sft_v , sft , AcceleratorWrite);
+      autoView( A_pp , A[pp], AcceleratorWrite);
+      accelerator_for(ss, Grid()->oSites(), Fobj::Nsimd(),{ coalescedWrite(A_pp[ss](i,j),sft_v(ss)); });
+    }
+  }
+}
 }
 }
 }
+std::cout << GridLogMessage<< "CoarsenMatrix Diag "<<std::endl;
 ///////////////////////////////////////////
 // Faster alternate self coupling.. use hermiticity to save 2x
 ///////////////////////////////////////////
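The added branch enforces hermiticity link by link as the matrix elements are produced. With mu the direction of stencil point p and pp its oppositely displaced partner, the shifted-conjugate write implements (a sketch of the relation):

$$ A_{pp}(x)_{ij} \;=\; \big[ A_{p}(x+\hat{\mu})_{ji} \big]^{*}, \qquad\text{i.e.}\quad A_{pp}(x) = A_{p}(x+\hat{\mu})^{\dagger}, $$

which the removed ForceHermitian member (see the next hunk) used to apply in a separate sweep over the stencil.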
@@ -604,31 +681,35 @@
     }
   }
-  if(hermitian) {
-    std::cout << GridLogMessage << " ForceHermitian, new code "<<std::endl;
-    ForceHermitian();
-  }
+  MemoryManager::PrintBytes();
+  // Auto self test
+  Test( Subspace,FineGrid,linop);
+#if 0
+  ///////////////////////////
+  // test code worth preserving in if block
+  ///////////////////////////
+  std::cout<<GridLogMessage<< " Computed matrix elements "<< self_stencil <<std::endl;
+  for(int p=0;p<geom.npoint;p++){
+    std::cout<<GridLogMessage<< "A["<<p<<"]" << std::endl;
+    std::cout<<GridLogMessage<< "\n"<<A[p] << std::endl;
+  }
+  std::cout<<GridLogMessage<< " picking by block0 "<< self_stencil <<std::endl;
+  phi=Subspace.subspace[0];
+  std::vector<int> bc(FineGrid->_ndimension,0);
+  blockPick(Grid(),phi,tmp,bc);                    // Pick out a block
+  linop.Op(tmp,Mphi);                              // Apply big dop
+  blockProject(iProj,Mphi,Subspace.subspace);      // project it and print it
+  std::cout<<GridLogMessage<< " Computed matrix elements from block zero only "<<std::endl;
+  std::cout<<GridLogMessage<< iProj <<std::endl;
+  std::cout<<GridLogMessage<<"Computed Coarse Operator"<<std::endl;
+#endif
 }
-void ForceHermitian(void) {
-  CoarseMatrix Diff (Grid());
-  for(int p=0;p<geom.npoint;p++){
-    int dir  = geom.directions[p];
-    int disp = geom.displacements[p];
-    if(disp==-1) {
-      // Find the opposite link
-      for(int pp=0;pp<geom.npoint;pp++){
-        int dirp  = geom.directions[pp];
-        int dispp = geom.displacements[pp];
-        if ( (dirp==dir) && (dispp==1) ){
-          // Diff = adj(Cshift(A[p],dir,1)) - A[pp];
-          // std::cout << GridLogMessage<<" Replacing stencil leg "<<pp<<" with leg "<<p<< " diff "<<norm2(Diff) <<std::endl;
-          A[pp] = adj(Cshift(A[p],dir,1));
-        }
-      }
-    }
-  }
-}
 };
 NAMESPACE_END(Grid);

View File

@@ -52,6 +52,9 @@
 virtual void AdjOp (const Field &in, Field &out) = 0; // Abstract base
 virtual void HermOpAndNorm(const Field &in, Field &out,RealD &n1,RealD &n2)=0;
 virtual void HermOp(const Field &in, Field &out)=0;
+virtual std::vector<int> Directions(void)   =0;
+virtual std::vector<int> Displacements(void)=0;
 };
@@ -76,6 +79,9 @@ class MdagMLinearOperator : public LinearOperatorBase<Field> {
 public:
 MdagMLinearOperator(Matrix &Mat): _Mat(Mat){};
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 // Support for coarsening to a multigrid
 void OpDiag (const Field &in, Field &out) {
   _Mat.Mdiag(in,out);
@@ -111,6 +117,8 @@ class ShiftedMdagMLinearOperator : public LinearOperatorBase<Field> {
 Matrix &_Mat;
 RealD _shift;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 ShiftedMdagMLinearOperator(Matrix &Mat,RealD shift): _Mat(Mat), _shift(shift){};
 // Support for coarsening to a multigrid
 void OpDiag (const Field &in, Field &out) {
@@ -151,6 +159,8 @@ template<class Matrix,class Field>
 class HermitianLinearOperator : public LinearOperatorBase<Field> {
 Matrix &_Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 HermitianLinearOperator(Matrix &Mat): _Mat(Mat){};
 // Support for coarsening to a multigrid
 void OpDiag (const Field &in, Field &out) {
@@ -182,6 +192,8 @@ template<class Matrix,class Field>
 class NonHermitianLinearOperator : public LinearOperatorBase<Field> {
 Matrix &_Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 NonHermitianLinearOperator(Matrix &Mat): _Mat(Mat){};
 // Support for coarsening to a multigrid
 void OpDiag (const Field &in, Field &out) {
@@ -255,6 +267,8 @@ template<class Matrix,class Field>
 class SchurDiagMooeeOperator : public SchurOperatorBase<Field> {
 public:
 Matrix &_Mat;
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 SchurDiagMooeeOperator (Matrix &Mat): _Mat(Mat){};
 virtual void Mpc (const Field &in, Field &out) {
   Field tmp(in.Grid());
@@ -281,6 +295,8 @@ template<class Matrix,class Field>
 protected:
 Matrix &_Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 SchurDiagOneOperator (Matrix &Mat): _Mat(Mat){};
 virtual void Mpc (const Field &in, Field &out) {
@@ -307,6 +323,8 @@ template<class Matrix,class Field>
 protected:
 Matrix &_Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 SchurDiagTwoOperator (Matrix &Mat): _Mat(Mat){};
 virtual void Mpc (const Field &in, Field &out) {
@@ -372,6 +390,8 @@ class NonHermitianSchurDiagMooeeOperator : public NonHermitianSchurOperatorBase
 {
 public:
 Matrix& _Mat;
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 NonHermitianSchurDiagMooeeOperator(Matrix& Mat): _Mat(Mat){};
 virtual void Mpc(const Field& in, Field& out) {
   Field tmp(in.Grid());
@@ -405,6 +425,8 @@ class NonHermitianSchurDiagOneOperator : public NonHermitianSchurOperatorBase<Fi
 Matrix &_Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 NonHermitianSchurDiagOneOperator (Matrix& Mat): _Mat(Mat){};
 virtual void Mpc(const Field& in, Field& out) {
   Field tmp(in.Grid());
@@ -435,6 +457,8 @@ class NonHermitianSchurDiagTwoOperator : public NonHermitianSchurOperatorBase<Fi
 Matrix& _Mat;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 NonHermitianSchurDiagTwoOperator(Matrix& Mat): _Mat(Mat){};
 virtual void Mpc(const Field& in, Field& out) {
@@ -475,6 +499,8 @@ class SchurStaggeredOperator : public SchurOperatorBase<Field> {
 Field tmp;
 RealD mass;
 public:
+virtual std::vector<int> Directions(void)   { return _Mat.Directions();};
+virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
 SchurStaggeredOperator (Matrix &Mat): _Mat(Mat), tmp(_Mat.RedBlackGrid())
 {
   assert( _Mat.isTrivialEE() );

View File

@@ -48,6 +48,8 @@
 virtual void Mdiag   (const Field &in, Field &out)=0;
 virtual void Mdir    (const Field &in, Field &out,int dir, int disp)=0;
 virtual void MdirAll (const Field &in, std::vector<Field> &out)=0;
+virtual std::vector<int> Directions(void)   =0;
+virtual std::vector<int> Displacements(void)=0;
 };
 /////////////////////////////////////////////////////////////////////////////////////////////
@@ -73,6 +75,8 @@
 virtual void MooeeDag    (const Field &in, Field &out)=0;
 virtual void MooeeInvDag (const Field &in, Field &out)=0;
+virtual std::vector<int> Directions(void)   =0;
+virtual std::vector<int> Displacements(void)=0;
 };
 NAMESPACE_END(Grid);
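To make the new pure virtuals concrete, a minimal sketch of a matrix class satisfying them (the class is hypothetical; CoarsenedMatrix returns its geom.directions and geom.displacements the same way, so CoarsenOperator can cross-check the stencil geometry):

    // Hypothetical sketch: a stencil operator advertising its geometry.
    #include <vector>
    struct MyStencilOp {
      // e.g. a 4-d nearest-neighbour stencil: +/- hop per dimension, plus a self point
      std::vector<int> dirs  = {0,0,1,1,2,2,3,3,0};
      std::vector<int> disps = {1,-1,1,-1,1,-1,1,-1,0};
      std::vector<int> Directions(void)   { return dirs;  }
      std::vector<int> Displacements(void){ return disps; }
    };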

View File

@@ -28,6 +28,7 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
 #ifndef GRID_ALGORITHMS_ITERATIVE_GENERIC_PCG
 #define GRID_ALGORITHMS_ITERATIVE_GENERIC_PCG
+NAMESPACE_BEGIN(Grid);
 /*
  * Compared to Tang-2009: P=Pleft. P^T = PRight Q=MssInv.
  * Script A = SolverMatrix
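In the Tang-2009 notation of this header, the preconditioner that M1 below applies is the ADEF-2 combination (a sketch consistent with the code: M the smoother, A the fine operator, and Q = P A_c^{-1} P^\dagger the coarse correction built from the aggregates):

$$ M_1(r) \;=\; M r \;+\; Q\,( r - A M r ). $$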
@@ -50,53 +51,54 @@
  * Vout = x
  */
-// abstract base
-template<class Field, class CoarseField>
+template<class Field, class CoarseField, class Aggregates>
 class TwoLevelFlexiblePcg : public LinearFunction<Field>
 {
 public:
 int verbose;
 RealD Tolerance;
 Integer MaxIterations;
-const int mmax = 5;
-GridBase *grid;
-GridBase *coarsegrid;
+const int mmax = 4;
+GridBase *FineGrid;
+GridBase *CoarseGrid;
-LinearOperatorBase<Field> *_Linop
-OperatorFunction<Field> *_Smoother,
-LinearFunction<CoarseField> *_CoarseSolver;
+LinearOperatorBase<Field> &_Linop;
+LinearFunction<Field> &_Smoother;
+LinearFunction<CoarseField> &_CoarseSolver;
+Aggregates &_Aggregates;
-// Need somthing that knows how to get from Coarse to fine and back again
 // more most opertor functions
 TwoLevelFlexiblePcg(RealD tol,
                     Integer maxit,
                     LinearOperatorBase<Field> *Linop,
-                    LinearOperatorBase<Field> *SmootherLinop,
-                    OperatorFunction<Field> *Smoother,
-                    OperatorFunction<CoarseField> CoarseLinop
+                    LinearFunction<Field> *Smoother,
+                    LinearFunction<CoarseField> *CoarseSolver,
+                    Aggregates *AggP
                     ) :
   Tolerance(tol),
   MaxIterations(maxit),
-  _Linop(Linop),
-  _PreconditionerLinop(PrecLinop),
-  _Preconditioner(Preconditioner)
+  _Linop(*Linop),
+  _Smoother(*Smoother),
+  _CoarseSolver(*CoarseSolver),
+  _Aggregates(*AggP)
 {
+  CoarseGrid=_Aggregates.CoarseGrid;
+  FineGrid=_Aggregates.FineGrid;
   verbose=0;
 };
 // The Pcg routine is common to all, but the various matrices differ from derived
 // implementation to derived implmentation
-void operator() (const Field &src, Field &psi){
 void operator() (const Field &src, Field &psi){
   psi.Checkerboard() = src.Checkerboard();
-  grid = src.Grid();
-  RealD f;
   RealD rtzp,rtz,a,d,b;
-  RealD rptzp;
-  RealD tn;
+  // RealD rptzp;
+  // RealD tn;
   RealD guess = norm2(psi);
   RealD ssq = norm2(src);
   RealD rsq = ssq*Tolerance*Tolerance;
@@ -104,15 +106,15 @@
   /////////////////////////////
   // Set up history vectors
   /////////////////////////////
-  std::vector<Field> p  (mmax,grid);
-  std::vector<Field> mmp(mmax,grid);
+  std::vector<Field> p  (mmax,FineGrid);
+  std::vector<Field> mmp(mmax,FineGrid);
   std::vector<RealD> pAp(mmax);
-  Field x  (grid); x = psi;
-  Field z  (grid);
-  Field tmp(grid);
-  Field r  (grid);
-  Field mu (grid);
+  Field x  (FineGrid); x = psi;
+  Field z  (FineGrid);
+  Field tmp(FineGrid);
+  Field r  (FineGrid);
+  Field mu (FineGrid);
   //////////////////////////
   // x0 = Vstart -- possibly modify guess
@@ -121,13 +123,13 @@
   Vstart(x,src);
   // r0 = b -A x0
-  HermOp(x,mmp); // Shouldn't this be something else?
+  _Linop.HermOp(x,mmp[0]); // Shouldn't this be something else?
   axpy (r, -1.0,mmp[0], src);    // Recomputes r=src-Ax0
   //////////////////////////////////
   // Compute z = M1 x
   //////////////////////////////////
-  M1(r,z,tmp,mp,SmootherMirs);
+  M1(r,z);
   rtzp =real(innerProduct(r,z));
   ///////////////////////////////////////
@@ -143,7 +145,7 @@
   int peri_kp = (k+1) % mmax;
   rtz=rtzp;
-  d= M3(p[peri_k],mp,mmp[peri_k],tmp);
+  d= M3(p[peri_k],mmp[peri_k]);
   a = rtz/d;
   // Memorise this
@@ -153,13 +155,13 @@
   RealD rn = axpy_norm(r,-a,mmp[peri_k],r);
   // Compute z = M x
-  M1(r,z,tmp,mp);
+  M1(r,z);
   rtzp =real(innerProduct(r,z));
   M2(z,mu); // ADEF-2 this is identity. Axpy possible to eliminate
-  p[peri_kp]=p[peri_k];
+  p[peri_kp]=mu;
   // Standard search direction p -> z + b p ; b =
   b = (rtzp)/rtz;
@@ -181,7 +183,7 @@
   // Stopping condition
   if ( rn <= rsq ) {
-    HermOp(x,mmp); // Shouldn't this be something else?
+    _Linop.HermOp(x,mmp[0]); // Shouldn't this be something else?
     axpy(tmp,-1.0,src,mmp[0]);
     RealD psinorm = sqrt(norm2(x));
@@ -190,7 +192,8 @@
     RealD true_residual = tmpnorm/srcnorm;
     std::cout<<GridLogMessage<<"TwoLevelfPcg: true residual is "<<true_residual<<std::endl;
     std::cout<<GridLogMessage<<"TwoLevelfPcg: target residual was"<<Tolerance<<std::endl;
-    return k;
+    return;
   }
 }
 // Non-convergence
@@ -199,48 +202,40 @@
 public:
-virtual void M(Field & in,Field & out,Field & tmp) {
-}
-virtual void M1(Field & in, Field & out) {// the smoother
+virtual void M1(Field & in, Field & out)
+{// the smoother
   // [PTM+Q] in = [1 - Q A] M in + Q in = Min + Q [ in -A Min]
-  Field tmp(grid);
-  Field Min(grid);
+  Field tmp(FineGrid);
+  Field Min(FineGrid);
+  CoarseField PleftProj(CoarseGrid);
+  CoarseField PleftMss_proj(CoarseGrid);
-  PcgM(in,Min); // Smoother call
-  HermOp(Min,out);
+  _Smoother(in,Min); // Smoother call
+  _Linop.HermOp(Min,out);
   axpy(tmp,-1.0,out,in);          // tmp = in - A Min
-  ProjectToSubspace(tmp,PleftProj);
-  ApplyInverse(PleftProj,PleftMss_proj);  // Ass^{-1} [in - A Min]_s
-  PromoteFromSubspace(PleftMss_proj,tmp); // tmp = Q[in - A Min]
+  _Aggregates.ProjectToSubspace(PleftProj,tmp);
+  _CoarseSolver(PleftProj,PleftMss_proj);             // Ass^{-1} [in - A Min]_s
+  _Aggregates.PromoteFromSubspace(PleftMss_proj,tmp); // tmp = Q[in - A Min]
   axpy(out,1.0,Min,tmp);          // Min+tmp
 }
-virtual void M2(const Field & in, Field & out) {
+virtual void M2(const Field & in, Field & out)
+{
   out=in;
-  // Must override for Def2 only
-  //  case PcgDef2:
-  //    Pright(in,out);
-  //    break;
 }
-virtual RealD M3(const Field & p, Field & mmp){
+virtual RealD M3(const Field & p, Field & mmp)
+{
   double d,dd;
-  HermOpAndNorm(p,mmp,d,dd);
+  _Linop.HermOpAndNorm(p,mmp,d,dd);
   return dd;
-  // Must override for Def1 only
-  //  case PcgDef1:
-  //    d=linop_d->Mprec(p,mmp,tmp,0,1);// Dag no
-  //    linop_d->Mprec(mmp,mp,tmp,1);// Dag yes
-  //    Pleft(mp,mmp);
-  //    d=real(linop_d->inner(p,mmp));
 }
-virtual void VstartDef2(Field & xconst Field & src){
+virtual void Vstart(Field & x,const Field & src)
+{
   //case PcgDef2:
   //case PcgAdef2:
   //case PcgAdef2f:
@@ -256,142 +251,79 @@
   //                   = src_s - (A guess)_s - src_s + (A guess)_s
   //                   = 0
   ///////////////////////////////////
-  Field r(grid);
-  Field mmp(grid);
+  Field r(FineGrid);
+  Field mmp(FineGrid);
+  CoarseField PleftProj(CoarseGrid);
+  CoarseField PleftMss_proj(CoarseGrid);
-  HermOp(x,mmp);
+  _Linop.HermOp(x,mmp);
   axpy (r, -1.0, mmp, src);        // r_{-1} = src - A x
-  ProjectToSubspace(r,PleftProj);
-  ApplyInverseCG(PleftProj,PleftMss_proj); // Ass^{-1} r_s
-  PromoteFromSubspace(PleftMss_proj,mmp);
+  _Aggregates.ProjectToSubspace(PleftProj,r);
+  _CoarseSolver(PleftProj,PleftMss_proj); // Ass^{-1} r_s
+  _Aggregates.PromoteFromSubspace(PleftMss_proj,mmp);
   x=x+mmp;
 }
-virtual void Vstart(Field & x,const Field & src){
-  return;
-}
 /////////////////////////////////////////////////////////////////////
 // Only Def1 has non-trivial Vout. Override in Def1
 /////////////////////////////////////////////////////////////////////
 virtual void Vout (Field & in, Field & out,Field & src){
   out = in;
-  //case PcgDef1:
-  //   //Qb + PT x
-  //   ProjectToSubspace(src,PleftProj);
-  //   ApplyInverse(PleftProj,PleftMss_proj); // Ass^{-1} r_s
-  //   PromoteFromSubspace(PleftMss_proj,tmp);
-  //
-  //   Pright(in,out);
-  //
-  //   linop_d->axpy(out,tmp,out,1.0);
-  //   break;
 }
 ////////////////////////////////////////////////////////////////////////////////////////////////
 // Pright and Pleft are common to all implementations
 ////////////////////////////////////////////////////////////////////////////////////////////////
-virtual void Pright(Field & in,Field & out){
+virtual void Pright(Field & in,Field & out)
+{
   // P_R = [ 1           0 ]
   //       [ -Mss^-1 Msb 0 ]
-  Field in_sbar(grid);
+  Field in_sbar(FineGrid);
+  CoarseField PleftProj(CoarseGrid);
+  CoarseField PleftMss_proj(CoarseGrid);
-  ProjectToSubspace(in,PleftProj);
-  PromoteFromSubspace(PleftProj,out);
+  _Aggregates.ProjectToSubspace(PleftProj,in);
+  _Aggregates.PromoteFromSubspace(PleftProj,out);
   axpy(in_sbar,-1.0,out,in);       // in_sbar = in - in_s
-  HermOp(in_sbar,out);
-  ProjectToSubspace(out,PleftProj);           // Mssbar in_sbar (project)
-  ApplyInverse (PleftProj,PleftMss_proj);     // Mss^{-1} Mssbar
-  PromoteFromSubspace(PleftMss_proj,out);     //
+  _Linop.HermOp(in_sbar,out);
+  _Aggregates.ProjectToSubspace(PleftProj,out);       // Mssbar in_sbar (project)
+  _CoarseSolver(PleftProj,PleftMss_proj);             // Mss^{-1} Mssbar
+  _Aggregates.PromoteFromSubspace(PleftMss_proj,out); //
   axpy(out,-1.0,out,in_sbar);      // in_sbar - Mss^{-1} Mssbar in_sbar
 }
-virtual void Pleft (Field & in,Field & out){
+virtual void Pleft (Field & in,Field & out)
+{
   // P_L = [ 1 -Mbs Mss^-1]
   //       [ 0  0         ]
-  Field in_sbar(grid);
-  Field tmp2(grid);
-  Field Mtmp(grid);
+  Field in_sbar(FineGrid);
+  Field tmp2(FineGrid);
+  Field Mtmp(FineGrid);
+  CoarseField PleftProj(CoarseGrid);
+  CoarseField PleftMss_proj(CoarseGrid);
-  ProjectToSubspace(in,PleftProj);
-  PromoteFromSubspace(PleftProj,out);
+  _Aggregates.ProjectToSubspace(PleftProj,in);
+  _Aggregates.PromoteFromSubspace(PleftProj,out);
   axpy(in_sbar,-1.0,out,in);      // in_sbar = in - in_s
-  ApplyInverse(PleftProj,PleftMss_proj); // Mss^{-1} in_s
-  PromoteFromSubspace(PleftMss_proj,out);
-  HermOp(out,Mtmp);
+  _CoarseSolver(PleftProj,PleftMss_proj); // Mss^{-1} in_s
+  _Aggregates.PromoteFromSubspace(PleftMss_proj,out);
+  _Linop.HermOp(out,Mtmp);
-  ProjectToSubspace(Mtmp,PleftProj);      // Msbar s Mss^{-1}
-  PromoteFromSubspace(PleftProj,tmp2);
+  _Aggregates.ProjectToSubspace(PleftProj,Mtmp);      // Msbar s Mss^{-1}
+  _Aggregates.PromoteFromSubspace(PleftProj,tmp2);
   axpy(out,-1.0,tmp2,Mtmp);
   axpy(out,-1.0,out,in_sbar);     // in_sbar - Msbars Mss^{-1} in_s
 }
-}
+};
+NAMESPACE_END(Grid);
-template<class Field>
-class TwoLevelFlexiblePcgADef2 : public TwoLevelFlexiblePcg<Field> {
- public:
-  virtual void M(Field & in,Field & out,Field & tmp){
-  }
-  virtual void M1(Field & in, Field & out,Field & tmp,Field & mp){
-  }
-  virtual void M2(Field & in, Field & out){
-  }
-  virtual RealD M3(Field & p, Field & mp,Field & mmp, Field & tmp){
-  }
-  virtual void Vstart(Field & in, Field & src, Field & r, Field & mp, Field & mmp, Field & tmp){
-  }
-}
-/*
-template<class Field>
-class TwoLevelFlexiblePcgAD : public TwoLevelFlexiblePcg<Field> {
- public:
-  virtual void M(Field & in,Field & out,Field & tmp);
-  virtual void M1(Field & in, Field & out,Field & tmp,Field & mp);
-  virtual void M2(Field & in, Field & out);
-  virtual RealD M3(Field & p, Field & mp,Field & mmp, Field & tmp);
-  virtual void Vstart(Field & in, Field & src, Field & r, Field & mp, Field & mmp, Field & tmp);
-}
-template<class Field>
-class TwoLevelFlexiblePcgDef1 : public TwoLevelFlexiblePcg<Field> {
- public:
-  virtual void M(Field & in,Field & out,Field & tmp);
-  virtual void M1(Field & in, Field & out,Field & tmp,Field & mp);
-  virtual void M2(Field & in, Field & out);
-  virtual RealD M3(Field & p, Field & mp,Field & mmp, Field & tmp);
-  virtual void Vstart(Field & in, Field & src, Field & r, Field & mp, Field & mmp, Field & tmp);
-  virtual void Vout  (Field & in, Field & out,Field & src,Field & tmp);
-}
-template<class Field>
-class TwoLevelFlexiblePcgDef2 : public TwoLevelFlexiblePcg<Field> {
- public:
-  virtual void M(Field & in,Field & out,Field & tmp);
-  virtual void M1(Field & in, Field & out,Field & tmp,Field & mp);
-  virtual void M2(Field & in, Field & out);
-  virtual RealD M3(Field & p, Field & mp,Field & mmp, Field & tmp);
-  virtual void Vstart(Field & in, Field & src, Field & r, Field & mp, Field & mmp, Field & tmp);
-}
-template<class Field>
-class TwoLevelFlexiblePcgV11: public TwoLevelFlexiblePcg<Field> {
- public:
-  virtual void M(Field & in,Field & out,Field & tmp);
-  virtual void M1(Field & in, Field & out,Field & tmp,Field & mp);
-  virtual void M2(Field & in, Field & out);
-  virtual RealD M3(Field & p, Field & mp,Field & mmp, Field & tmp);
-  virtual void Vstart(Field & in, Field & src, Field & r, Field & mp, Field & mmp, Field & tmp);
-}
-*/
 #endif

View File

@@ -60,6 +60,8 @@
 DeflatedGuesser(const std::vector<Field> & _evec,const std::vector<RealD> & _eval) : evec(_evec), eval(_eval) {};
 virtual void operator()(const Field &src,Field &guess) {
+  RealD t=-usecond();
   guess = Zero();
   assert(evec.size()==eval.size());
   auto N = evec.size();
@@ -68,6 +70,8 @@
     axpy(guess,TensorRemove(innerProduct(tmp,src)) / eval[i],tmp,guess);
   }
   guess.Checkerboard() = src.Checkerboard();
+  t+=usecond();
+  std::cout<<GridLogMessage<<"\t\t\t" << "Deflated guess took "<< t/1000.0<< "ms" <<std::endl;
 }
 };
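In formula form, the loop being timed assembles the standard deflated guess from the eigenpairs (lambda_i, v_i), directly mirroring the axpy above:

$$ \mathrm{guess} \;=\; \sum_{i=1}^{N} \frac{\langle v_i, \mathrm{src} \rangle}{\lambda_i}\, v_i . $$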

View File

@@ -59,7 +59,7 @@
 GridBase *grid = src.Grid();
 Field r(grid), p(grid), Ap(grid), Ar(grid), z(grid);
-psi=zero;
+psi=Zero();
 r = src;
 Preconditioner(r,p);
View File

@@ -53,7 +53,11 @@ public:
  {
    size_type bytes = __n*sizeof(_Tp);
    profilerAllocate(bytes);
+#ifdef GRID_UVM
+   _Tp *ptr = (_Tp*) MemoryManager::SharedAllocate(bytes);
+#else
    _Tp *ptr = (_Tp*) MemoryManager::CpuAllocate(bytes);
+#endif
    assert( ( (_Tp*)ptr != (_Tp *)NULL ) );
    return ptr;
  }
@@ -62,11 +66,14 @@ public:
  {
    size_type bytes = __n * sizeof(_Tp);
    profilerFree(bytes);
+#ifdef GRID_UVM
+   MemoryManager::SharedFree((void *)__p,bytes);
+#else
    MemoryManager::CpuFree((void *)__p,bytes);
+#endif
  }
- // FIXME: hack for the copy constructor, eventually it must be avoided
+ // FIXME: hack for the copy constructor: it must be avoided to avoid single thread loop
+ //void construct(pointer __p, const _Tp& __val) { new((void *)__p) _Tp(__val); };
  void construct(pointer __p, const _Tp& __val) { assert(0);};
  void construct(pointer __p) { };
  void destroy(pointer __p) { };
@@ -74,6 +81,9 @@ public:
template<typename _Tp> inline bool operator==(const alignedAllocator<_Tp>&, const alignedAllocator<_Tp>&){ return true; }
template<typename _Tp> inline bool operator!=(const alignedAllocator<_Tp>&, const alignedAllocator<_Tp>&){ return false; }
+//////////////////////////////////////////////////////////////////////////////////////
+// Unified virtual memory
+//////////////////////////////////////////////////////////////////////////////////////
template<typename _Tp>
class uvmAllocator {
public:
@@ -109,22 +119,63 @@ public:
    MemoryManager::SharedFree((void *)__p,bytes);
  }
- // FIXME: hack for the copy constructor, eventually it must be avoided
  void construct(pointer __p, const _Tp& __val) { new((void *)__p) _Tp(__val); };
+ //void construct(pointer __p, const _Tp& __val) { };
  void construct(pointer __p) { };
  void destroy(pointer __p) { };
};
template<typename _Tp> inline bool operator==(const uvmAllocator<_Tp>&, const uvmAllocator<_Tp>&){ return true; }
template<typename _Tp> inline bool operator!=(const uvmAllocator<_Tp>&, const uvmAllocator<_Tp>&){ return false; }
+////////////////////////////////////////////////////////////////////////////////
+// Device memory
+////////////////////////////////////////////////////////////////////////////////
+template<typename _Tp>
+class devAllocator {
+public:
+  typedef std::size_t size_type;
+  typedef std::ptrdiff_t difference_type;
+  typedef _Tp* pointer;
+  typedef const _Tp* const_pointer;
+  typedef _Tp& reference;
+  typedef const _Tp& const_reference;
+  typedef _Tp value_type;
+  template<typename _Tp1> struct rebind { typedef devAllocator<_Tp1> other; };
+  devAllocator() throw() { }
+  devAllocator(const devAllocator&) throw() { }
+  template<typename _Tp1> devAllocator(const devAllocator<_Tp1>&) throw() { }
+  ~devAllocator() throw() { }
+  pointer address(reference __x) const { return &__x; }
+  size_type max_size() const throw() { return size_t(-1) / sizeof(_Tp); }
+  pointer allocate(size_type __n, const void* _p= 0)
+  {
+    size_type bytes = __n*sizeof(_Tp);
+    profilerAllocate(bytes);
+    _Tp *ptr = (_Tp*) MemoryManager::AcceleratorAllocate(bytes);
+    assert( ( (_Tp*)ptr != (_Tp *)NULL ) );
+    return ptr;
+  }
+  void deallocate(pointer __p, size_type __n)
+  {
+    size_type bytes = __n * sizeof(_Tp);
+    profilerFree(bytes);
+    MemoryManager::AcceleratorFree((void *)__p,bytes);
+  }
+  void construct(pointer __p, const _Tp& __val) { };
+  void construct(pointer __p) { };
+  void destroy(pointer __p) { };
+};
+template<typename _Tp> inline bool operator==(const devAllocator<_Tp>&, const devAllocator<_Tp>&){ return true; }
+template<typename _Tp> inline bool operator!=(const devAllocator<_Tp>&, const devAllocator<_Tp>&){ return false; }
////////////////////////////////////////////////////////////////////////////////
// Template typedefs
////////////////////////////////////////////////////////////////////////////////
-template<class T> using commAllocator = uvmAllocator<T>;
+//template<class T> using commAllocator = devAllocator<T>;
template<class T> using Vector = std::vector<T,uvmAllocator<T> >;
-template<class T> using commVector = std::vector<T,uvmAllocator<T> >;
+template<class T> using commVector = std::vector<T,devAllocator<T> >;
-//template<class T> using Matrix = std::vector<std::vector<T,alignedAllocator<T> > >;
NAMESPACE_END(Grid);
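
The net effect of this file: Vector<T> stays in unified memory, while commVector<T> now lives in device memory through devAllocator. A sketch of the intended placement, under the assumption that comms buffers are only touched from device code and (GPU-aware) MPI:

  Vector<double>     work(N);        // uvmAllocator: migrates host<->device on demand
  commVector<double> halo_buffer(N); // devAllocator: device-resident, no UVM
  // Touching halo_buffer[i] on the host would now fault; fill it inside an
  // accelerator_for, or hand its pointer straight to the MPI layer.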

View File

@@ -9,11 +9,13 @@ NAMESPACE_BEGIN(Grid);
#define AccSmall (3)
#define Shared (4)
#define SharedSmall (5)
+uint64_t total_cache;
uint64_t total_shared;
uint64_t total_device;
uint64_t total_host;;
void MemoryManager::PrintBytes(void)
{
+ std::cout << " MemoryManager : "<<total_cache <<" cache bytes "<<std::endl;
  std::cout << " MemoryManager : "<<total_shared<<" shared bytes "<<std::endl;
  std::cout << " MemoryManager : "<<total_device<<" accelerator bytes "<<std::endl;
  std::cout << " MemoryManager : "<<total_host <<" cpu bytes "<<std::endl;
@@ -35,6 +37,8 @@ void *MemoryManager::AcceleratorAllocate(size_t bytes)
  if ( ptr == (void *) NULL ) {
    ptr = (void *) acceleratorAllocDevice(bytes);
    total_device+=bytes;
+ } else {
+   // std::cout <<"AcceleratorAllocate: cache hit Device pointer "<<std::hex<<ptr<<std::dec<<" "<<bytes<<std::endl;
  }
  return ptr;
}
@@ -53,8 +57,10 @@ void *MemoryManager::SharedAllocate(size_t bytes)
  if ( ptr == (void *) NULL ) {
    ptr = (void *) acceleratorAllocShared(bytes);
    total_shared+=bytes;
-   // std::cout <<"AcceleratorAllocate: allocated Shared pointer "<<std::hex<<ptr<<std::dec<<std::endl;
+   // std::cout <<"SharedAllocate: allocated Shared pointer "<<std::hex<<ptr<<std::dec<<std::endl;
    // PrintBytes();
+ } else {
+   // std::cout <<"SharedAllocate: cache hit Shared pointer "<<std::hex<<ptr<<std::dec<<" "<<bytes<<std::endl;
  }
  return ptr;
}
@@ -74,6 +80,9 @@ void *MemoryManager::CpuAllocate(size_t bytes)
  if ( ptr == (void *) NULL ) {
    ptr = (void *) acceleratorAllocShared(bytes);
    total_host+=bytes;
+   // std::cout <<"CpuAllocate: allocated Cpu pointer "<<std::hex<<ptr<<std::dec<<std::endl;
+ } else {
+   // std::cout <<"CpufAllocate: cache hit Cpu pointer "<<std::hex<<ptr<<std::dec<<" "<<bytes<<std::endl;
  }
  return ptr;
}
@@ -120,7 +129,7 @@ void MemoryManager::Init(void)
  str= getenv("GRID_ALLOC_NCACHE_LARGE");
  if ( str ) {
    Nc = atoi(str);
-   if ( (Nc>=0) && (Nc < NallocCacheMax)) {
+   if ( (Nc>=0) && (Nc <= NallocCacheMax)) {
      Ncache[Cpu]=Nc;
      Ncache[Acc]=Nc;
      Ncache[Shared]=Nc;
@@ -130,12 +139,21 @@ void MemoryManager::Init(void)
  str= getenv("GRID_ALLOC_NCACHE_SMALL");
  if ( str ) {
    Nc = atoi(str);
-   if ( (Nc>=0) && (Nc < NallocCacheMax)) {
+   if ( (Nc>=0) && (Nc <= NallocCacheMax)) {
      Ncache[CpuSmall]=Nc;
      Ncache[AccSmall]=Nc;
      Ncache[SharedSmall]=Nc;
    }
  }
+}
+void MemoryManager::InitMessage(void) {
+#ifndef GRID_UVM
+  std::cout << GridLogMessage << "MemoryManager Cache "<< MemoryManager::DeviceMaxBytes <<" bytes "<<std::endl;
+#endif
  std::cout << GridLogMessage<< "MemoryManager::Init() setting up"<<std::endl;
#ifdef ALLOCATION_CACHE
  std::cout << GridLogMessage<< "MemoryManager::Init() cache pool for recent allocations: SMALL "<<Ncache[CpuSmall]<<" LARGE "<<Ncache[Cpu]<<std::endl;
@@ -164,6 +182,7 @@ void MemoryManager::Init(void)
  std::cout << GridLogMessage<< "MemoryManager::Init() Using SYCL malloc_device"<<std::endl;
#endif
#endif
}
void *MemoryManager::Insert(void *ptr,size_t bytes,int type)
@@ -201,6 +220,7 @@ void *MemoryManager::Insert(void *ptr,size_t bytes,AllocationCacheEntry *entries
  if ( entries[v].valid ) {
    ret = entries[v].address;
+   total_cache-=entries[v].bytes;
    entries[v].valid = 0;
    entries[v].address = NULL;
    entries[v].bytes = 0;
@@ -209,6 +229,7 @@ void *MemoryManager::Insert(void *ptr,size_t bytes,AllocationCacheEntry *entries
  entries[v].address=ptr;
  entries[v].bytes =bytes;
  entries[v].valid =1;
+ total_cache+=entries[v].bytes;
  return ret;
}
@@ -233,6 +254,7 @@ void *MemoryManager::Lookup(size_t bytes,AllocationCacheEntry *entries,int ncache
  for(int e=0;e<ncache;e++){
    if ( entries[e].valid && ( entries[e].bytes == bytes ) ) {
      entries[e].valid = 0;
+     total_cache-=bytes;
      return entries[e].address;
    }
  }
}
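
The new total_cache counter tracks bytes parked in the allocation cache; a sketch of the round trip it measures, using the entry points declared in MemoryManager.h:

  void *p = MemoryManager::AcceleratorAllocate(4096); // miss: device malloc, total_device grows
  MemoryManager::AcceleratorFree(p,4096);             // block parked in cache: total_cache += 4096
  void *q = MemoryManager::AcceleratorAllocate(4096); // hit: same block returned, total_cache -= 4096
  MemoryManager::AcceleratorFree(q,4096);
  MemoryManager::PrintBytes();                        // reports cache/shared/accelerator/cpu bytes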

View File

@@ -93,11 +93,12 @@ private:
  static void *Insert(void *ptr,size_t bytes,AllocationCacheEntry *entries,int ncache,int &victim) ;
  static void *Lookup(size_t bytes,AllocationCacheEntry *entries,int ncache) ;
+public:
+ static void PrintBytes(void);
+ static void Init(void);
+ static void InitMessage(void);
  static void *AcceleratorAllocate(size_t bytes);
  static void AcceleratorFree (void *ptr,size_t bytes);
- static void PrintBytes(void);
-public:
- static void Init(void);
  static void *SharedAllocate(size_t bytes);
  static void SharedFree (void *ptr,size_t bytes);
  static void *CpuAllocate(size_t bytes);

View File

@@ -43,8 +43,16 @@ void CartesianCommunicator::Init(int *argc, char ***argv)
  MPI_Initialized(&flag); // needed to coexist with other libs apparently
  if ( !flag ) {
-   MPI_Init_thread(argc,argv,MPI_THREAD_MULTIPLE,&provided);
+#if defined (TOFU) // FUGAKU, credits go to Issaku Kanamori
+   nCommThreads=1;
+   // wrong results here too
+   // For now: comms-overlap leads to wrong results in Benchmark_wilson even on single node MPI runs
+   // other comms schemes are ok
+   MPI_Init_thread(argc,argv,MPI_THREAD_SERIALIZED,&provided);
+#else
+   MPI_Init_thread(argc,argv,MPI_THREAD_MULTIPLE,&provided);
+#endif
    //If only 1 comms thread we require any threading mode other than SINGLE, but for multiple comms threads we need MULTIPLE
    if( (nCommThreads == 1) && (provided == MPI_THREAD_SINGLE) ) {
      assert(0);
@@ -294,60 +302,28 @@ void CartesianCommunicator::SendToRecvFrom(void *xmit,
           int bytes)
{
  std::vector<CommsRequest_t> reqs(0);
- // unsigned long xcrc = crc32(0L, Z_NULL, 0);
- // unsigned long rcrc = crc32(0L, Z_NULL, 0);
- // xcrc = crc32(xcrc,(unsigned char *)xmit,bytes);
- SendToRecvFromBegin(reqs,xmit,dest,recv,from,bytes);
- SendToRecvFromComplete(reqs);
- // rcrc = crc32(rcrc,(unsigned char *)recv,bytes);
- // printf("proc %d SendToRecvFrom %d bytes %lx %lx\n",_processor,bytes,xcrc,rcrc);
-}
-void CartesianCommunicator::SendRecvPacket(void *xmit,
-           void *recv,
-           int sender,
-           int receiver,
-           int bytes)
-{
- MPI_Status stat;
- assert(sender != receiver);
- int tag = sender;
- if ( _processor == sender ) {
-   MPI_Send(xmit, bytes, MPI_CHAR,receiver,tag,communicator);
- }
- if ( _processor == receiver ) {
-   MPI_Recv(recv, bytes, MPI_CHAR,sender,tag,communicator,&stat);
- }
-}
-// Basic Halo comms primitive
-void CartesianCommunicator::SendToRecvFromBegin(std::vector<CommsRequest_t> &list,
-           void *xmit,
-           int dest,
-           void *recv,
-           int from,
-           int bytes)
-{
+ unsigned long xcrc = crc32(0L, Z_NULL, 0);
+ unsigned long rcrc = crc32(0L, Z_NULL, 0);
  int myrank = _processor;
  int ierr;
- if ( CommunicatorPolicy == CommunicatorPolicyConcurrent ) {
-   MPI_Request xrq;
-   MPI_Request rrq;
-   ierr =MPI_Irecv(recv, bytes, MPI_CHAR,from,from,communicator,&rrq);
-   ierr|=MPI_Isend(xmit, bytes, MPI_CHAR,dest,_processor,communicator,&xrq);
-   assert(ierr==0);
-   list.push_back(xrq);
-   list.push_back(rrq);
- } else {
+ // Enforce no UVM in comms, device or host OK
+ assert(acceleratorIsCommunicable(xmit));
+ assert(acceleratorIsCommunicable(recv));
  // Give the CPU to MPI immediately; can use threads to overlap optionally
- // printf("proc %d SendToRecvFrom %d bytes Sendrecv \n",_processor,bytes);
  ierr=MPI_Sendrecv(xmit,bytes,MPI_CHAR,dest,myrank,
        recv,bytes,MPI_CHAR,from, from,
        communicator,MPI_STATUS_IGNORE);
  assert(ierr==0);
- }
-}
+ // xcrc = crc32(xcrc,(unsigned char *)xmit,bytes);
+ // rcrc = crc32(rcrc,(unsigned char *)recv,bytes);
+ // printf("proc %d SendToRecvFrom %d bytes xcrc %lx rcrc %lx\n",_processor,bytes,xcrc,rcrc); fflush
}
// Basic Halo comms primitive
double CartesianCommunicator::StencilSendToRecvFrom( void *xmit,
          int dest,
          void *recv,
@@ -403,15 +379,7 @@ double CartesianCommunicator::StencilSendToRecvFromBegin(std::vector<CommsReques
  return off_node_bytes;
}
-void CartesianCommunicator::StencilSendToRecvFromComplete(std::vector<CommsRequest_t> &waitall,int dir)
-{
- SendToRecvFromComplete(waitall);
-}
-void CartesianCommunicator::StencilBarrier(void)
-{
- MPI_Barrier (ShmComm);
-}
-void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &list)
+void CartesianCommunicator::StencilSendToRecvFromComplete(std::vector<CommsRequest_t> &list,int dir)
{
  int nreq=list.size();
@@ -422,6 +390,13 @@ void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &
  assert(ierr==0);
  list.resize(0);
}
+void CartesianCommunicator::StencilBarrier(void)
+{
+ MPI_Barrier (ShmComm);
+}
+//void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &list)
+//{
+//}
void CartesianCommunicator::Barrier(void)
{
  int ierr = MPI_Barrier(communicator);
@@ -483,5 +458,3 @@ void CartesianCommunicator::AllToAll(void *in,void *out,uint64_t words,uint64_t
}
NAMESPACE_END(Grid);
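
A sketch of the threading contract the init now enforces (plain MPI, nothing Grid-specific): one comms thread tolerates any level above SINGLE, several comms threads require MULTIPLE, and the TOFU branch settles for SERIALIZED by pinning nCommThreads to 1:

  int provided;
  MPI_Init_thread(&argc,&argv,MPI_THREAD_SERIALIZED,&provided); // TOFU path
  assert(provided != MPI_THREAD_SINGLE);   // acceptable when nCommThreads==1
  // with nCommThreads>1 the requirement tightens:
  // assert(provided == MPI_THREAD_MULTIPLE);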

View File

@@ -47,7 +47,12 @@ void GlobalSharedMemory::Init(Grid_MPI_Comm comm)
  /////////////////////////////////////////////////////////////////////
  // Split into groups that can share memory
  /////////////////////////////////////////////////////////////////////
+#ifndef GRID_MPI3_SHM_NONE
  MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,&WorldShmComm);
+#else
+ MPI_Comm_split(comm, WorldRank, 0, &WorldShmComm);
+#endif
  MPI_Comm_rank(WorldShmComm ,&WorldShmRank);
  MPI_Comm_size(WorldShmComm ,&WorldShmSize);
@@ -443,7 +448,11 @@ void GlobalSharedMemory::SharedMemoryAllocate(uint64_t bytes, int flags)
  ///////////////////////////////////////////////////////////////////////////////////////////////////////////
  // Each MPI rank should allocate our own buffer
  ///////////////////////////////////////////////////////////////////////////////////////////////////////////
+#ifndef GRID_MPI3_SHM_NONE
  auto err = cudaMalloc(&ShmCommBuf, bytes);
+#else
+ auto err = cudaMallocManaged(&ShmCommBuf, bytes);
+#endif
  if ( err != cudaSuccess) {
    std::cerr << " SharedMemoryMPI.cc cudaMallocManaged failed for " << bytes<<" bytes " <<cudaGetErrorString(err)<< std::endl;
    exit(EXIT_FAILURE);
@@ -462,6 +471,7 @@ void GlobalSharedMemory::SharedMemoryAllocate(uint64_t bytes, int flags)
  ///////////////////////////////////////////////////////////////////////////////////////////////////////////
  for(int r=0;r<WorldShmSize;r++){
+#ifndef GRID_MPI3_SHM_NONE
    //////////////////////////////////////////////////
    // If it is me, pass around the IPC access key
    //////////////////////////////////////////////////
@@ -501,6 +511,9 @@ void GlobalSharedMemory::SharedMemoryAllocate(uint64_t bytes, int flags)
    // Save a copy of the device buffers
    ///////////////////////////////////////////////////////////////
    WorldShmCommBufs[r] = thisBuf;
+#else
+   WorldShmCommBufs[r] = ShmCommBuf;
+#endif
  }
  _ShmAllocBytes=bytes;
@@ -705,7 +718,11 @@ void SharedMemory::SetCommunicator(Grid_MPI_Comm comm)
  /////////////////////////////////////////////////////////////////////
  // Split into groups that can share memory
  /////////////////////////////////////////////////////////////////////
+#ifndef GRID_MPI3_SHM_NONE
  MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,&ShmComm);
+#else
+ MPI_Comm_split(comm, rank, 0, &ShmComm);
+#endif
  MPI_Comm_rank(ShmComm ,&ShmRank);
  MPI_Comm_size(ShmComm ,&ShmSize);
  ShmCommBufs.resize(ShmSize);
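
With GRID_MPI3_SHM_NONE the split keys on the rank itself, so every rank ends up alone in its "shared memory" group; a sketch of why, using plain MPI semantics:

  MPI_Comm shm;
  int rank; MPI_Comm_rank(MPI_COMM_WORLD,&rank);
  MPI_Comm_split(MPI_COMM_WORLD, rank /*colour: unique per rank*/, 0, &shm);
  int size; MPI_Comm_size(shm,&size);  // size==1 on every rank: no cross-rank shm window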

View File

@@ -52,23 +52,8 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
NAMESPACE_BEGIN(Grid);
-template<typename Op, typename T1>
-auto Cshift(const LatticeUnaryExpression<Op,T1> &expr,int dim,int shift)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
-{
-  return Cshift(closure(expr),dim,shift);
-}
-template <class Op, class T1, class T2>
-auto Cshift(const LatticeBinaryExpression<Op,T1,T2> &expr,int dim,int shift)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1),eval(0, expr.arg2)))>
-{
-  return Cshift(closure(expr),dim,shift);
-}
-template <class Op, class T1, class T2, class T3>
-auto Cshift(const LatticeTrinaryExpression<Op,T1,T2,T3> &expr,int dim,int shift)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1),
-                                   eval(0, expr.arg2),
-                                   eval(0, expr.arg3)))>
+template<class Expression,typename std::enable_if<is_lattice_expr<Expression>::value,void>::type * = nullptr>
+auto Cshift(const Expression &expr,int dim,int shift) -> decltype(closure(expr))
{
  return Cshift(closure(expr),dim,shift);
}
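
The single enable_if'd overload means any lattice expression can now be shifted directly, with the closure taken internally; a usage sketch (the field type is assumed):

  LatticeColourMatrix U(grid), V(grid), W(grid);
  W = Cshift(U*V + V, 0 /*dim*/, 1 /*shift*/); // expression is closed, then shifted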

View File

@@ -76,8 +76,8 @@ Gather_plane_simple (const Lattice<vobj> &rhs,commVector<vobj> &buffer,int dimen
    autoView(rhs_v , rhs, AcceleratorRead);
    auto buffer_p = & buffer[0];
    auto table = &Cshift_table[0];
-   accelerator_for(i,ent,1,{
-     buffer_p[table[i].first]=rhs_v[table[i].second];
+   accelerator_for(i,ent,vobj::Nsimd(),{
+     coalescedWrite(buffer_p[table[i].first],coalescedRead(rhs_v[table[i].second]));
    });
  }
}
@@ -185,8 +185,8 @@ template<class vobj> void Scatter_plane_simple (Lattice<vobj> &rhs,commVector<vo
    autoView( rhs_v, rhs, AcceleratorWrite);
    auto buffer_p = & buffer[0];
    auto table = &Cshift_table[0];
-   accelerator_for(i,ent,1,{
-     rhs_v[table[i].first]=buffer_p[table[i].second];
+   accelerator_for(i,ent,vobj::Nsimd(),{
+     coalescedWrite(rhs_v[table[i].first],coalescedRead(buffer_p[table[i].second]));
    });
  }
}
@@ -209,9 +209,11 @@ template<class vobj> void Scatter_plane_merge(Lattice<vobj> &rhs,ExtractPointerA
  if(cbmask ==0x3 ) {
    autoView( rhs_v , rhs, AcceleratorWrite);
+   int _slice_stride = rhs.Grid()->_slice_stride[dimension];
+   int _slice_block = rhs.Grid()->_slice_block[dimension];
    accelerator_for2d(n,e1,b,e2,1,{
-     int o = n*rhs.Grid()->_slice_stride[dimension];
-     int offset = b+n*rhs.Grid()->_slice_block[dimension];
+     int o = n*_slice_stride;
+     int offset = b+n*_slice_block;
      merge(rhs_v[so+o+b],pointers,offset);
    });
  } else {
@@ -220,6 +222,7 @@ template<class vobj> void Scatter_plane_merge(Lattice<vobj> &rhs,ExtractPointerA
    // Test_cshift_red_black code.
    // std::cout << "Scatter_plane merge assert(0); think this is buggy FIXME "<< std::endl;// think this is buggy FIXME
    std::cout<<" Unthreaded warning -- buffer is not densely packed ??"<<std::endl;
+   assert(0); // This will fail if hit on GPU
    autoView( rhs_v, rhs, CpuWrite);
    for(int n=0;n<e1;n++){
      for(int b=0;b<e2;b++){
@@ -280,8 +283,8 @@ template<class vobj> void Copy_plane(Lattice<vobj>& lhs,const Lattice<vobj> &rhs
    autoView(rhs_v , rhs, AcceleratorRead);
    autoView(lhs_v , lhs, AcceleratorWrite);
    auto table = &Cshift_table[0];
-   accelerator_for(i,ent,1,{
-     lhs_v[table[i].first]=rhs_v[table[i].second];
+   accelerator_for(i,ent,vobj::Nsimd(),{
+     coalescedWrite(lhs_v[table[i].first],coalescedRead(rhs_v[table[i].second]));
    });
  }
}
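
The recurring idiom in these hunks: iterate with Nsimd virtual lanes so each GPU thread handles one SIMD lane, and let coalescedRead/coalescedWrite perform the lane transpose. The same copy written in user code, as a sketch:

  autoView( dst_v, dst, AcceleratorWrite);
  autoView( src_v, src, AcceleratorRead);
  accelerator_for(ss, dst_v.size(), vobj::Nsimd(), {
    coalescedWrite(dst_v[ss], coalescedRead(src_v[ss])); // one lane per SIMD lane on GPU
  });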

View File

@@ -36,7 +36,8 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
#include <Grid/lattice/Lattice_local.h>
#include <Grid/lattice/Lattice_reduction.h>
#include <Grid/lattice/Lattice_peekpoke.h>
-//#include <Grid/lattice/Lattice_reality.h>
+#include <Grid/lattice/Lattice_reality.h>
+#include <Grid/lattice/Lattice_real_imag.h>
#include <Grid/lattice/Lattice_comparison_utils.h>
#include <Grid/lattice/Lattice_comparison.h>
#include <Grid/lattice/Lattice_coordinate.h>

View File

@@ -42,9 +42,24 @@ NAMESPACE_BEGIN(Grid);
////////////////////////////////////////////////////
// Predicated where support
////////////////////////////////////////////////////
+#ifdef GRID_SIMT
+// drop to scalar in SIMT; cleaner in fact
template <class iobj, class vobj, class robj>
-accelerator_inline vobj predicatedWhere(const iobj &predicate, const vobj &iftrue,
-                                        const robj &iffalse) {
+accelerator_inline vobj predicatedWhere(const iobj &predicate,
+                                        const vobj &iftrue,
+                                        const robj &iffalse)
+{
+  Integer mask = TensorRemove(predicate);
+  typename std::remove_const<vobj>::type ret= iffalse;
+  if (mask) ret=iftrue;
+  return ret;
+}
+#else
+template <class iobj, class vobj, class robj>
+accelerator_inline vobj predicatedWhere(const iobj &predicate,
+                                        const vobj &iftrue,
+                                        const robj &iffalse)
+{
  typename std::remove_const<vobj>::type ret;
  typedef typename vobj::scalar_object scalar_object;
@@ -68,6 +83,7 @@ accelerator_inline vobj predicatedWhere(const iobj &predicate, const vobj &iftru
  merge(ret, falsevals);
  return ret;
}
+#endif
/////////////////////////////////////////////////////
//Specialization of getVectorType for lattices
@@ -81,32 +97,62 @@ struct getVectorType<Lattice<T> >{
//-- recursive evaluation of expressions; --
// handle leaves of syntax tree
///////////////////////////////////////////////////
-template<class sobj> accelerator_inline
+template<class sobj,
+         typename std::enable_if<!is_lattice<sobj>::value&&!is_lattice_expr<sobj>::value,sobj>::type * = nullptr>
+accelerator_inline
sobj eval(const uint64_t ss, const sobj &arg)
{
  return arg;
}
template <class lobj> accelerator_inline
-const lobj & eval(const uint64_t ss, const LatticeView<lobj> &arg)
+auto eval(const uint64_t ss, const LatticeView<lobj> &arg) -> decltype(arg(ss))
+{
+  return arg(ss);
+}
+////////////////////////////////////////////
+//-- recursive evaluation of expressions; --
+// whole vector return, used only for expression return type inference
+///////////////////////////////////////////////////
+template<class sobj> accelerator_inline
+sobj vecEval(const uint64_t ss, const sobj &arg)
+{
+  return arg;
+}
+template <class lobj> accelerator_inline
+const lobj & vecEval(const uint64_t ss, const LatticeView<lobj> &arg)
{
  return arg[ss];
}
+// What needs this?
+// Cannot be legal on accelerator
+// Comparison must convert
+#if 1
+template <class lobj> accelerator_inline
+const lobj & eval(const uint64_t ss, const Lattice<lobj> &arg)
+{
+  auto view = arg.View(AcceleratorRead);
+  return view[ss];
+}
+#endif
///////////////////////////////////////////////////
// handle nodes in syntax tree- eval one operand
+// vecEval needed (but never called as all expressions offloaded) to infer the return type
+// in SIMT contexts of closure.
+///////////////////////////////////////////////////
+template <typename Op, typename T1> accelerator_inline
+auto vecEval(const uint64_t ss, const LatticeUnaryExpression<Op, T1> &expr)
+  -> decltype(expr.op.func( vecEval(ss, expr.arg1)))
+{
+  return expr.op.func( vecEval(ss, expr.arg1) );
+}
+// vecEval two operands
+template <typename Op, typename T1, typename T2> accelerator_inline
+auto vecEval(const uint64_t ss, const LatticeBinaryExpression<Op, T1, T2> &expr)
+  -> decltype(expr.op.func( vecEval(ss,expr.arg1),vecEval(ss,expr.arg2)))
+{
+  return expr.op.func( vecEval(ss,expr.arg1), vecEval(ss,expr.arg2) );
+}
+// vecEval three operands
+template <typename Op, typename T1, typename T2, typename T3> accelerator_inline
+auto vecEval(const uint64_t ss, const LatticeTrinaryExpression<Op, T1, T2, T3> &expr)
+  -> decltype(expr.op.func(vecEval(ss, expr.arg1), vecEval(ss, expr.arg2), vecEval(ss, expr.arg3)))
+{
+  return expr.op.func(vecEval(ss, expr.arg1), vecEval(ss, expr.arg2), vecEval(ss, expr.arg3));
+}
+///////////////////////////////////////////////////
+// handle nodes in syntax tree- eval one operand coalesced
///////////////////////////////////////////////////
template <typename Op, typename T1> accelerator_inline
auto eval(const uint64_t ss, const LatticeUnaryExpression<Op, T1> &expr)
@@ -114,23 +160,41 @@ auto eval(const uint64_t ss, const LatticeUnaryExpression<Op, T1> &expr)
{
  return expr.op.func( eval(ss, expr.arg1) );
}
+///////////////////////
// eval two operands
+///////////////////////
template <typename Op, typename T1, typename T2> accelerator_inline
auto eval(const uint64_t ss, const LatticeBinaryExpression<Op, T1, T2> &expr)
  -> decltype(expr.op.func( eval(ss,expr.arg1),eval(ss,expr.arg2)))
{
  return expr.op.func( eval(ss,expr.arg1), eval(ss,expr.arg2) );
}
+///////////////////////
// eval three operands
+///////////////////////
template <typename Op, typename T1, typename T2, typename T3> accelerator_inline
auto eval(const uint64_t ss, const LatticeTrinaryExpression<Op, T1, T2, T3> &expr)
-  -> decltype(expr.op.func(eval(ss, expr.arg1), eval(ss, expr.arg2), eval(ss, expr.arg3)))
+  -> decltype(expr.op.func(eval(ss, expr.arg1),
+                           eval(ss, expr.arg2),
+                           eval(ss, expr.arg3)))
{
-  return expr.op.func(eval(ss, expr.arg1), eval(ss, expr.arg2), eval(ss, expr.arg3));
+#ifdef GRID_SIMT
+  // Handles Nsimd (vInteger) != Nsimd(ComplexD)
+  typedef decltype(vecEval(ss, expr.arg2)) rvobj;
+  typedef typename std::remove_reference<rvobj>::type vobj;
+  const int Nsimd = vobj::vector_type::Nsimd();
+  auto vpred = vecEval(ss,expr.arg1);
+  ExtractBuffer<Integer> mask(Nsimd);
+  extract<vInteger, Integer>(TensorRemove(vpred), mask);
+  int s = acceleratorSIMTlane(Nsimd);
+  return expr.op.func(mask[s],
+                      eval(ss, expr.arg2),
+                      eval(ss, expr.arg3));
+#else
+  return expr.op.func(eval(ss, expr.arg1),
+                      eval(ss, expr.arg2),
+                      eval(ss, expr.arg3));
+#endif
}
//////////////////////////////////////////////////////////////////////////
@@ -228,7 +292,7 @@ template <typename Op, typename T1, typename T2> inline
void ExpressionViewOpen(LatticeBinaryExpression<Op, T1, T2> &expr)
{
  ExpressionViewOpen(expr.arg1); // recurse AST
- ExpressionViewOpen(expr.arg2); // recurse AST
+ ExpressionViewOpen(expr.arg2); // rrecurse AST
}
template <typename Op, typename T1, typename T2, typename T3>
inline void ExpressionViewOpen(LatticeTrinaryExpression<Op, T1, T2, T3> &expr)
@@ -272,9 +336,8 @@ inline void ExpressionViewClose(LatticeTrinaryExpression<Op, T1, T2, T3> &expr)
// Unary operators and funcs
////////////////////////////////////////////
#define GridUnopClass(name, ret) \
- template <class arg> \
  struct name { \
-   static auto accelerator_inline func(const arg a) -> decltype(ret) { return ret; } \
+   template<class _arg> static auto accelerator_inline func(const _arg a) -> decltype(ret) { return ret; } \
  };
GridUnopClass(UnarySub, -a);
@@ -285,8 +348,6 @@ GridUnopClass(UnaryTrace, trace(a));
GridUnopClass(UnaryTranspose, transpose(a));
GridUnopClass(UnaryTa, Ta(a));
GridUnopClass(UnaryProjectOnGroup, ProjectOnGroup(a));
-GridUnopClass(UnaryReal, real(a));
-GridUnopClass(UnaryImag, imag(a));
GridUnopClass(UnaryToReal, toReal(a));
GridUnopClass(UnaryToComplex, toComplex(a));
GridUnopClass(UnaryTimesI, timesI(a));
@@ -305,10 +366,10 @@ GridUnopClass(UnaryExp, exp(a));
// Binary operators
////////////////////////////////////////////
#define GridBinOpClass(name, combination) \
- template <class left, class right> \
  struct name { \
+   template <class _left, class _right> \
    static auto accelerator_inline \
-   func(const left &lhs, const right &rhs) \
+   func(const _left &lhs, const _right &rhs) \
      -> decltype(combination) const \
    { \
      return combination; \
@@ -328,10 +389,10 @@ GridBinOpClass(BinaryOrOr, lhs || rhs);
// Trinary conditional op
////////////////////////////////////////////////////
#define GridTrinOpClass(name, combination) \
- template <class predicate, class left, class right> \
  struct name { \
+   template <class _predicate,class _left, class _right> \
    static auto accelerator_inline \
-   func(const predicate &pred, const left &lhs, const right &rhs) \
+   func(const _predicate &pred, const _left &lhs, const _right &rhs) \
      -> decltype(combination) const \
    { \
      return combination; \
@@ -339,17 +400,17 @@ GridBinOpClass(BinaryOrOr, lhs || rhs);
  };
GridTrinOpClass(TrinaryWhere,
-                (predicatedWhere<predicate,
-                 typename std::remove_reference<left>::type,
-                 typename std::remove_reference<right>::type>(pred, lhs,rhs)));
+                (predicatedWhere<
+                 typename std::remove_reference<_predicate>::type,
+                 typename std::remove_reference<_left>::type,
+                 typename std::remove_reference<_right>::type>(pred, lhs,rhs)));
////////////////////////////////////////////
// Operator syntactical glue
////////////////////////////////////////////
-#define GRID_UNOP(name) name<decltype(eval(0, arg))>
-#define GRID_BINOP(name) name<decltype(eval(0, lhs)), decltype(eval(0, rhs))>
-#define GRID_TRINOP(name) name<decltype(eval(0, pred)), decltype(eval(0, lhs)), decltype(eval(0, rhs))>
+#define GRID_UNOP(name) name
+#define GRID_BINOP(name) name
+#define GRID_TRINOP(name) name
#define GRID_DEF_UNOP(op, name) \
 template <typename T1, typename std::enable_if<is_lattice<T1>::value||is_lattice_expr<T1>::value,T1>::type * = nullptr> \
@@ -401,8 +462,6 @@ GRID_DEF_UNOP(trace, UnaryTrace);
GRID_DEF_UNOP(transpose, UnaryTranspose);
GRID_DEF_UNOP(Ta, UnaryTa);
GRID_DEF_UNOP(ProjectOnGroup, UnaryProjectOnGroup);
-GRID_DEF_UNOP(real, UnaryReal);
-GRID_DEF_UNOP(imag, UnaryImag);
GRID_DEF_UNOP(toReal, UnaryToReal);
GRID_DEF_UNOP(toComplex, UnaryToComplex);
GRID_DEF_UNOP(timesI, UnaryTimesI);
@@ -435,29 +494,36 @@ GRID_DEF_TRINOP(where, TrinaryWhere);
/////////////////////////////////////////////////////////////
template <class Op, class T1>
auto closure(const LatticeUnaryExpression<Op, T1> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
+  -> Lattice<decltype(expr.op.func(vecEval(0, expr.arg1)))>
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1)))> ret(expr);
+  Lattice<decltype(expr.op.func(vecEval(0, expr.arg1)))> ret(expr);
  return ret;
}
template <class Op, class T1, class T2>
auto closure(const LatticeBinaryExpression<Op, T1, T2> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1),eval(0, expr.arg2)))>
+  -> Lattice<decltype(expr.op.func(vecEval(0, expr.arg1),vecEval(0, expr.arg2)))>
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1),eval(0, expr.arg2)))> ret(expr);
+  Lattice<decltype(expr.op.func(vecEval(0, expr.arg1),vecEval(0, expr.arg2)))> ret(expr);
  return ret;
}
template <class Op, class T1, class T2, class T3>
auto closure(const LatticeTrinaryExpression<Op, T1, T2, T3> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1),
-                                   eval(0, expr.arg2),
-                                   eval(0, expr.arg3)))>
+  -> Lattice<decltype(expr.op.func(vecEval(0, expr.arg1),
+                                   vecEval(0, expr.arg2),
+                                   vecEval(0, expr.arg3)))>
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1),
-                                eval(0, expr.arg2),
-                                eval(0, expr.arg3)))> ret(expr);
+  Lattice<decltype(expr.op.func(vecEval(0, expr.arg1),
+                                vecEval(0, expr.arg2),
+                                vecEval(0, expr.arg3)))> ret(expr);
  return ret;
}
+#define EXPRESSION_CLOSURE(function) \
+  template<class Expression,typename std::enable_if<is_lattice_expr<Expression>::value,void>::type * = nullptr> \
+  auto function(Expression &expr) -> decltype(function(closure(expr))) \
+  { \
+    return function(closure(expr)); \
+  }
#undef GRID_UNOP
#undef GRID_BINOP
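
Taken together, the GRID_SIMT predicatedWhere and the mask-extracting trinary eval make the usual where() construct legal inside offloaded expressions; a usage sketch, with the field and predicate types assumed:

  LatticeComplex a(grid), b(grid), c(grid);
  LatticeInteger t(grid);
  LatticeCoordinate(t, 3);             // e.g. the time coordinate
  c = where(t == Integer(0), a, b);    // per-site select, now SIMT-safe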

View File

@@ -123,9 +123,9 @@ public:
    auto exprCopy = expr;
    ExpressionViewOpen(exprCopy);
    auto me = View(AcceleratorWriteDiscard);
-   accelerator_for(ss,me.size(),1,{
+   accelerator_for(ss,me.size(),vobj::Nsimd(),{
      auto tmp = eval(ss,exprCopy);
-     vstream(me[ss],tmp);
+     coalescedWrite(me[ss],tmp);
    });
    me.ViewClose();
    ExpressionViewClose(exprCopy);
@@ -146,9 +146,9 @@ public:
    auto exprCopy = expr;
    ExpressionViewOpen(exprCopy);
    auto me = View(AcceleratorWriteDiscard);
-   accelerator_for(ss,me.size(),1,{
+   accelerator_for(ss,me.size(),vobj::Nsimd(),{
      auto tmp = eval(ss,exprCopy);
-     vstream(me[ss],tmp);
+     coalescedWrite(me[ss],tmp);
    });
    me.ViewClose();
    ExpressionViewClose(exprCopy);
@@ -168,9 +168,9 @@ public:
    auto exprCopy = expr;
    ExpressionViewOpen(exprCopy);
    auto me = View(AcceleratorWriteDiscard);
-   accelerator_for(ss,me.size(),1,{
+   accelerator_for(ss,me.size(),vobj::Nsimd(),{
      auto tmp = eval(ss,exprCopy);
-     vstream(me[ss],tmp);
+     coalescedWrite(me[ss],tmp);
    });
    me.ViewClose();
    ExpressionViewClose(exprCopy);

View File

@@ -54,13 +54,34 @@ void basisRotate(VField &basis,Matrix& Qt,int j0, int j1, int k0,int k1,int Nm)
  typedef decltype(basis[0].View(AcceleratorRead)) View;
  Vector<View> basis_v; basis_v.reserve(basis.size());
+ typedef typename std::remove_reference<decltype(basis_v[0][0])>::type vobj;
+ typedef typename std::remove_reference<decltype(Qt(0,0))>::type Coeff_t;
  GridBase* grid = basis[0].Grid();
  for(int k=0;k<basis.size();k++){
    basis_v.push_back(basis[k].View(AcceleratorWrite));
  }
+#if ( (!defined(GRID_SYCL)) && (!defined(GRID_CUDA)) && (!defined(GRID_HIP)) )
+ int max_threads = thread_max();
+ Vector < vobj > Bt(Nm * max_threads);
+ thread_region
+ {
+   vobj* B = &Bt[Nm * thread_num()];
+   thread_for_in_region(ss, grid->oSites(),{
+     for(int j=j0; j<j1; ++j) B[j]=0.;
+     for(int j=j0; j<j1; ++j){
+       for(int k=k0; k<k1; ++k){
+         B[j] +=Qt(j,k) * basis_v[k][ss];
+       }
+     }
+     for(int j=j0; j<j1; ++j){
+       basis_v[j][ss] = B[j];
+     }
+   });
+ }
+#else
  View *basis_vp = &basis_v[0];
  int nrot = j1-j0;
@@ -70,14 +91,12 @@ void basisRotate(VField &basis,Matrix& Qt,int j0, int j1, int k0,int k1,int Nm)
  uint64_t oSites =grid->oSites();
  uint64_t siteBlock=(grid->oSites()+nrot-1)/nrot; // Maximum 1 additional vector overhead
- typedef typename std::remove_reference<decltype(basis_v[0][0])>::type vobj;
  Vector <vobj> Bt(siteBlock * nrot);
  auto Bp=&Bt[0];
  // GPU readable copy of matrix
- Vector<double> Qt_jv(Nm*Nm);
- double *Qt_p = & Qt_jv[0];
+ Vector<Coeff_t> Qt_jv(Nm*Nm);
+ Coeff_t *Qt_p = & Qt_jv[0];
  thread_for(i,Nm*Nm,{
    int j = i/Nm;
    int k = i%Nm;
@@ -118,6 +137,7 @@ void basisRotate(VField &basis,Matrix& Qt,int j0, int j1, int k0,int k1,int Nm)
    coalescedWrite(basis_v[jj][sss],coalescedRead(Bp[ss*nrot+j]));
  });
  }
+#endif
  for(int k=0;k<basis.size();k++) basis_v[k].ViewClose();
}

View File

@@ -42,34 +42,6 @@ NAMESPACE_BEGIN(Grid);
typedef iScalar<vInteger> vPredicate ;
-/*
-template <class iobj, class vobj, class robj> accelerator_inline
-vobj predicatedWhere(const iobj &predicate, const vobj &iftrue, const robj &iffalse)
-{
-  typename std::remove_const<vobj>::type ret;
-  typedef typename vobj::scalar_object scalar_object;
-  typedef typename vobj::scalar_type scalar_type;
-  typedef typename vobj::vector_type vector_type;
-  const int Nsimd = vobj::vector_type::Nsimd();
-  ExtractBuffer<Integer> mask(Nsimd);
-  ExtractBuffer<scalar_object> truevals(Nsimd);
-  ExtractBuffer<scalar_object> falsevals(Nsimd);
-  extract(iftrue, truevals);
-  extract(iffalse, falsevals);
-  extract<vInteger, Integer>(TensorRemove(predicate), mask);
-  for (int s = 0; s < Nsimd; s++) {
-    if (mask[s]) falsevals[s] = truevals[s];
-  }
-  merge(ret, falsevals);
-  return ret;
-}
-*/
//////////////////////////////////////////////////////////////////////////
// compare lattice to lattice
//////////////////////////////////////////////////////////////////////////

View File

@@ -182,6 +182,14 @@ inline void peekLocalSite(sobj &s,const LatticeView<vobj> &l,Coordinate &site)
  return;
};
+template<class vobj,class sobj>
+inline void peekLocalSite(sobj &s,const Lattice<vobj> &l,Coordinate &site)
+{
+  autoView(lv,l,CpuRead);
+  peekLocalSite(s,lv,site);
+  return;
+};
// Must be CPU write view
template<class vobj,class sobj>
inline void pokeLocalSite(const sobj &s,LatticeView<vobj> &l,Coordinate &site)
@@ -210,6 +218,14 @@ inline void pokeLocalSite(const sobj &s,LatticeView<vobj> &l,Coordinate &site)
  return;
};
+template<class vobj,class sobj>
+inline void pokeLocalSite(const sobj &s, Lattice<vobj> &l,Coordinate &site)
+{
+  autoView(lv,l,CpuWrite);
+  pokeLocalSite(s,lv,site);
+  return;
+};
NAMESPACE_END(Grid);
#endif
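
The new overloads let call sites peek and poke a Lattice directly, with the view opened internally; a sketch (the gauge field Umu and its site type are assumptions for illustration):

  Coordinate site({0,0,0,0});
  ColourMatrix s;
  peekLocalSite(s, Umu, site); // opens a CpuRead view under the hood
  pokeLocalSite(s, Umu, site); // opens a CpuWrite view under the hood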

View File

@@ -0,0 +1,79 @@
+/*************************************************************************************
+ Grid physics library, www.github.com/paboyle/Grid
+ Source file: ./lib/lattice/Lattice_reality.h
+ Copyright (C) 2015
+Author: Azusa Yamaguchi <ayamaguc@staffmail.ed.ac.uk>
+Author: Peter Boyle <paboyle@ph.ed.ac.uk>
+Author: neo <cossu@post.kek.jp>
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ See the full license in the file "LICENSE" in the top level distribution directory
+*************************************************************************************/
+/* END LEGAL */
+#ifndef GRID_LATTICE_REAL_IMAG_H
+#define GRID_LATTICE_REAL_IMAG_H
+// FIXME .. this is the sector of the code
+// I am most worried about the directions
+// The choice of burying complex in the SIMD
+// is making the use of "real" and "imag" very cumbersome
+NAMESPACE_BEGIN(Grid);
+template<class vobj> inline Lattice<vobj> real(const Lattice<vobj> &lhs){
+  Lattice<vobj> ret(lhs.Grid());
+  autoView( lhs_v, lhs, AcceleratorRead);
+  autoView( ret_v, ret, AcceleratorWrite);
+  ret.Checkerboard()=lhs.Checkerboard();
+  accelerator_for( ss, lhs_v.size(), 1, {
+    ret_v[ss] =real(lhs_v[ss]);
+  });
+  return ret;
+};
+template<class vobj> inline Lattice<vobj> imag(const Lattice<vobj> &lhs){
+  Lattice<vobj> ret(lhs.Grid());
+  autoView( lhs_v, lhs, AcceleratorRead);
+  autoView( ret_v, ret, AcceleratorWrite);
+  ret.Checkerboard()=lhs.Checkerboard();
+  accelerator_for( ss, lhs_v.size(), 1, {
+    ret_v[ss] =imag(lhs_v[ss]);
+  });
+  return ret;
+};
+template<class Expression,typename std::enable_if<is_lattice_expr<Expression>::value,void>::type * = nullptr>
+auto real(const Expression &expr) -> decltype(real(closure(expr)))
+{
+  return real(closure(expr));
+}
+template<class Expression,typename std::enable_if<is_lattice_expr<Expression>::value,void>::type * = nullptr>
+auto imag(const Expression &expr) -> decltype(imag(closure(expr)))
+{
+  return imag(closure(expr));
+}
+NAMESPACE_END(Grid);
+#endif
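
Usage after this change, as a sketch: real and imag now act on a whole lattice and, through the trailing closure-taking overloads, on a full expression (they left the expression-template system in the Lattice_ET.h hunk above):

  LatticeComplex z(grid), r(grid);
  r = real(z);        // lattice overload, site-wise real part
  r = real(z + z);    // expression overload: closure(z+z) first, then real()
  r = imag(timesI(z));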

View File

@@ -240,6 +240,8 @@ template<class vobj,class vobj2,class CComplex>
  autoView( fineX_ , fineX, AcceleratorRead);
  autoView( fineY_ , fineY, AcceleratorRead);
  autoView( coarseA_, coarseA, AcceleratorRead);
+ Coordinate fine_rdimensions = fine->_rdimensions;
+ Coordinate coarse_rdimensions = coarse->_rdimensions;
  accelerator_for(sf, fine->oSites(), CComplex::Nsimd(), {
@@ -247,9 +249,9 @@ template<class vobj,class vobj2,class CComplex>
    Coordinate coor_c(_ndimension);
    Coordinate coor_f(_ndimension);
-   Lexicographic::CoorFromIndex(coor_f,sf,fine->_rdimensions);
+   Lexicographic::CoorFromIndex(coor_f,sf,fine_rdimensions);
    for(int d=0;d<_ndimension;d++) coor_c[d]=coor_f[d]/block_r[d];
-   Lexicographic::IndexFromCoor(coor_c,sc,coarse->_rdimensions);
+   Lexicographic::IndexFromCoor(coor_c,sc,coarse_rdimensions);
    // z = A x + y
#ifdef GRID_SIMT
@@ -353,11 +355,14 @@ inline void blockSum(Lattice<vobj> &coarseData,const Lattice<vobj> &fineData)
  autoView( coarseData_ , coarseData, AcceleratorWrite);
  autoView( fineData_ , fineData, AcceleratorRead);
+ Coordinate fine_rdimensions = fine->_rdimensions;
+ Coordinate coarse_rdimensions = coarse->_rdimensions;
  accelerator_for(sc,coarse->oSites(),1,{
    // One thread per sub block
    Coordinate coor_c(_ndimension);
-   Lexicographic::CoorFromIndex(coor_c,sc,coarse->_rdimensions); // Block coordinate
+   Lexicographic::CoorFromIndex(coor_c,sc,coarse_rdimensions); // Block coordinate
    coarseData_[sc]=Zero();
    for(int sb=0;sb<blockVol;sb++){
@@ -367,7 +372,7 @@ inline void blockSum(Lattice<vobj> &coarseData,const Lattice<vobj> &fineData)
      Coordinate coor_f(_ndimension);
      Lexicographic::CoorFromIndex(coor_b,sb,block_r); // Block sub coordinate
      for(int d=0;d<_ndimension;d++) coor_f[d]=coor_c[d]*block_r[d] + coor_b[d];
-     Lexicographic::IndexFromCoor(coor_f,sf,fine->_rdimensions);
+     Lexicographic::IndexFromCoor(coor_f,sf,fine_rdimensions);
      coarseData_[sc]=coarseData_[sc]+fineData_[sf];
    }
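
The pattern behind both hunks: a device lambda must not dereference host pointers, so the grid metadata is copied into stack values that the lambda captures by value; a minimal sketch of the rule:

  Coordinate fine_rdimensions = fine->_rdimensions;   // host object -> by-value copy
  accelerator_for(sf, fine->oSites(), 1, {
    Coordinate coor_f(_ndimension);
    Lexicographic::CoorFromIndex(coor_f, sf, fine_rdimensions); // no fine-> inside the lambda
  });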

View File

@@ -52,6 +52,7 @@ public:
// This will be safe to call from accelerator_for and is trivially copy constructible
// The copy constructor for this will need to be used by device lambda functions
/////////////////////////////////////////////////////////////////////////////////////////
+#undef LATTICE_BOUNDS_CHECK
template<class vobj>
class LatticeView : public LatticeAccelerator<vobj>
{
@@ -61,14 +62,36 @@ public:
  void * cpu_ptr;
#ifdef GRID_SIMT
  accelerator_inline const typename vobj::scalar_object operator()(size_t i) const {
+#ifdef LATTICE_BOUNDS_CHECK
+   assert(i<this->_odata_size);
+   assert(i>=0);
+#endif
    return coalescedRead(this->_odata[i]);
  }
#else
- accelerator_inline const vobj & operator()(size_t i) const { return this->_odata[i]; }
+ accelerator_inline const vobj & operator()(size_t i) const {
+#ifdef LATTICE_BOUNDS_CHECK
+   assert(i<this->_odata_size);
+   assert(i>=0);
+#endif
+   return this->_odata[i];
+ }
#endif
- accelerator_inline const vobj & operator[](size_t i) const { return this->_odata[i]; };
- accelerator_inline vobj & operator[](size_t i) { return this->_odata[i]; };
+ accelerator_inline const vobj & operator[](size_t i) const {
+#ifdef LATTICE_BOUNDS_CHECK
+   assert(i<this->_odata_size);
+   assert(i>=0);
+#endif
+   return this->_odata[i];
+ };
+ accelerator_inline vobj & operator[](size_t i) {
+#ifdef LATTICE_BOUNDS_CHECK
+   assert(i<this->_odata_size);
+   assert(i>=0);
+#endif
+   return this->_odata[i];
+ };
  accelerator_inline uint64_t begin(void) const { return 0;};
  accelerator_inline uint64_t end(void) const { return this->_odata_size; };
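
The checks are compiled out by the #undef above; turning them on means editing that line, it is not wired to configure. A sketch of what an enabled build buys, at the cost of an assert on every access:

  #define LATTICE_BOUNDS_CHECK   // in place of the #undef
  // thereafter v[i] and v(i) assert 0 <= i < the view's _odata_size,
  // on host and (where asserts are supported) on device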

View File

@@ -77,9 +77,16 @@ const int SpinorIndex = 2;
template<typename T> struct isSpinor {
  static constexpr bool value = (SpinorIndex==T::TensorLevel);
};
+const int CoarseIndex = 4;
+template<typename T> struct isCoarsened {
+  static constexpr bool value = (CoarseIndex<=T::TensorLevel);
+};
template <typename T> using IfSpinor = Invoke<std::enable_if< isSpinor<T>::value,int> > ;
template <typename T> using IfNotSpinor = Invoke<std::enable_if<!isSpinor<T>::value,int> > ;
+template <typename T> using IfCoarsened = Invoke<std::enable_if< isCoarsened<T>::value,int> > ;
+template <typename T> using IfNotCoarsened = Invoke<std::enable_if<!isCoarsened<T>::value,int> > ;
// ChrisK very keen to add extra space for Gparity doubling.
//
// Also add domain wall index, in a way where Wilson operator

View File

@@ -89,7 +89,8 @@ public:
  virtual void Mdiag (const FermionField &in, FermionField &out) { Mooee(in,out);}; // Same as Mooee applied to both CB's
  virtual void Mdir (const FermionField &in, FermionField &out,int dir,int disp)=0; // case by case Wilson, Clover, Cayley, ContFrac, PartFrac
  virtual void MdirAll(const FermionField &in, std::vector<FermionField> &out)=0; // case by case Wilson, Clover, Cayley, ContFrac, PartFrac
+ virtual std::vector<int> Directions(void) =0;
+ virtual std::vector<int> Displacements(void)=0;
  virtual void MomentumSpacePropagator(FermionField &out,const FermionField &in,RealD _m,std::vector<double> twist) { assert(0);};

View File

@@ -44,6 +44,9 @@ public:
  INHERIT_IMPL_TYPES(Impl);
  typedef StaggeredKernels<Impl> Kernels;
+ virtual std::vector<int> Directions(void) { return this->directions; };
+ virtual std::vector<int> Displacements(void){ return this->displacements;};
  FermionField _tmp;
  FermionField &tmp(void) { return _tmp; }

View File

@@ -49,6 +49,9 @@ public:
  INHERIT_IMPL_TYPES(Impl);
  typedef StaggeredKernels<Impl> Kernels;
+ virtual std::vector<int> Directions(void) { return this->directions; };
+ virtual std::vector<int> Displacements(void){ return this->displacements;};
  FermionField _tmp;
  FermionField &tmp(void) { return _tmp; }
@@ -208,7 +211,7 @@ public:
  LebesgueOrder LebesgueEvenOdd;
  // Comms buffer
- std::vector<SiteHalfSpinor,alignedAllocator<SiteHalfSpinor> > comm_buf;
+ // std::vector<SiteHalfSpinor,alignedAllocator<SiteHalfSpinor> > comm_buf;
  ///////////////////////////////////////////////////////////////
  // Conserved current utilities

View File

@@ -47,6 +47,9 @@ public:
  FermionField _tmp;
  FermionField &tmp(void) { return _tmp; }
+ virtual std::vector<int> Directions(void) { return this->directions; };
+ virtual std::vector<int> Displacements(void){ return this->displacements;};
  ////////////////////////////////////////
  // Performance monitoring
  ////////////////////////////////////////

View File

@@ -63,6 +63,9 @@ public:
  INHERIT_IMPL_TYPES(Impl);
  typedef WilsonKernels<Impl> Kernels;
+ virtual std::vector<int> Directions(void) { return this->directions; };
+ virtual std::vector<int> Displacements(void){ return this->displacements;};
  ///////////////////////////////////////////////////////////////
  // Implement the abstract base
  ///////////////////////////////////////////////////////////////
@@ -74,6 +77,20 @@ public:
  FermionField _tmp;
  FermionField &tmp(void) { return _tmp; }
+ void Report(void);
+ void ZeroCounters(void);
+ double DhopCalls;
+ double DhopCommTime;
+ double DhopComputeTime;
+ double DhopComputeTime2;
+ double DhopFaceTime;
+ double DhopTotalTime;
+ double DerivCalls;
+ double DerivCommTime;
+ double DerivComputeTime;
+ double DerivDhopComputeTime;
  //////////////////////////////////////////////////////////////////
  // override multiply; cut number routines if pass dagger argument
  // and also make interface more uniformly consistent
@@ -196,5 +213,3 @@ typedef WilsonFermion<WilsonImplF> WilsonFermionF;
typedef WilsonFermion<WilsonImplD> WilsonFermionD;
NAMESPACE_END(Grid);


@@ -72,6 +72,9 @@ public:
typedef WilsonKernels<Impl> Kernels;
PmuStat stat;
virtual std::vector<int> Directions(void) { return this->directions; };
virtual std::vector<int> Displacements(void){ return this->displacements;};
FermionField _tmp;
FermionField &tmp(void) { return _tmp; }
@@ -215,7 +218,7 @@ public:
LebesgueOrder LebesgueEvenOdd;
// Comms buffer
// std::vector<SiteHalfSpinor,alignedAllocator<SiteHalfSpinor> > comm_buf;
};


@@ -79,6 +79,8 @@ public:
_Mat.M(in,tmp);
G5R5(out,tmp);
}
virtual std::vector<int> Directions(void) { return _Mat.Directions();};
virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
};
@@ -127,6 +129,8 @@ public:
_Mat.M(in,tmp);
out=g5*tmp;
}
virtual std::vector<int> Directions(void) { return _Mat.Directions();};
virtual std::vector<int> Displacements(void){ return _Mat.Displacements();};
};
NAMESPACE_END(Grid);


@@ -133,14 +133,14 @@ void WilsonCloverFermion<Impl>::ImportGauge(const GaugeField &_Umu)
pickCheckerboard(Even, CloverTermEven, CloverTerm);
pickCheckerboard(Odd, CloverTermOdd, CloverTerm);
pickCheckerboard(Even, CloverTermDagEven, adj(CloverTerm));
pickCheckerboard(Odd, CloverTermDagOdd, adj(CloverTerm));
pickCheckerboard(Even, CloverTermInvEven, CloverTermInv);
pickCheckerboard(Odd, CloverTermInvOdd, CloverTermInv);
pickCheckerboard(Even, CloverTermInvDagEven, adj(CloverTermInv));
pickCheckerboard(Odd, CloverTermInvDagOdd, adj(CloverTermInv));
}
template <class Impl>


@@ -75,6 +75,91 @@ WilsonFermion<Impl>::WilsonFermion(GaugeField &_Umu, GridCartesian &Fgrid,
StencilOdd.BuildSurfaceList(1,vol4);
}
template<class Impl>
void WilsonFermion<Impl>::Report(void)
{
RealD NP = _grid->_Nprocessors;
RealD NN = _grid->NodeCount();
RealD volume = 1;
Coordinate latt = _grid->GlobalDimensions();
for(int mu=0;mu<Nd;mu++) volume=volume*latt[mu];
if ( DhopCalls > 0 ) {
std::cout << GridLogMessage << "#### Dhop calls report " << std::endl;
std::cout << GridLogMessage << "WilsonFermion Number of DhopEO Calls : " << DhopCalls << std::endl;
std::cout << GridLogMessage << "WilsonFermion TotalTime /Calls : " << DhopTotalTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion CommTime /Calls : " << DhopCommTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion FaceTime /Calls : " << DhopFaceTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion ComputeTime1/Calls : " << DhopComputeTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion ComputeTime2/Calls : " << DhopComputeTime2/ DhopCalls << " us" << std::endl;
// Average the compute time
_grid->GlobalSum(DhopComputeTime);
DhopComputeTime/=NP;
RealD mflops = 1320*volume*DhopCalls/DhopComputeTime/2; // 2 for red black counting
std::cout << GridLogMessage << "Average mflops/s per call : " << mflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per rank : " << mflops/NP << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node : " << mflops/NN << std::endl;
RealD Fullmflops = 1320*volume*DhopCalls/(DhopTotalTime)/2; // 2 for red black counting
std::cout << GridLogMessage << "Average mflops/s per call (full) : " << Fullmflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per rank (full): " << Fullmflops/NP << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node (full): " << Fullmflops/NN << std::endl;
}
if ( DerivCalls > 0 ) {
std::cout << GridLogMessage << "#### Deriv calls report "<< std::endl;
std::cout << GridLogMessage << "WilsonFermion Number of Deriv Calls : " <<DerivCalls <<std::endl;
std::cout << GridLogMessage << "WilsonFermion CommTime/Calls : " <<DerivCommTime/DerivCalls<<" us" <<std::endl;
std::cout << GridLogMessage << "WilsonFermion ComputeTime/Calls : " <<DerivComputeTime/DerivCalls<<" us" <<std::endl;
std::cout << GridLogMessage << "WilsonFermion Dhop ComputeTime/Calls : " <<DerivDhopComputeTime/DerivCalls<<" us" <<std::endl;
// how to count flops here?
RealD mflops = 144*volume*DerivCalls/DerivDhopComputeTime;
std::cout << GridLogMessage << "Average mflops/s per call ? : " << mflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node ? : " << mflops/NP << std::endl;
// how to count flops here?
RealD Fullmflops = 144*volume*DerivCalls/(DerivDhopComputeTime+DerivCommTime)/2; // 2 for red black counting
std::cout << GridLogMessage << "Average mflops/s per call (full) ? : " << Fullmflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node (full) ? : " << Fullmflops/NP << std::endl;
}
if (DerivCalls > 0 || DhopCalls > 0){
std::cout << GridLogMessage << "WilsonFermion Stencil" <<std::endl; Stencil.Report();
std::cout << GridLogMessage << "WilsonFermion StencilEven"<<std::endl; StencilEven.Report();
std::cout << GridLogMessage << "WilsonFermion StencilOdd" <<std::endl; StencilOdd.Report();
}
if ( DhopCalls > 0){
std::cout << GridLogMessage << "WilsonFermion Stencil Reporti()" <<std::endl; Stencil.Reporti(DhopCalls);
std::cout << GridLogMessage << "WilsonFermion StencilEven Reporti()"<<std::endl; StencilEven.Reporti(DhopCalls);
std::cout << GridLogMessage << "WilsonFermion StencilOdd Reporti()" <<std::endl; StencilOdd.Reporti(DhopCalls);
}
}
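An editorial gloss on the arithmetic above: the timers accumulate microseconds, so flops per microsecond is already MFlop/s, and the Dhop figure is

    MFlop/s = 1320 * volume * DhopCalls / (2 * DhopComputeTime)

where 1320 is the conventional flop count for one Wilson dslash application per lattice site, and the factor of 2 reflects that each even-odd (red-black) call updates only half the sites, exactly as the inline comments say.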
template<class Impl>
void WilsonFermion<Impl>::ZeroCounters(void) {
DhopCalls = 0; // ok
DhopCommTime = 0;
DhopComputeTime = 0;
DhopComputeTime2= 0;
DhopFaceTime = 0;
DhopTotalTime = 0;
DerivCalls = 0; // ok
DerivCommTime = 0;
DerivComputeTime = 0;
DerivDhopComputeTime = 0;
Stencil.ZeroCounters();
StencilEven.ZeroCounters();
StencilOdd.ZeroCounters();
Stencil.ZeroCountersi();
StencilEven.ZeroCountersi();
StencilOdd.ZeroCountersi();
}
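All of the counters zeroed above are filled by one bracketing idiom that recurs throughout the routines below: subtract the wall clock on entry to a phase and add it back on exit, so each counter accumulates elapsed microseconds across calls. A minimal stand-alone sketch; the usecond() here is a hypothetical analogue of Grid's helper of the same name:

```cpp
#include <chrono>

// Hypothetical stand-alone analogue of Grid's usecond(): wall clock in microseconds.
static double usecond(void) {
  using namespace std::chrono;
  return duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
}

double DhopCommTime = 0;   // accumulates total time spent in the bracketed phase

void timedPhase(void) {
  DhopCommTime -= usecond();   // entry: store the start time as a negative offset
  /* ... communications happen here ... */
  DhopCommTime += usecond();   // exit: net effect is += elapsed microseconds
}
```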
template <class Impl>
void WilsonFermion<Impl>::ImportGauge(const GaugeField &_Umu)
{
@@ -234,6 +319,7 @@ template <class Impl>
void WilsonFermion<Impl>::DerivInternal(StencilImpl &st, DoubledGaugeField &U,
GaugeField &mat, const FermionField &A,
const FermionField &B, int dag) {
DerivCalls++;
assert((dag == DaggerNo) || (dag == DaggerYes));
Compressor compressor(dag);
@@ -242,8 +328,11 @@ void WilsonFermion<Impl>::DerivInternal(StencilImpl &st, DoubledGaugeField &U,
FermionField Atilde(B.Grid());
Atilde = A;
DerivCommTime-=usecond();
st.HaloExchange(B, compressor);
DerivCommTime+=usecond();
DerivComputeTime-=usecond();
for (int mu = 0; mu < Nd; mu++) {
////////////////////////////////////////////////////////////////////////
// Flip gamma (1+g)<->(1-g) if dag
@@ -251,6 +340,7 @@ void WilsonFermion<Impl>::DerivInternal(StencilImpl &st, DoubledGaugeField &U,
int gamma = mu;
if (!dag) gamma += Nd;
DerivDhopComputeTime -= usecond();
int Ls=1;
Kernels::DhopDirKernel(st, U, st.CommBuf(), Ls, B.Grid()->oSites(), B, Btilde, mu, gamma);
@@ -258,7 +348,9 @@ void WilsonFermion<Impl>::DerivInternal(StencilImpl &st, DoubledGaugeField &U,
// spin trace outer product
//////////////////////////////////////////////////
Impl::InsertForce4D(mat, Btilde, Atilde, mu);
DerivDhopComputeTime += usecond();
}
DerivComputeTime += usecond();
}
template <class Impl>
@@ -392,13 +484,14 @@ void WilsonFermion<Impl>::DhopInternal(StencilImpl &st, LebesgueOrder &lo,
const FermionField &in,
FermionField &out, int dag)
{
DhopTotalTime-=usecond();
#ifdef GRID_OMP
if ( WilsonKernelsStatic::Comms == WilsonKernelsStatic::CommsAndCompute )
DhopInternalOverlappedComms(st,lo,U,in,out,dag);
else
#endif
DhopInternalSerial(st,lo,U,in,out,dag);
DhopTotalTime+=usecond();
}
template <class Impl>
@@ -417,38 +510,53 @@ void WilsonFermion<Impl>::DhopInternalOverlappedComms(StencilImpl &st, LebesgueO
/////////////////////////////
std::vector<std::vector<CommsRequest_t> > requests;
st.Prepare();
DhopFaceTime-=usecond();
st.HaloGather(in,compressor);
DhopFaceTime+=usecond();
DhopCommTime -=usecond();
st.CommunicateBegin(requests);
/////////////////////////////
// Overlap with comms
/////////////////////////////
DhopFaceTime-=usecond();
st.CommsMergeSHM(compressor);
DhopFaceTime+=usecond();
/////////////////////////////
// do the compute interior
/////////////////////////////
int Opt = WilsonKernelsStatic::Opt;
DhopComputeTime-=usecond();
if (dag == DaggerYes) {
Kernels::DhopDagKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out,1,0);
} else {
Kernels::DhopKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out,1,0);
}
DhopComputeTime+=usecond();
/////////////////////////////
// Complete comms
/////////////////////////////
st.CommunicateComplete(requests);
DhopCommTime +=usecond();
DhopFaceTime-=usecond();
st.CommsMerge(compressor);
DhopFaceTime+=usecond();
/////////////////////////////
// do the compute exterior
/////////////////////////////
DhopComputeTime2-=usecond();
if (dag == DaggerYes) {
Kernels::DhopDagKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out,0,1);
} else {
Kernels::DhopKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out,0,1);
}
DhopComputeTime2+=usecond();
};
@@ -460,14 +568,18 @@ void WilsonFermion<Impl>::DhopInternalSerial(StencilImpl &st, LebesgueOrder &lo,
{
assert((dag == DaggerNo) || (dag == DaggerYes));
Compressor compressor(dag);
DhopCommTime-=usecond();
st.HaloExchange(in, compressor);
DhopCommTime+=usecond();
DhopComputeTime-=usecond();
int Opt = WilsonKernelsStatic::Opt;
if (dag == DaggerYes) {
Kernels::DhopDagKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out);
} else {
Kernels::DhopKernel(Opt,st,U,st.CommBuf(),1,U.oSites(),in,out);
}
DhopComputeTime+=usecond();
};
/*Change ends */
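Taken together, the overlapped path above follows a standard comms/compute pipeline; the sketch below restates it schematically as an editorial paraphrase (method names are those used above, interiorCompute/exteriorCompute are placeholder names for the two kernel invocations):

```cpp
// Schematic of DhopInternalOverlappedComms, step by step:
st.Prepare();                     // set up send/receive lists
st.HaloGather(in, compressor);    // pack faces            (counted in DhopFaceTime)
st.CommunicateBegin(requests);    // post asynchronous comms (counted in DhopCommTime)
st.CommsMergeSHM(compressor);     // merge intra-node shared-memory faces
interiorCompute();                // DhopKernel(...,1,0): interior sites only,
                                  // overlapped with the in-flight communication
st.CommunicateComplete(requests); // wait for the halo to arrive
st.CommsMerge(compressor);        // unpack received faces
exteriorCompute();                // DhopKernel(...,0,1): boundary sites only
```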


@@ -0,0 +1,574 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: ./lib/qcd/action/fermion/WilsonKernelsAsmA64FX.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de> Regensburg University
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#pragma once
//#if defined(A64FXASM)
#if defined(A64FX)
// safety include
#include <arm_sve.h>
// undefine everything related to kernels
#include <simd/Fujitsu_A64FX_undef.h>
// enable A64FX body
#define WILSONKERNELSASMBODYA64FX
//#pragma message("A64FX Dslash: WilsonKernelsAsmBodyA64FX.h")
///////////////////////////////////////////////////////////
// If we are A64FX specialise the single precision routine
///////////////////////////////////////////////////////////
#if defined(DSLASHINTRIN)
//#pragma message ("A64FX Dslash: intrin")
#include <simd/Fujitsu_A64FX_intrin_single.h>
#else
#pragma message ("A64FX Dslash: asm")
#include <simd/Fujitsu_A64FX_asm_single.h>
#endif
/// Switch off the 5d vectorised code optimisations
#undef DWFVEC5D
/////////////////////////////////////////////////////////////////
// XYZT vectorised, undag Kernel, single
/////////////////////////////////////////////////////////////////
#undef KERNEL_DAG
#define INTERIOR_AND_EXTERIOR
#undef INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
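An editorial aside on the pattern above, which repeats for every group in this file: each explicit specialisation is declared without a body, and the body arrives by #include-ing a shared header whose code is steered by whichever of KERNEL_DAG / INTERIOR_AND_EXTERIOR / INTERIOR / EXTERIOR are defined at that point. A stripped-down sketch of the idiom, with all names invented for illustration:

```cpp
// KernelBody.h (hypothetical) would contain "{ ... }", an entire function body
// whose statements are selected with #ifdef KERNEL_DAG and friends.
#define KERNEL_DAG            // choose the daggered variant
#define INTERIOR_AND_EXTERIOR // process every stencil leg
template<> void Kernel<SomeImpl>::Site(Args args)
#include "KernelBody.h"       // the braces and body are pasted right here
```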
#undef INTERIOR_AND_EXTERIOR
#define INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#undef INTERIOR
#define EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
/////////////////////////////////////////////////////////////////
// XYZT vectorised, dag Kernel, single
/////////////////////////////////////////////////////////////////
#define KERNEL_DAG
#define INTERIOR_AND_EXTERIOR
#undef INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#define INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#undef INTERIOR
#define EXTERIOR
template<> void
WilsonKernels<WilsonImplF>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplF>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplFH>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplFH>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
// undefine
#include <simd/Fujitsu_A64FX_undef.h>
///////////////////////////////////////////////////////////
// If we are A64FX specialise the double precision routine
///////////////////////////////////////////////////////////
#if defined(DSLASHINTRIN)
#include <simd/Fujitsu_A64FX_intrin_double.h>
#else
#include <simd/Fujitsu_A64FX_asm_double.h>
#endif
// former KNL
//#define MAYBEPERM(A,perm) if (perm) { A ; }
//#define MULT_2SPIN(ptr,pf) MULT_ADDSUB_2SPIN(ptr,pf)
//#define COMPLEX_SIGNS(isigns) vComplexD *isigns = &signsD[0];
#define INTERIOR_AND_EXTERIOR
#undef INTERIOR
#undef EXTERIOR
/////////////////////////////////////////////////////////////////
// XYZT vectorised, undag Kernel, double
/////////////////////////////////////////////////////////////////
#undef KERNEL_DAG
#define INTERIOR_AND_EXTERIOR
#undef INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSite(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#define INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSiteInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#undef INTERIOR
#define EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSiteExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
/////////////////////////////////////////////////////////////////
// XYZT vectorised, dag Kernel, double
/////////////////////////////////////////////////////////////////
#define KERNEL_DAG
#define INTERIOR_AND_EXTERIOR
#undef INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSiteDag(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#define INTERIOR
#undef EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSiteDagInt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
#undef INTERIOR_AND_EXTERIOR
#undef INTERIOR
#define EXTERIOR
template<> void
WilsonKernels<WilsonImplD>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplD>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<WilsonImplDF>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
template<> void
WilsonKernels<ZWilsonImplDF>::AsmDhopSiteDagExt(StencilView &st, DoubledGaugeFieldView &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionFieldView &in, FermionFieldView &out)
#if defined (WILSONKERNELSASMBODYA64FX)
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBodyA64FX.h>
#else
#include <qcd/action/fermion/implementation/WilsonKernelsAsmBody.h>
#endif
// undefs
#undef WILSONKERNELSASMBODYA64FX
#include <simd/Fujitsu_A64FX_undef.h>
#endif //A64FXASM


@@ -0,0 +1,380 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: WilsonKernelsAsmBodyA64FX.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de> Regensburg University
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#ifdef KERNEL_DAG
#define DIR0_PROJ XP_PROJ
#define DIR1_PROJ YP_PROJ
#define DIR2_PROJ ZP_PROJ
#define DIR3_PROJ TP_PROJ
#define DIR4_PROJ XM_PROJ
#define DIR5_PROJ YM_PROJ
#define DIR6_PROJ ZM_PROJ
#define DIR7_PROJ TM_PROJ
#define DIR0_RECON XP_RECON
#define DIR1_RECON YP_RECON_ACCUM
#define DIR2_RECON ZP_RECON_ACCUM
#define DIR3_RECON TP_RECON_ACCUM
#define DIR4_RECON XM_RECON_ACCUM
#define DIR5_RECON YM_RECON_ACCUM
#define DIR6_RECON ZM_RECON_ACCUM
#define DIR7_RECON TM_RECON_ACCUM
#else
#define DIR0_PROJ XM_PROJ
#define DIR1_PROJ YM_PROJ
#define DIR2_PROJ ZM_PROJ
#define DIR3_PROJ TM_PROJ
#define DIR4_PROJ XP_PROJ
#define DIR5_PROJ YP_PROJ
#define DIR6_PROJ ZP_PROJ
#define DIR7_PROJ TP_PROJ
#define DIR0_RECON XM_RECON
#define DIR1_RECON YM_RECON_ACCUM
#define DIR2_RECON ZM_RECON_ACCUM
#define DIR3_RECON TM_RECON_ACCUM
#define DIR4_RECON XP_RECON_ACCUM
#define DIR5_RECON YP_RECON_ACCUM
#define DIR6_RECON ZP_RECON_ACCUM
#define DIR7_RECON TP_RECON_ACCUM
#endif
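An editorial note on the two maps above: each hopping direction applies a half-spinor projector, schematically P(+/-mu) = (1 +/- gamma_mu)/2 (normalisation conventions vary), and the daggered kernel uses the opposite-sign projector in every direction; this is the same flip recorded by the "Flip gamma (1+g)<->(1-g) if dag" comment in DerivInternal earlier in this diff. The KERNEL_DAG branch does nothing more: DIR0..DIR7 swap between the plus-type (XP/YP/ZP/TP) and minus-type (XM/YM/ZM/TM) projector and reconstruction pairs.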
//using namespace std;
#undef SHOW
//#define SHOW
#undef WHERE
#ifdef INTERIOR_AND_EXTERIOR
#define WHERE "INT_AND_EXT"
#endif
#ifdef INTERIOR
#define WHERE "INT"
#endif
#ifdef EXTERIOR
#define WHERE "EXT"
#endif
//#pragma message("here")
////////////////////////////////////////////////////////////////////////////////
// Comms then compute kernel
////////////////////////////////////////////////////////////////////////////////
#ifdef INTERIOR_AND_EXTERIOR
#define ASM_LEG(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
basep = st.GetPFInfo(nent,plocal); nent++; \
if ( local ) { \
LOAD_CHIMU(base); \
LOAD_TABLE(PERMUTE_DIR); \
PROJ; \
MAYBEPERM(PERMUTE_DIR,perm); \
} else { \
LOAD_CHI(base); \
} \
base = st.GetInfo(ptype,local,perm,NxtDir,ent,plocal); ent++; \
MULT_2SPIN_1(Dir); \
PREFETCH_CHIMU(base); \
PREFETCH_CHIMU_L2(basep); \
/* PREFETCH_GAUGE_L1(NxtDir); */ \
MULT_2SPIN_2; \
if (s == 0) { \
if ((Dir == 0) || (Dir == 4)) { PREFETCH_GAUGE_L2(Dir); } \
} \
RECON; \
#define ASM_LEG_XP(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
base = st.GetInfo(ptype,local,perm,Dir,ent,plocal); ent++; \
PREFETCH1_CHIMU(base); \
ASM_LEG(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON)
#define RESULT(base,basep) SAVE_RESULT(base,basep);
#endif
////////////////////////////////////////////////////////////////////////////////
// Pre comms kernel -- prefetch like normal because it is mostly right
////////////////////////////////////////////////////////////////////////////////
#ifdef INTERIOR
#define ASM_LEG(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
basep = st.GetPFInfo(nent,plocal); nent++; \
if ( local ) { \
LOAD_CHIMU(base); \
LOAD_TABLE(PERMUTE_DIR); \
PROJ; \
MAYBEPERM(PERMUTE_DIR,perm); \
}else if ( st.same_node[Dir] ) {LOAD_CHI(base);} \
base = st.GetInfo(ptype,local,perm,NxtDir,ent,plocal); ent++; \
if ( local || st.same_node[Dir] ) { \
MULT_2SPIN_1(Dir); \
PREFETCH_CHIMU(base); \
/* PREFETCH_GAUGE_L1(NxtDir); */ \
MULT_2SPIN_2; \
if (s == 0) { \
if ((Dir == 0) || (Dir == 4)) { PREFETCH_GAUGE_L2(Dir); } \
} \
RECON; \
PREFETCH_CHIMU_L2(basep); \
} else { PREFETCH_CHIMU(base); } \
#define ASM_LEG_XP(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
base = st.GetInfo(ptype,local,perm,Dir,ent,plocal); ent++; \
PREFETCH1_CHIMU(base); \
ASM_LEG(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON)
#define RESULT(base,basep) SAVE_RESULT(base,basep);
#endif
////////////////////////////////////////////////////////////////////////////////
// Post comms kernel
////////////////////////////////////////////////////////////////////////////////
#ifdef EXTERIOR
#define ASM_LEG(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
base = st.GetInfo(ptype,local,perm,Dir,ent,plocal); ent++; \
if((!local)&&(!st.same_node[Dir]) ) { \
LOAD_CHI(base); \
MULT_2SPIN_1(Dir); \
PREFETCH_CHIMU(base); \
/* PREFETCH_GAUGE_L1(NxtDir); */ \
MULT_2SPIN_2; \
if (s == 0) { \
if ((Dir == 0) || (Dir == 4)) { PREFETCH_GAUGE_L2(Dir); } \
} \
RECON; \
nmu++; \
}
#define ASM_LEG_XP(Dir,NxtDir,PERMUTE_DIR,PROJ,RECON) \
nmu=0; \
base = st.GetInfo(ptype,local,perm,Dir,ent,plocal); ent++;\
if((!local)&&(!st.same_node[Dir]) ) { \
LOAD_CHI(base); \
MULT_2SPIN_1(Dir); \
PREFETCH_CHIMU(base); \
/* PREFETCH_GAUGE_L1(NxtDir); */ \
MULT_2SPIN_2; \
if (s == 0) { \
if ((Dir == 0) || (Dir == 4)) { PREFETCH_GAUGE_L2(Dir); } \
} \
RECON; \
nmu++; \
}
#define RESULT(base,basep) if (nmu){ ADD_RESULT(base,base);}
#endif
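Before the kernel body begins, it is worth summarising what the three compile-time variants above do. The pseudocode below is an editorial paraphrase, not additional source: local and st.same_node[Dir] are the flags used in the macros above, and do_leg/prefetch_only stand in for the LOAD/MULT/RECON sequence:

```cpp
// Editorial paraphrase of one stencil leg under each compile-time variant:
#if defined(INTERIOR_AND_EXTERIOR)
  do_leg();                                    // single pass: every leg; RESULT is SAVE_RESULT
#elif defined(INTERIOR)
  if (local || st.same_node[Dir]) do_leg();    // neighbour already available pre-comms
  else prefetch_only();                        // halo still in flight: prefetch only
#elif defined(EXTERIOR)
  if (!local && !st.same_node[Dir]) {          // only the legs the interior pass skipped
    do_leg();
    nmu++;                                     // count halo legs touched
  }
  // RESULT becomes ADD_RESULT (accumulate), and is skipped when nmu == 0
#endif
```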
{
int nmu;
int local,perm, ptype;
uint64_t base;
uint64_t basep;
const uint64_t plocal =(uint64_t) & in[0];
MASK_REGS;
int nmax=U.oSites();
for(int site=0;site<Ns;site++) {
#ifndef EXTERIOR
// int sU =lo.Reorder(ssU);
int sU =ssU;
int ssn=ssU+1; if(ssn>=nmax) ssn=0;
// int sUn=lo.Reorder(ssn);
int sUn=ssn;
LOCK_GAUGE(0);
#else
int sU =ssU;
int ssn=ssU+1; if(ssn>=nmax) ssn=0;
int sUn=ssn;
#endif
for(int s=0;s<Ls;s++) {
ss =sU*Ls+s;
ssn=sUn*Ls+s;
int ent=ss*8;// 2*Ndim
int nent=ssn*8;
uint64_t delta_base, delta_base_p;
ASM_LEG_XP(Xp,Yp,PERMUTE_DIR3,DIR0_PROJ,DIR0_RECON);
#ifdef SHOW
float rescale = 64. * 12.;
std::cout << "=================================================================" << std::endl;
std::cout << "ss = " << ss << " ssn = " << ssn << std::endl;
std::cout << "sU = " << sU << " ssU = " << ssU << std::endl;
std::cout << " " << std::endl;
std::cout << "Dir = " << Xp << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Xp] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Yp,Zp,PERMUTE_DIR2,DIR1_PROJ,DIR1_RECON);
#ifdef SHOW
std::cout << "Dir = " << Yp << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Yp] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Zp,Tp,PERMUTE_DIR1,DIR2_PROJ,DIR2_RECON);
#ifdef SHOW
std::cout << "Dir = " << Zp << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Zp] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Tp,Xm,PERMUTE_DIR0,DIR3_PROJ,DIR3_RECON);
#ifdef SHOW
std::cout << "Dir = " << Tp << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Tp] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Xm,Ym,PERMUTE_DIR3,DIR4_PROJ,DIR4_RECON);
#ifdef SHOW
std::cout << "Dir = " << Xm << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Xm] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Ym,Zm,PERMUTE_DIR2,DIR5_PROJ,DIR5_RECON);
#ifdef SHOW
std::cout << "Dir = " << Ym << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Ym] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Zm,Tm,PERMUTE_DIR1,DIR6_PROJ,DIR6_RECON);
#ifdef SHOW
std::cout << "Dir = " << Zm << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Zm] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
ASM_LEG(Tm,Xp,PERMUTE_DIR0,DIR7_PROJ,DIR7_RECON);
#ifdef SHOW
std::cout << "Dir = " << Tm << " " << WHERE<< std::endl;
std::cout << "ent nent local perm = " << ent << " " << nent << " " << local << " " << perm << std::endl;
std::cout << "st.same_node[Dir] = " << st.same_node[Tm] << std::endl;
std::cout << "base = " << (base - plocal)/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
#ifdef EXTERIOR
if (nmu==0) break;
// if (nmu!=0) std::cout << "EXT "<<sU<<std::endl;
#endif
base = (uint64_t) &out[ss];
basep= st.GetPFInfo(nent,plocal); ent++;
basep = (uint64_t) &out[ssn];
RESULT(base,basep);
#ifdef SHOW
std::cout << "Dir = FINAL " << WHERE<< std::endl;;
base_ss = base;
std::cout << "base = " << (base - (uint64_t) &out[0])/rescale << std::endl;
std::cout << "Basep = " << (basep - plocal)/rescale << std::endl;
//printf("U = %llu\n", (uint64_t)&[sU](Dir));
std::cout << "----------------------------------------------------" << std::endl;
#endif
}
ssU++;
UNLOCK_GAUGE(0);
}
}
#undef DIR0_PROJ
#undef DIR1_PROJ
#undef DIR2_PROJ
#undef DIR3_PROJ
#undef DIR4_PROJ
#undef DIR5_PROJ
#undef DIR6_PROJ
#undef DIR7_PROJ
#undef DIR0_RECON
#undef DIR1_RECON
#undef DIR2_RECON
#undef DIR3_RECON
#undef DIR4_RECON
#undef DIR5_RECON
#undef DIR6_RECON
#undef DIR7_RECON
#undef ASM_LEG
#undef ASM_LEG_XP
#undef RESULT


@@ -0,0 +1,943 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: ./lib/qcd/action/fermion/WilsonKernelsHand.cc
Copyright (C) 2015
Author: Peter Boyle <paboyle@ph.ed.ac.uk>
Author: paboyle <paboyle@ph.ed.ac.uk>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#pragma once
#include <Grid/qcd/action/fermion/FermionCore.h>
#undef LOAD_CHIMU
#undef LOAD_CHI
#undef MULT_2SPIN
#undef PERMUTE_DIR
#undef XP_PROJ
#undef YP_PROJ
#undef ZP_PROJ
#undef TP_PROJ
#undef XM_PROJ
#undef YM_PROJ
#undef ZM_PROJ
#undef TM_PROJ
#undef XP_RECON
#undef XP_RECON_ACCUM
#undef XM_RECON
#undef XM_RECON_ACCUM
#undef YP_RECON_ACCUM
#undef YM_RECON_ACCUM
#undef ZP_RECON_ACCUM
#undef ZM_RECON_ACCUM
#undef TP_RECON_ACCUM
#undef TM_RECON_ACCUM
#undef ZERO_RESULT
#undef Chimu_00
#undef Chimu_01
#undef Chimu_02
#undef Chimu_10
#undef Chimu_11
#undef Chimu_12
#undef Chimu_20
#undef Chimu_21
#undef Chimu_22
#undef Chimu_30
#undef Chimu_31
#undef Chimu_32
#undef HAND_STENCIL_LEG
#undef HAND_STENCIL_LEG_INT
#undef HAND_STENCIL_LEG_EXT
#undef HAND_RESULT
#undef HAND_RESULT_INT
#undef HAND_RESULT_EXT
#define REGISTER
#define LOAD_CHIMU \
{const SiteSpinor & ref (in[offset]); \
Chimu_00=ref()(0)(0);\
Chimu_01=ref()(0)(1);\
Chimu_02=ref()(0)(2);\
Chimu_10=ref()(1)(0);\
Chimu_11=ref()(1)(1);\
Chimu_12=ref()(1)(2);\
Chimu_20=ref()(2)(0);\
Chimu_21=ref()(2)(1);\
Chimu_22=ref()(2)(2);\
Chimu_30=ref()(3)(0);\
Chimu_31=ref()(3)(1);\
Chimu_32=ref()(3)(2);\
std::cout << std::endl << "DEBUG -- LOAD_CHIMU" << std::endl; \
std::cout << "Chimu_00 -- " << Chimu_00 << std::endl; \
std::cout << "Chimu_01 -- " << Chimu_01 << std::endl; \
std::cout << "Chimu_02 -- " << Chimu_02 << std::endl; \
std::cout << "Chimu_10 -- " << Chimu_10 << std::endl; \
std::cout << "Chimu_11 -- " << Chimu_11 << std::endl; \
std::cout << "Chimu_12 -- " << Chimu_12 << std::endl; \
std::cout << "Chimu_20 -- " << Chimu_20 << std::endl; \
std::cout << "Chimu_21 -- " << Chimu_21 << std::endl; \
std::cout << "Chimu_22 -- " << Chimu_22 << std::endl; \
std::cout << "Chimu_30 -- " << Chimu_30 << std::endl; \
std::cout << "Chimu_31 -- " << Chimu_31 << std::endl; \
std::cout << "Chimu_32 -- " << Chimu_32 << std::endl; \
}
#define LOAD_CHI\
{const SiteHalfSpinor &ref(buf[offset]); \
Chi_00 = ref()(0)(0);\
Chi_01 = ref()(0)(1);\
Chi_02 = ref()(0)(2);\
Chi_10 = ref()(1)(0);\
Chi_11 = ref()(1)(1);\
Chi_12 = ref()(1)(2);\
std::cout << std::endl << "DEBUG -- LOAD_CHI" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl; \
}
// To splat or not to splat depends on the implementation
#define MULT_2SPIN(A)\
{auto & ref(U[sU](A)); \
Impl::loadLinkElement(U_00,ref()(0,0)); \
Impl::loadLinkElement(U_10,ref()(1,0)); \
Impl::loadLinkElement(U_20,ref()(2,0)); \
Impl::loadLinkElement(U_01,ref()(0,1)); \
Impl::loadLinkElement(U_11,ref()(1,1)); \
Impl::loadLinkElement(U_21,ref()(2,1)); \
UChi_00 = U_00*Chi_00;\
UChi_10 = U_00*Chi_10;\
UChi_01 = U_10*Chi_00;\
UChi_11 = U_10*Chi_10;\
UChi_02 = U_20*Chi_00;\
UChi_12 = U_20*Chi_10;\
UChi_00+= U_01*Chi_01;\
UChi_10+= U_01*Chi_11;\
UChi_01+= U_11*Chi_01;\
UChi_11+= U_11*Chi_11;\
UChi_02+= U_21*Chi_01;\
UChi_12+= U_21*Chi_11;\
Impl::loadLinkElement(U_00,ref()(0,2)); \
Impl::loadLinkElement(U_10,ref()(1,2)); \
Impl::loadLinkElement(U_20,ref()(2,2)); \
UChi_00+= U_00*Chi_02;\
UChi_10+= U_00*Chi_12;\
UChi_01+= U_10*Chi_02;\
UChi_11+= U_10*Chi_12;\
UChi_02+= U_20*Chi_02;\
UChi_12+= U_20*Chi_12;\
std::cout << std::endl << "DEBUG -- MULT_2SPIN" << std::endl; \
std::cout << "UChi_00 -- " << UChi_00 << std::endl; \
std::cout << "UChi_01 -- " << UChi_01 << std::endl; \
std::cout << "UChi_02 -- " << UChi_02 << std::endl; \
std::cout << "UChi_10 -- " << UChi_10 << std::endl; \
std::cout << "UChi_11 -- " << UChi_11 << std::endl; \
std::cout << "UChi_12 -- " << UChi_12 << std::endl; \
}
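Spelled out, MULT_2SPIN applies one SU(3) link to both spin rows of the half spinor:

    (U chi)_a^(s) = sum_b U_{ab} chi_b^(s),   a,b = 0..2 (colour), s = 0,1 (spin row);

the loads above stage two columns of U, accumulate, then fetch the third column, which is operand scheduling rather than a different contraction.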
#define PERMUTE_DIR(dir) \
std::cout << std::endl << "DEBUG -- PERM PRE" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl; \
permute##dir(Chi_00,Chi_00);\
permute##dir(Chi_01,Chi_01);\
permute##dir(Chi_02,Chi_02);\
permute##dir(Chi_10,Chi_10);\
permute##dir(Chi_11,Chi_11);\
permute##dir(Chi_12,Chi_12);\
std::cout << std::endl << "DEBUG -- PERM POST" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
// hspin(0)=fspin(0)+timesI(fspin(3));
// hspin(1)=fspin(1)+timesI(fspin(2));
#define XP_PROJ \
Chi_00 = Chimu_00+timesI(Chimu_30);\
Chi_01 = Chimu_01+timesI(Chimu_31);\
Chi_02 = Chimu_02+timesI(Chimu_32);\
Chi_10 = Chimu_10+timesI(Chimu_20);\
Chi_11 = Chimu_11+timesI(Chimu_21);\
Chi_12 = Chimu_12+timesI(Chimu_22);\
std::cout << std::endl << "DEBUG -- XP_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define YP_PROJ \
Chi_00 = Chimu_00-Chimu_30;\
Chi_01 = Chimu_01-Chimu_31;\
Chi_02 = Chimu_02-Chimu_32;\
Chi_10 = Chimu_10+Chimu_20;\
Chi_11 = Chimu_11+Chimu_21;\
Chi_12 = Chimu_12+Chimu_22;\
std::cout << std::endl << "DEBUG -- YP_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define ZP_PROJ \
Chi_00 = Chimu_00+timesI(Chimu_20); \
Chi_01 = Chimu_01+timesI(Chimu_21); \
Chi_02 = Chimu_02+timesI(Chimu_22); \
Chi_10 = Chimu_10-timesI(Chimu_30); \
Chi_11 = Chimu_11-timesI(Chimu_31); \
Chi_12 = Chimu_12-timesI(Chimu_32);\
std::cout << std::endl << "DEBUG -- ZP_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define TP_PROJ \
Chi_00 = Chimu_00+Chimu_20; \
Chi_01 = Chimu_01+Chimu_21; \
Chi_02 = Chimu_02+Chimu_22; \
Chi_10 = Chimu_10+Chimu_30; \
Chi_11 = Chimu_11+Chimu_31; \
Chi_12 = Chimu_12+Chimu_32;\
std::cout << std::endl << "DEBUG -- TP_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
// hspin(0)=fspin(0)-timesI(fspin(3));
// hspin(1)=fspin(1)-timesI(fspin(2));
#define XM_PROJ \
Chi_00 = Chimu_00-timesI(Chimu_30);\
Chi_01 = Chimu_01-timesI(Chimu_31);\
Chi_02 = Chimu_02-timesI(Chimu_32);\
Chi_10 = Chimu_10-timesI(Chimu_20);\
Chi_11 = Chimu_11-timesI(Chimu_21);\
Chi_12 = Chimu_12-timesI(Chimu_22);\
std::cout << std::endl << "DEBUG -- XM_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define YM_PROJ \
Chi_00 = Chimu_00+Chimu_30;\
Chi_01 = Chimu_01+Chimu_31;\
Chi_02 = Chimu_02+Chimu_32;\
Chi_10 = Chimu_10-Chimu_20;\
Chi_11 = Chimu_11-Chimu_21;\
Chi_12 = Chimu_12-Chimu_22;\
std::cout << std::endl << "DEBUG -- YM_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define ZM_PROJ \
Chi_00 = Chimu_00-timesI(Chimu_20); \
Chi_01 = Chimu_01-timesI(Chimu_21); \
Chi_02 = Chimu_02-timesI(Chimu_22); \
Chi_10 = Chimu_10+timesI(Chimu_30); \
Chi_11 = Chimu_11+timesI(Chimu_31); \
Chi_12 = Chimu_12+timesI(Chimu_32);\
std::cout << std::endl << "DEBUG -- ZM_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
#define TM_PROJ \
Chi_00 = Chimu_00-Chimu_20; \
Chi_01 = Chimu_01-Chimu_21; \
Chi_02 = Chimu_02-Chimu_22; \
Chi_10 = Chimu_10-Chimu_30; \
Chi_11 = Chimu_11-Chimu_31; \
Chi_12 = Chimu_12-Chimu_32;\
std::cout << std::endl << "DEBUG -- TM_PROJ" << std::endl; \
std::cout << "Chi_00 -- " << Chi_00 << std::endl; \
std::cout << "Chi_01 -- " << Chi_01 << std::endl; \
std::cout << "Chi_02 -- " << Chi_02 << std::endl; \
std::cout << "Chi_10 -- " << Chi_10 << std::endl; \
std::cout << "Chi_11 -- " << Chi_11 << std::endl; \
std::cout << "Chi_12 -- " << Chi_12 << std::endl;
// fspin(0)=hspin(0);
// fspin(1)=hspin(1);
// fspin(2)=timesMinusI(hspin(1));
// fspin(3)=timesMinusI(hspin(0));
#define XP_RECON\
result_00 = UChi_00;\
result_01 = UChi_01;\
result_02 = UChi_02;\
result_10 = UChi_10;\
result_11 = UChi_11;\
result_12 = UChi_12;\
result_20 = timesMinusI(UChi_10);\
result_21 = timesMinusI(UChi_11);\
result_22 = timesMinusI(UChi_12);\
result_30 = timesMinusI(UChi_00);\
result_31 = timesMinusI(UChi_01);\
result_32 = timesMinusI(UChi_02);\
std::cout << std::endl << "DEBUG -- XP_RECON" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define XP_RECON_ACCUM\
result_00+=UChi_00;\
result_01+=UChi_01;\
result_02+=UChi_02;\
result_10+=UChi_10;\
result_11+=UChi_11;\
result_12+=UChi_12;\
result_20-=timesI(UChi_10);\
result_21-=timesI(UChi_11);\
result_22-=timesI(UChi_12);\
result_30-=timesI(UChi_00);\
result_31-=timesI(UChi_01);\
result_32-=timesI(UChi_02);\
std::cout << std::endl << "DEBUG -- XP_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define XM_RECON\
result_00 = UChi_00;\
result_01 = UChi_01;\
result_02 = UChi_02;\
result_10 = UChi_10;\
result_11 = UChi_11;\
result_12 = UChi_12;\
result_20 = timesI(UChi_10);\
result_21 = timesI(UChi_11);\
result_22 = timesI(UChi_12);\
result_30 = timesI(UChi_00);\
result_31 = timesI(UChi_01);\
result_32 = timesI(UChi_02);\
std::cout << std::endl << "DEBUG -- XM_RECON" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define XM_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20+= timesI(UChi_10);\
result_21+= timesI(UChi_11);\
result_22+= timesI(UChi_12);\
result_30+= timesI(UChi_00);\
result_31+= timesI(UChi_01);\
result_32+= timesI(UChi_02);\
std::cout << std::endl << "DEBUG -- XM_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define YP_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20+= UChi_10;\
result_21+= UChi_11;\
result_22+= UChi_12;\
result_30-= UChi_00;\
result_31-= UChi_01;\
result_32-= UChi_02;\
std::cout << std::endl << "DEBUG -- YP_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define YM_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20-= UChi_10;\
result_21-= UChi_11;\
result_22-= UChi_12;\
result_30+= UChi_00;\
result_31+= UChi_01;\
result_32+= UChi_02;\
std::cout << std::endl << "DEBUG -- YM_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define ZP_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20-= timesI(UChi_00); \
result_21-= timesI(UChi_01); \
result_22-= timesI(UChi_02); \
result_30+= timesI(UChi_10); \
result_31+= timesI(UChi_11); \
result_32+= timesI(UChi_12);\
std::cout << std::endl << "DEBUG -- ZP_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define ZM_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20+= timesI(UChi_00); \
result_21+= timesI(UChi_01); \
result_22+= timesI(UChi_02); \
result_30-= timesI(UChi_10); \
result_31-= timesI(UChi_11); \
result_32-= timesI(UChi_12);\
std::cout << std::endl << "DEBUG -- ZM_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define TP_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20+= UChi_00; \
result_21+= UChi_01; \
result_22+= UChi_02; \
result_30+= UChi_10; \
result_31+= UChi_11; \
result_32+= UChi_12;\
std::cout << std::endl << "DEBUG -- TP_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define TM_RECON_ACCUM\
result_00+= UChi_00;\
result_01+= UChi_01;\
result_02+= UChi_02;\
result_10+= UChi_10;\
result_11+= UChi_11;\
result_12+= UChi_12;\
result_20-= UChi_00; \
result_21-= UChi_01; \
result_22-= UChi_02; \
result_30-= UChi_10; \
result_31-= UChi_11; \
result_32-= UChi_12;\
std::cout << std::endl << "DEBUG -- TM_RECON_ACCUM" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;
#define HAND_STENCIL_LEG(PROJ,PERM,DIR,RECON) \
SE=st.GetEntry(ptype,DIR,ss); \
offset = SE->_offset; \
local = SE->_is_local; \
perm = SE->_permute; \
if ( local ) { \
LOAD_CHIMU; \
PROJ; \
if ( perm) { \
PERMUTE_DIR(PERM); \
} \
} else { \
LOAD_CHI; \
} \
MULT_2SPIN(DIR); \
RECON;
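/* Note: one leg expansion is a fixed pipeline: fetch the neighbour
   (LOAD_CHIMU when it is local; LOAD_CHI from the comms buffer otherwise,
   where the buffer already holds projected half spinors so PROJ is skipped),
   spin-project (PROJ), permute SIMD lanes when the neighbour sits across a
   vectorised-lattice boundary (PERMUTE_DIR), multiply by the gauge link
   (MULT_2SPIN), then spin-reconstruct into the result (RECON). */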
#define HAND_STENCIL_LEG_INT(PROJ,PERM,DIR,RECON) \
SE=st.GetEntry(ptype,DIR,ss); \
offset = SE->_offset; \
local = SE->_is_local; \
perm = SE->_permute; \
if ( local ) { \
LOAD_CHIMU; \
PROJ; \
if ( perm) { \
PERMUTE_DIR(PERM); \
} \
} else if ( st.same_node[DIR] ) { \
LOAD_CHI; \
} \
if (local || st.same_node[DIR] ) { \
MULT_2SPIN(DIR); \
RECON; \
}
#define HAND_STENCIL_LEG_EXT(PROJ,PERM,DIR,RECON) \
SE=st.GetEntry(ptype,DIR,ss); \
offset = SE->_offset; \
if((!SE->_is_local)&&(!st.same_node[DIR]) ) { \
LOAD_CHI; \
MULT_2SPIN(DIR); \
RECON; \
nmu++; \
}
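/* Note: the three leg variants split the work across the comms overlap.
   HAND_STENCIL_LEG handles every leg; the _INT variant takes only legs whose
   data is already resident (local or same-node); the _EXT variant takes
   exactly the legs _INT skipped and counts them in nmu, so HAND_RESULT_EXT
   can skip the write-back when nothing arrived from off node. A stand-alone
   toy sketch of that split (hypothetical types, not Grid API; #if 0 guarded
   so it never compiles here): */
#if 0
#include <cstdio>
struct Leg { bool is_local; bool same_node; };
static bool interior_handles(const Leg &l) { return l.is_local || l.same_node; }
static bool exterior_handles(const Leg &l) { return !l.is_local && !l.same_node; }
int main() {
  Leg legs[3] = { {true, false}, {false, true}, {false, false} };
  int nmu = 0;                      // counts exterior legs, as in HAND_RESULT_EXT
  for (const Leg &l : legs) {
    if (interior_handles(l)) std::printf("interior pass handles this leg\n");
    if (exterior_handles(l)) { std::printf("exterior pass handles this leg\n"); nmu++; }
  }
  std::printf("nmu = %d\n", nmu);   // each leg is handled by exactly one pass
  return 0;
}
#endif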
#define HAND_RESULT(ss) \
{ \
SiteSpinor & ref (out[ss]); \
vstream(ref()(0)(0),result_00); \
vstream(ref()(0)(1),result_01); \
vstream(ref()(0)(2),result_02); \
vstream(ref()(1)(0),result_10); \
vstream(ref()(1)(1),result_11); \
vstream(ref()(1)(2),result_12); \
vstream(ref()(2)(0),result_20); \
vstream(ref()(2)(1),result_21); \
vstream(ref()(2)(2),result_22); \
vstream(ref()(3)(0),result_30); \
vstream(ref()(3)(1),result_31); \
vstream(ref()(3)(2),result_32); \
std::cout << std::endl << "DEBUG -- RESULT" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;\
}
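/* Note: HAND_RESULT writes the twelve accumulated colour-spin components
   into out[ss] through vstream, Grid's vector store wrapper (a streaming,
   non-temporal store on some targets), then dumps them for debugging;
   HAND_RESULT_EXT below instead accumulates with += and writes back only
   when at least one off-node leg contributed (nmu non-zero). */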
#define HAND_RESULT_EXT(ss) \
if (nmu){ \
SiteSpinor & ref (out[ss]); \
ref()(0)(0)+=result_00; \
ref()(0)(1)+=result_01; \
ref()(0)(2)+=result_02; \
ref()(1)(0)+=result_10; \
ref()(1)(1)+=result_11; \
ref()(1)(2)+=result_12; \
ref()(2)(0)+=result_20; \
ref()(2)(1)+=result_21; \
ref()(2)(2)+=result_22; \
ref()(3)(0)+=result_30; \
ref()(3)(1)+=result_31; \
ref()(3)(2)+=result_32; \
std::cout << std::endl << "DEBUG -- RESULT EXT" << std::endl; \
std::cout << "result_00 -- " << result_00 << std::endl; \
std::cout << "result_01 -- " << result_01 << std::endl; \
std::cout << "result_02 -- " << result_02 << std::endl; \
std::cout << "result_10 -- " << result_10 << std::endl; \
std::cout << "result_11 -- " << result_11 << std::endl; \
std::cout << "result_12 -- " << result_12 << std::endl; \
std::cout << "result_20 -- " << result_20 << std::endl; \
std::cout << "result_21 -- " << result_21 << std::endl; \
std::cout << "result_22 -- " << result_22 << std::endl; \
std::cout << "result_30 -- " << result_30 << std::endl; \
std::cout << "result_31 -- " << result_31 << std::endl; \
std::cout << "result_32 -- " << result_32 << std::endl;\
}
#define HAND_DECLARATIONS(a) \
Simd result_00; \
Simd result_01; \
Simd result_02; \
Simd result_10; \
Simd result_11; \
Simd result_12; \
Simd result_20; \
Simd result_21; \
Simd result_22; \
Simd result_30; \
Simd result_31; \
Simd result_32; \
Simd Chi_00; \
Simd Chi_01; \
Simd Chi_02; \
Simd Chi_10; \
Simd Chi_11; \
Simd Chi_12; \
Simd UChi_00; \
Simd UChi_01; \
Simd UChi_02; \
Simd UChi_10; \
Simd UChi_11; \
Simd UChi_12; \
Simd U_00; \
Simd U_10; \
Simd U_20; \
Simd U_01; \
Simd U_11; \
Simd U_21;\
Simd debugreg;\
svbool_t pg1; /* SVE predicate: this debug path assumes an A64FX/SVE build */ \
pg1 = svptrue_b64();
#define ZERO_RESULT \
result_00=Zero(); \
result_01=Zero(); \
result_02=Zero(); \
result_10=Zero(); \
result_11=Zero(); \
result_12=Zero(); \
result_20=Zero(); \
result_21=Zero(); \
result_22=Zero(); \
result_30=Zero(); \
result_31=Zero(); \
result_32=Zero();
#define Chimu_00 Chi_00
#define Chimu_01 Chi_01
#define Chimu_02 Chi_02
#define Chimu_10 Chi_10
#define Chimu_11 Chi_11
#define Chimu_12 Chi_12
#define Chimu_20 UChi_00
#define Chimu_21 UChi_01
#define Chimu_22 UChi_02
#define Chimu_30 UChi_10
#define Chimu_31 UChi_11
#define Chimu_32 UChi_12
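/* Note: the Chimu_2x/3x names alias the UChi registers, so LOAD_CHIMU can
   hold a full four-row spinor in the twelve Chi/UChi Simd variables with no
   extra storage. This is safe because every *_PROJ macro consumes Chimu_2x/3x
   before MULT_2SPIN overwrites UChi_xx. A stand-alone toy illustration
   (hypothetical names, not Grid code; #if 0 guarded): */
#if 0
#include <cassert>
double Chi_reg, UChi_reg;
#define Upper_row Chi_reg   /* as Chimu_00 aliases Chi_00  */
#define Lower_row UChi_reg  /* as Chimu_20 aliases UChi_00 */
int main() {
  Lower_row = 1.5;                 /* writes UChi_reg: same object, no new register */
  assert(&Lower_row == &UChi_reg);
  return 0;
}
#endif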
NAMESPACE_BEGIN(Grid);
template<class Impl> void
WilsonKernels<Impl>::HandDhopSite(StencilView &st, DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
// T==0, Z==1, Y==2, X==3 expect 1,2,2,2 simd layout etc...
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
int offset,local,perm, ptype;
StencilEntry *SE;
HAND_STENCIL_LEG(XM_PROJ,3,Xp,XM_RECON);
HAND_STENCIL_LEG(YM_PROJ,2,Yp,YM_RECON_ACCUM);
HAND_STENCIL_LEG(ZM_PROJ,1,Zp,ZM_RECON_ACCUM);
HAND_STENCIL_LEG(TM_PROJ,0,Tp,TM_RECON_ACCUM);
HAND_STENCIL_LEG(XP_PROJ,3,Xm,XP_RECON_ACCUM);
HAND_STENCIL_LEG(YP_PROJ,2,Ym,YP_RECON_ACCUM);
HAND_STENCIL_LEG(ZP_PROJ,1,Zm,ZP_RECON_ACCUM);
HAND_STENCIL_LEG(TP_PROJ,0,Tm,TP_RECON_ACCUM);
HAND_RESULT(ss);
}
template<class Impl>
void WilsonKernels<Impl>::HandDhopSiteDag(StencilView &st,DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
StencilEntry *SE;
int offset,local,perm, ptype;
HAND_STENCIL_LEG(XP_PROJ,3,Xp,XP_RECON);
HAND_STENCIL_LEG(YP_PROJ,2,Yp,YP_RECON_ACCUM);
HAND_STENCIL_LEG(ZP_PROJ,1,Zp,ZP_RECON_ACCUM);
HAND_STENCIL_LEG(TP_PROJ,0,Tp,TP_RECON_ACCUM);
HAND_STENCIL_LEG(XM_PROJ,3,Xm,XM_RECON_ACCUM);
HAND_STENCIL_LEG(YM_PROJ,2,Ym,YM_RECON_ACCUM);
HAND_STENCIL_LEG(ZM_PROJ,1,Zm,ZM_RECON_ACCUM);
HAND_STENCIL_LEG(TM_PROJ,0,Tm,TM_RECON_ACCUM);
HAND_RESULT(ss);
}
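/* Note: comparing the two leg tables, HandDhopSite pairs XM_PROJ with the
   Xp leg while HandDhopSiteDag pairs XP_PROJ with it, and likewise in Y, Z
   and T: the daggered kernel flips the sign of every gamma_mu. Schematically
   (an editor's sketch, up to Grid's overall sign and normalisation):
       Dhop psi(x) ~ sum_mu [ U_mu(x) (1 - gamma_mu) psi(x+mu)
                            + U_mu^dag(x-mu) (1 + gamma_mu) psi(x-mu) ]
   with gamma_mu -> -gamma_mu throughout for the daggered kernel. */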
template<class Impl> void
WilsonKernels<Impl>::HandDhopSiteInt(StencilView &st,DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
// T==0, Z==1, Y==2, X==3 expect 1,2,2,2 simd layout etc...
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
int offset,local,perm, ptype;
StencilEntry *SE;
ZERO_RESULT;
HAND_STENCIL_LEG_INT(XM_PROJ,3,Xp,XM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(YM_PROJ,2,Yp,YM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(ZM_PROJ,1,Zp,ZM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(TM_PROJ,0,Tp,TM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(XP_PROJ,3,Xm,XP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(YP_PROJ,2,Ym,YP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(ZP_PROJ,1,Zm,ZP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(TP_PROJ,0,Tm,TP_RECON_ACCUM);
HAND_RESULT(ss);
}
template<class Impl>
void WilsonKernels<Impl>::HandDhopSiteDagInt(StencilView &st,DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
StencilEntry *SE;
int offset,local,perm, ptype;
ZERO_RESULT;
HAND_STENCIL_LEG_INT(XP_PROJ,3,Xp,XP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(YP_PROJ,2,Yp,YP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(ZP_PROJ,1,Zp,ZP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(TP_PROJ,0,Tp,TP_RECON_ACCUM);
HAND_STENCIL_LEG_INT(XM_PROJ,3,Xm,XM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(YM_PROJ,2,Ym,YM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(ZM_PROJ,1,Zm,ZM_RECON_ACCUM);
HAND_STENCIL_LEG_INT(TM_PROJ,0,Tm,TM_RECON_ACCUM);
HAND_RESULT(ss);
}
template<class Impl> void
WilsonKernels<Impl>::HandDhopSiteExt(StencilView &st,DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
// T==0, Z==1, Y==2, X==3 expect 1,2,2,2 simd layout etc...
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
int offset, ptype;
StencilEntry *SE;
int nmu=0;
ZERO_RESULT;
HAND_STENCIL_LEG_EXT(XM_PROJ,3,Xp,XM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(YM_PROJ,2,Yp,YM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(ZM_PROJ,1,Zp,ZM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(TM_PROJ,0,Tp,TM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(XP_PROJ,3,Xm,XP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(YP_PROJ,2,Ym,YP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(ZP_PROJ,1,Zm,ZP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(TP_PROJ,0,Tm,TP_RECON_ACCUM);
HAND_RESULT_EXT(ss);
}
template<class Impl>
void WilsonKernels<Impl>::HandDhopSiteDagExt(StencilView &st,DoubledGaugeFieldView &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionFieldView &in, FermionFieldView &out)
{
typedef typename Simd::scalar_type S;
typedef typename Simd::vector_type V;
HAND_DECLARATIONS(ignore);
StencilEntry *SE;
int offset, ptype;
int nmu=0;
ZERO_RESULT;
HAND_STENCIL_LEG_EXT(XP_PROJ,3,Xp,XP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(YP_PROJ,2,Yp,YP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(ZP_PROJ,1,Zp,ZP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(TP_PROJ,0,Tp,TP_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(XM_PROJ,3,Xm,XM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(YM_PROJ,2,Ym,YM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(ZM_PROJ,1,Zm,ZM_RECON_ACCUM);
HAND_STENCIL_LEG_EXT(TM_PROJ,0,Tm,TM_RECON_ACCUM);
HAND_RESULT_EXT(ss);
}
////////////// Wilson ; uses this implementation /////////////////////
NAMESPACE_END(Grid);
#undef LOAD_CHIMU
#undef LOAD_CHI
#undef MULT_2SPIN
#undef PERMUTE_DIR
#undef XP_PROJ
#undef YP_PROJ
#undef ZP_PROJ
#undef TP_PROJ
#undef XM_PROJ
#undef YM_PROJ
#undef ZM_PROJ
#undef TM_PROJ
#undef XP_RECON
#undef XP_RECON_ACCUM
#undef XM_RECON
#undef XM_RECON_ACCUM
#undef YP_RECON_ACCUM
#undef YM_RECON_ACCUM
#undef ZP_RECON_ACCUM
#undef ZM_RECON_ACCUM
#undef TP_RECON_ACCUM
#undef TM_RECON_ACCUM
#undef ZERO_RESULT
#undef Chimu_00
#undef Chimu_01
#undef Chimu_02
#undef Chimu_10
#undef Chimu_11
#undef Chimu_12
#undef Chimu_20
#undef Chimu_21
#undef Chimu_22
#undef Chimu_30
#undef Chimu_31
#undef Chimu_32
#undef HAND_STENCIL_LEG
#undef HAND_STENCIL_LEG_INT
#undef HAND_STENCIL_LEG_EXT
#undef HAND_RESULT
#undef HAND_RESULT_INT
#undef HAND_RESULT_EXT


@@ -501,4 +501,3 @@ void WilsonKernels<Impl>::DhopKernel(int Opt,StencilImpl &st, DoubledGaugeField
 #undef ASM_CALL
 NAMESPACE_END(Grid);


@@ -1 +0,0 @@
-../WilsonKernelsInstantiation.cc.master


@@ -0,0 +1,51 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: ./lib/qcd/action/fermion/WilsonKernels.cc
Copyright (C) 2015, 2020
Author: Peter Boyle <paboyle@ph.ed.ac.uk>
Author: Peter Boyle <peterboyle@Peters-MacBook-Pro-2.local>
Author: paboyle <paboyle@ph.ed.ac.uk>
Author: Nils Meyer <nils.meyer@ur.de> Regensburg University
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution
directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/qcd/action/fermion/FermionCore.h>
#include <Grid/qcd/action/fermion/implementation/WilsonKernelsImplementation.h>
#include <Grid/qcd/action/fermion/implementation/WilsonKernelsHandImplementation.h>
#ifndef AVX512
#ifndef QPX
#ifndef A64FX
#ifndef A64FXFIXEDSIZE
#include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmImplementation.h>
#endif
#endif
#endif
#endif
NAMESPACE_BEGIN(Grid);
#include "impl.h"
template class WilsonKernels<IMPLEMENTATION>;
NAMESPACE_END(Grid);
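/* Note: the nested #ifndef guards above reduce to one condition; the generic
   ASM implementation header is pulled in only when none of the explicit
   targets is defined:
       #if !defined(AVX512) && !defined(QPX) && !defined(A64FX) && !defined(A64FXFIXEDSIZE)
       #include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmImplementation.h>
       #endif
*/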

@@ -4,11 +4,12 @@ Grid physics library, www.github.com/paboyle/Grid
 Source file: ./lib/qcd/action/fermion/WilsonKernels.cc
-Copyright (C) 2015
+Copyright (C) 2015, 2020
 Author: Peter Boyle <paboyle@ph.ed.ac.uk>
 Author: Peter Boyle <peterboyle@Peters-MacBook-Pro-2.local>
 Author: paboyle <paboyle@ph.ed.ac.uk>
+Author: Nils Meyer <nils.meyer@ur.de> Regensburg University
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
@@ -34,9 +35,13 @@ directory
 #ifndef AVX512
 #ifndef QPX
+#ifndef A64FX
+#ifndef A64FXFIXEDSIZE
 #include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmImplementation.h>
 #endif
 #endif
+#endif
+#endif
 NAMESPACE_BEGIN(Grid);
@@ -44,4 +49,3 @@ NAMESPACE_BEGIN(Grid);
 template class WilsonKernels<IMPLEMENTATION>;
 NAMESPACE_END(Grid);


@@ -37,6 +37,7 @@ directory
 ////////////////////////////////////////////////////////////////////////
 NAMESPACE_BEGIN(Grid);
 #include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmAvx512.h>
+#include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmA64FX.h>
 #include <Grid/qcd/action/fermion/implementation/WilsonKernelsAsmQPX.h>
 NAMESPACE_END(Grid);

@@ -59,7 +59,7 @@ public:
   }
   static inline GaugeLinkField
   CovShiftIdentityBackward(const GaugeLinkField &Link, int mu) {
-    return Cshift(closure(adj(Link)), mu, -1);
+    return Cshift(adj(Link), mu, -1);
   }
   static inline GaugeLinkField
   CovShiftIdentityForward(const GaugeLinkField &Link, int mu) {


@@ -140,17 +140,7 @@ private:
   // Can move this outside?
   typedef IntegratorType<SmearingPolicy> TheIntegrator;
-  // Metric
-  //TrivialMetric<typename Implementation::Field> Mtr;
-  ConjugateGradient<LatticeGaugeField> CG(1.0e-8,10000);
-  LaplacianParams LapPar(0.0001, 1.0, 10000, 1e-8, 12, 64);
-  // RealD Kappa = 1.2;
-  RealD Kappa = Parameters.Kappa;
-  std::cout << GridLogMessage << "Kappa = " << Kappa << std::endl;
-  // Better to pass the generalised momenta to the integrator
-  LaplacianAdjointField<PeriodicGimplR> Laplacian(UGrid, CG, LapPar, Kappa);
-  TheIntegrator MDynamics(UGrid, Parameters.MD, TheAction, Smearing, Laplacian);
+  TheIntegrator MDynamics(UGrid, Parameters.MD, TheAction, Smearing);
   if (Parameters.StartingType == "HotStart") {
     // Hot start


@@ -53,7 +53,6 @@ struct HMCparameters: Serializable {
   bool, MetropolisTest,
   Integer, NoMetropolisUntil,
   std::string, StartingType,
-  RealD, Kappa,
   IntegratorParameters, MD)
   HMCparameters() {


@ -73,8 +73,7 @@ protected:
double t_U; // Track time passing on each level and for U and for P double t_U; // Track time passing on each level and for U and for P
std::vector<double> t_P; std::vector<double> t_P;
// MomentaField P; MomentaField P;
GeneralisedMomenta<FieldImplementation > P;
SmearingPolicy& Smearer; SmearingPolicy& Smearer;
RepresentationPolicy Representations; RepresentationPolicy Representations;
IntegratorParameters Params; IntegratorParameters Params;
@ -84,7 +83,7 @@ protected:
void update_P(Field& U, int level, double ep) void update_P(Field& U, int level, double ep)
{ {
t_P[level] += ep; t_P[level] += ep;
update_P(P.Mom, U, level, ep); update_P(P, U, level, ep);
std::cout << GridLogIntegrator << "[" << level << "] P " << " dt " << ep << " : t_P " << t_P[level] << std::endl; std::cout << GridLogIntegrator << "[" << level << "] P " << " dt " << ep << " : t_P " << t_P[level] << std::endl;
} }
@ -112,21 +111,6 @@ protected:
// input U actually not used in the fundamental case // input U actually not used in the fundamental case
// Fundamental updates, include smearing // Fundamental updates, include smearing
// Generalised momenta
// Derivative of the kinetic term must be computed before
// Mom is the momenta and gets updated by the
// actions derivatives
MomentaField MomDer(P.Mom.Grid());
P.M.ImportGauge(U);
P.DerivativeU(P.Mom, MomDer);
Mom -= MomDer * ep;
// Auxiliary fields
P.update_auxiliary_momenta(ep*0.5);
P.AuxiliaryFieldsDerivative(MomDer);
Mom -= MomDer * ep;
P.update_auxiliary_momenta(ep*0.5);
for (int a = 0; a < as[level].actions.size(); ++a) { for (int a = 0; a < as[level].actions.size(); ++a) {
double start_full = usecond(); double start_full = usecond();
Field force(U.Grid()); Field force(U.Grid());
@ -153,83 +137,9 @@ protected:
as[level].apply(update_P_hireps, Representations, Mom, U, ep); as[level].apply(update_P_hireps, Representations, Mom, U, ep);
} }
void implicit_update_P(Field& U, int level, double ep, bool intermediate = false) {
t_P[level] += ep;
std::cout << GridLogIntegrator << "[" << level << "] P "
<< " dt " << ep << " : t_P " << t_P[level] << std::endl;
// Fundamental updates, include smearing
MomentaField Msum(P.Mom.Grid());
Msum = Zero();
for (int a = 0; a < as[level].actions.size(); ++a) {
// Compute the force terms for the lagrangian part
// We need to compute the derivative of the actions
// only once
Field force(U.Grid());
conformable(U.Grid(), P.Mom.Grid());
Field& Us = Smearer.get_U(as[level].actions.at(a)->is_smeared);
as[level].actions.at(a)->deriv(Us, force); // deriv should NOT include Ta
std::cout << GridLogIntegrator << "Smearing (on/off): " << as[level].actions.at(a)->is_smeared << std::endl;
if (as[level].actions.at(a)->is_smeared) Smearer.smeared_force(force);
force = FieldImplementation::projectForce(force); // Ta for gauge fields
Real force_abs = std::sqrt(norm2(force) / U.Grid()->gSites());
std::cout << GridLogIntegrator << "|Force| site average: " << force_abs
<< std::endl;
Msum += force;
}
MomentaField NewMom = P.Mom;
MomentaField OldMom = P.Mom;
double threshold = 1e-8;
P.M.ImportGauge(U);
MomentaField MomDer(P.Mom.Grid());
MomentaField MomDer1(P.Mom.Grid());
MomentaField AuxDer(P.Mom.Grid());
MomDer1 = Zero();
MomentaField diff(P.Mom.Grid());
double factor = 2.0;
if (intermediate){
P.DerivativeU(P.Mom, MomDer1);
factor = 1.0;
}
// Auxiliary fields
P.update_auxiliary_momenta(ep*0.5);
P.AuxiliaryFieldsDerivative(AuxDer);
Msum += AuxDer;
// Here run recursively
int counter = 1;
RealD RelativeError;
do {
std::cout << GridLogIntegrator << "UpdateP implicit step "<< counter << std::endl;
// Compute the derivative of the kinetic term
// with respect to the gauge field
P.DerivativeU(NewMom, MomDer);
Real force_abs = std::sqrt(norm2(MomDer) / U.Grid()->gSites());
std::cout << GridLogIntegrator << "|Force| laplacian site average: " << force_abs
<< std::endl;
NewMom = P.Mom - ep* 0.5 * (2.0*Msum + factor*MomDer + MomDer1);// simplify
diff = NewMom - OldMom;
counter++;
RelativeError = std::sqrt(norm2(diff))/std::sqrt(norm2(NewMom));
std::cout << GridLogIntegrator << "UpdateP RelativeError: " << RelativeError << std::endl;
OldMom = NewMom;
} while (RelativeError > threshold);
P.Mom = NewMom;
// update the auxiliary fields momenta
P.update_auxiliary_momenta(ep*0.5);
}
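The do-while loop above solves the implicit momentum update by fixed-point iteration: NewMom appears on both sides of the update through P.DerivativeU, so the expression is re-evaluated until the relative change falls below the threshold. Note that, unlike implicit_update_U below, this loop has no iteration cap, so it relies on the update map being contracting. A minimal scalar sketch of the same scheme, self-contained and with made-up stand-ins for the force terms (not Grid API):

#include <cmath>
#include <cstdio>

int main() {
  const double ep = 0.1, threshold = 1e-8;
  const double Msum = 0.7;            // action-derivative part, fixed during the iteration
  auto kinetic_force = [](double p) { // stands in for P.DerivativeU(NewMom, MomDer)
    return 0.3 * p;                   // contracting map, so the iteration converges
  };
  const double Mom = 1.0;             // momentum entering the step
  double NewMom = Mom, OldMom = Mom, RelativeError;
  do {
    // same shape as NewMom = P.Mom - ep*0.5*(2.0*Msum + factor*MomDer + MomDer1)
    NewMom = Mom - ep * 0.5 * (2.0 * Msum + 2.0 * kinetic_force(NewMom));
    RelativeError = std::fabs(NewMom - OldMom) / std::fabs(NewMom);
    OldMom = NewMom;
  } while (RelativeError > threshold);
  std::printf("converged momentum: %.12f\n", NewMom);
  return 0;
}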
void update_U(Field& U, double ep) void update_U(Field& U, double ep)
{ {
update_U(P.Mom, U, ep); update_U(P, U, ep);
t_U += ep; t_U += ep;
int fl = levels - 1; int fl = levels - 1;
@@ -248,64 +158,15 @@ protected:
Representations.update(U); // void functions if fundamental representation Representations.update(U); // void functions if fundamental representation
} }
void implicit_update_U(Field&U, double ep){
t_U += ep;
int fl = levels - 1;
std::cout << GridLogIntegrator << " " << "[" << fl << "] U " << " dt " << ep << " : t_U " << t_U << std::endl;
MomentaField Mom1(P.Mom.Grid());
MomentaField Mom2(P.Mom.Grid());
RealD RelativeError;
Field diff(U.Grid());
Real threshold = 1e-8;
int counter = 1;
int MaxCounter = 100;
Field OldU = U;
Field NewU = U;
P.M.ImportGauge(U);
P.DerivativeP(Mom1); // first term in the derivative
P.update_auxiliary_fields(ep*0.5);
MomentaField sum=Mom1;
do {
std::cout << GridLogIntegrator << "UpdateU implicit step "<< counter << std::endl;
P.DerivativeP(Mom2); // second term in the derivative, on the updated U
sum = (Mom1 + Mom2);
for (int mu = 0; mu < Nd; mu++) {
auto Umu = PeekIndex<LorentzIndex>(U, mu);
auto Pmu = PeekIndex<LorentzIndex>(sum, mu);
Umu = expMat(Pmu, ep * 0.5, 12) * Umu;
PokeIndex<LorentzIndex>(NewU, ProjectOnGroup(Umu), mu);
}
diff = NewU - OldU;
RelativeError = std::sqrt(norm2(diff))/std::sqrt(norm2(NewU));
std::cout << GridLogIntegrator << "UpdateU RelativeError: " << RelativeError << std::endl;
P.M.ImportGauge(NewU);
OldU = NewU; // some redundancy to be eliminated
counter++;
} while (RelativeError > threshold && counter < MaxCounter);
U = NewU;
P.update_auxiliary_fields(ep*0.5);
}
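The inner loop above exponentiates the summed momenta onto the group, Umu = expMat(Pmu, ep * 0.5, 12) * Umu, then reprojects with ProjectOnGroup and iterates until the link field stops changing. A toy version of just the exponential link update, using an explicit 2x2 anti-Hermitian generator and a truncated series mirroring the order-12 truncation above (plain std::complex matrices as stand-ins, not Grid's SU(N) types):

#include <complex>
#include <cstdio>
using cplx = std::complex<double>;

struct Mat2 { cplx a[2][2]; };

Mat2 mul(const Mat2 &x, const Mat2 &y) {
  Mat2 r{};
  for (int i = 0; i < 2; i++)
    for (int j = 0; j < 2; j++)
      for (int k = 0; k < 2; k++) r.a[i][j] += x.a[i][k] * y.a[k][j];
  return r;
}

// sum over n <= Nterms of (alpha*P)^n / n!  -- same role as expMat(Pmu, alpha, 12)
Mat2 expSeries(Mat2 p, double alpha, int Nterms) {
  for (auto &row : p.a) for (auto &e : row) e *= alpha;
  Mat2 sum{{{1, 0}, {0, 1}}};
  Mat2 term{{{1, 0}, {0, 1}}};
  for (int n = 1; n <= Nterms; n++) {
    term = mul(term, p);
    for (auto &row : term.a) for (auto &e : row) e /= n;
    for (int i = 0; i < 2; i++)
      for (int j = 0; j < 2; j++) sum.a[i][j] += term.a[i][j];
  }
  return sum;
}

int main() {
  // traceless anti-Hermitian generator: its exponential is unitary
  Mat2 P{{{cplx(0, 0.3), cplx(0.1, 0.2)}, {cplx(-0.1, 0.2), cplx(0, -0.3)}}};
  Mat2 U{{{1, 0}, {0, 1}}};
  U = mul(expSeries(P, 0.5 * 0.1, 12), U); // one half-step with ep = 0.1
  // unitarity check: (Udag U)_00 should be ~1
  cplx d00 = std::conj(U.a[0][0]) * U.a[0][0] + std::conj(U.a[1][0]) * U.a[1][0];
  std::printf("(Udag U)_00 = %.12f\n", d00.real());
  return 0;
}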
virtual void step(Field& U, int level, int first, int last) = 0; virtual void step(Field& U, int level, int first, int last) = 0;
public: public:
Integrator(GridBase* grid, IntegratorParameters Par, Integrator(GridBase* grid, IntegratorParameters Par,
ActionSet<Field, RepresentationPolicy>& Aset, ActionSet<Field, RepresentationPolicy>& Aset,
SmearingPolicy& Sm, Metric<MomentaField>& M) SmearingPolicy& Sm)
: Params(Par), : Params(Par),
as(Aset), as(Aset),
P(grid, M), P(grid),
levels(Aset.size()), levels(Aset.size()),
Smearer(Sm), Smearer(Sm),
Representations(grid) Representations(grid)
@@ -342,9 +203,7 @@ public:
void reverse_momenta() void reverse_momenta()
{ {
// P *= -1.0; P *= -1.0;
P.Mom *= -1.0;
P.AuxMom *= -1.0;
} }
// to be used by the actionlevel class to iterate // to be used by the actionlevel class to iterate
@@ -364,13 +223,10 @@ public:
// Initialization of momenta and actions // Initialization of momenta and actions
void refresh(Field& U, GridParallelRNG& pRNG) void refresh(Field& U, GridParallelRNG& pRNG)
{ {
assert(P.Mom.Grid() == U.Grid()); assert(P.Grid() == U.Grid());
std::cout << GridLogIntegrator << "Integrator refresh\n"; std::cout << GridLogIntegrator << "Integrator refresh\n";
// FieldImplementation::generate_momenta(P.Mom, pRNG); FieldImplementation::generate_momenta(P, pRNG);
P.M.ImportGauge(U);
P.MomentaDistribution(pRNG);
// Update the smeared fields, can be implemented as observer // Update the smeared fields, can be implemented as observer
// necessary to keep the fields updated even after a reject // necessary to keep the fields updated even after a reject
@@ -416,11 +272,9 @@ public:
std::cout << GridLogIntegrator << "Integrator action\n"; std::cout << GridLogIntegrator << "Integrator action\n";
// RealD H = - FieldImplementation::FieldSquareNorm(P)/HMC_MOMENTUM_DENOMINATOR; // - trace (P*P)/denom RealD H = - FieldImplementation::FieldSquareNorm(P)/HMC_MOMENTUM_DENOMINATOR; // - trace (P*P)/denom
P.M.ImportGauge(U);
RealD H = - P.MomentaAction();
RealD Hterm; RealD Hterm;
std::cout << GridLogMessage << "Momentum action H_p = " << H << "\n";
// Actions // Actions
for (int level = 0; level < as.size(); ++level) { for (int level = 0; level < as.size(); ++level) {
@@ -447,9 +301,9 @@ public:
t_P[level] = 0; t_P[level] = 0;
} }
for (int step = 0; step < Params.MDsteps; ++step) { // MD step for (int stp = 0; stp < Params.MDsteps; ++stp) { // MD step
int first_step = (step == 0); int first_step = (stp == 0);
int last_step = (step == Params.MDsteps - 1); int last_step = (stp == Params.MDsteps - 1);
this->step(U, 0, first_step, last_step); this->step(U, 0, first_step, last_step);
} }


@@ -101,8 +101,8 @@ public:
std::string integrator_name(){return "LeapFrog";} std::string integrator_name(){return "LeapFrog";}
LeapFrog(GridBase* grid, IntegratorParameters Par, ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm, Metric<Field>& M) LeapFrog(GridBase* grid, IntegratorParameters Par, ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm)
: Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(grid, Par, Aset, Sm,M){}; : Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(grid, Par, Aset, Sm){};
void step(Field& U, int level, int _first, int _last) { void step(Field& U, int level, int _first, int _last) {
int fl = this->as.size() - 1; int fl = this->as.size() - 1;
@@ -144,8 +144,8 @@ private:
public: public:
INHERIT_FIELD_TYPES(FieldImplementation); INHERIT_FIELD_TYPES(FieldImplementation);
MinimumNorm2(GridBase* grid, IntegratorParameters Par, ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm, Metric<Field>& M) MinimumNorm2(GridBase* grid, IntegratorParameters Par, ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm)
: Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(grid, Par, Aset, Sm,M){}; : Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(grid, Par, Aset, Sm){};
std::string integrator_name(){return "MinimumNorm2";} std::string integrator_name(){return "MinimumNorm2";}
@@ -207,9 +207,9 @@ public:
// Looks like dH scales as dt^4. tested wilson/wilson 2 level. // Looks like dH scales as dt^4. tested wilson/wilson 2 level.
ForceGradient(GridBase* grid, IntegratorParameters Par, ForceGradient(GridBase* grid, IntegratorParameters Par,
ActionSet<Field, RepresentationPolicy>& Aset, ActionSet<Field, RepresentationPolicy>& Aset,
SmearingPolicy& Sm, Metric<Field>& M) SmearingPolicy& Sm)
: Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>( : Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(
grid, Par, Aset, Sm,M){}; grid, Par, Aset, Sm){};
std::string integrator_name(){return "ForceGradient";} std::string integrator_name(){return "ForceGradient";}
@@ -271,139 +271,6 @@ public:
} }
}; };
////////////////////////////////
// Riemannian Manifold HMC
// Girolami et al
////////////////////////////////
// correct
template <class FieldImplementation, class SmearingPolicy,
class RepresentationPolicy =
Representations<FundamentalRepresentation> >
class ImplicitLeapFrog : public Integrator<FieldImplementation, SmearingPolicy,
RepresentationPolicy> {
public:
typedef ImplicitLeapFrog<FieldImplementation, SmearingPolicy, RepresentationPolicy>
Algorithm;
INHERIT_FIELD_TYPES(FieldImplementation);
// Riemannian manifold metric operator
// Hermitian Fisher metric operator
std::string integrator_name(){return "ImplicitLeapFrog";}
ImplicitLeapFrog(GridBase* grid, IntegratorParameters Par,
ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm, Metric<Field>& M)
: Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(
grid, Par, Aset, Sm, M){};
void step(Field& U, int level, int _first, int _last) {
int fl = this->as.size() - 1;
// level : current level
// fl : final level
// eps : current step size
// Get current level step size
RealD eps = this->Params.trajL/this->Params.MDsteps;
for (int l = 0; l <= level; ++l) eps /= this->as[l].multiplier;
int multiplier = this->as[level].multiplier;
for (int e = 0; e < multiplier; ++e) {
int first_step = _first && (e == 0);
int last_step = _last && (e == multiplier - 1);
if (first_step) { // initial half step
this->implicit_update_P(U, level, eps / 2.0);
}
if (level == fl) { // lowest level
this->implicit_update_U(U, eps);
} else { // recursive function call
this->step(U, level + 1, first_step, last_step);
}
//int mm = last_step ? 1 : 2;
if (last_step){
this->update_P(U, level, eps / 2.0);
} else {
this->implicit_update_P(U, level, eps, true);// works intermediate step
// this->update_P(U, level, eps); // does not look reversible
}
}
}
};
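The step-size recursion at the top of step() divides the base step trajL/MDsteps by every multiplier from the top level down to the current one. A quick worked example (the trajectory length, step count and multipliers here are assumed values, not defaults):

#include <cstdio>
int main() {
  const double trajL = 1.0;
  const int MDsteps = 10;
  const int multiplier[2] = {1, 5}; // two integrator levels (assumed)
  for (int level = 0; level < 2; level++) {
    double eps = trajL / MDsteps;
    for (int l = 0; l <= level; ++l) eps /= multiplier[l]; // same rule as step()
    std::printf("level %d: eps = %g\n", level, eps);       // 0.1, then 0.02
  }
  return 0;
}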
// This is not completely tested
template <class FieldImplementation, class SmearingPolicy,
class RepresentationPolicy =
Representations<FundamentalRepresentation> >
class ImplicitMinimumNorm2 : public Integrator<FieldImplementation, SmearingPolicy,
RepresentationPolicy> {
private:
const RealD lambda = 0.1931833275037836;
public:
INHERIT_FIELD_TYPES(FieldImplementation);
ImplicitMinimumNorm2(GridBase* grid, IntegratorParameters Par,
ActionSet<Field, RepresentationPolicy>& Aset, SmearingPolicy& Sm, Metric<Field>& M)
: Integrator<FieldImplementation, SmearingPolicy, RepresentationPolicy>(
grid, Par, Aset, Sm, M){};
std::string integrator_name(){return "ImplicitMinimumNorm2";}
void step(Field& U, int level, int _first, int _last) {
// level : current level
// fl : final level
// eps : current step size
int fl = this->as.size() - 1;
RealD eps = this->Params.trajL/this->Params.MDsteps * 2.0;
for (int l = 0; l <= level; ++l) eps /= 2.0 * this->as[l].multiplier;
// Nesting: 2xupdate_U of size eps/2
// Next level is eps/2/multiplier
int multiplier = this->as[level].multiplier;
for (int e = 0; e < multiplier; ++e) { // steps per step
int first_step = _first && (e == 0);
int last_step = _last && (e == multiplier - 1);
if (first_step) { // initial half step
this->implicit_update_P(U, level, lambda * eps);
}
if (level == fl) { // lowest level
this->implicit_update_U(U, 0.5 * eps);
} else { // recursive function call
this->step(U, level + 1, first_step, 0);
}
this->implicit_update_P(U, level, (1.0 - 2.0 * lambda) * eps, true);
if (level == fl) { // lowest level
this->implicit_update_U(U, 0.5 * eps);
} else { // recursive function call
this->step(U, level + 1, 0, last_step);
}
//int mm = (last_step) ? 1 : 2;
//this->update_P(U, level, lambda * eps * mm);
if (last_step) {
this->update_P(U, level, eps * lambda);
} else {
this->implicit_update_P(U, level, lambda * eps*2.0, true);
}
}
}
};
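Per outer step this scheme applies the updates in the pattern P(lambda*eps) U(eps/2) P((1-2*lambda)*eps) U(eps/2) P(lambda*eps), the standard second-order minimum-norm (Omelyan) splitting, with lambda chosen to minimise the leading error term. A trivial arithmetic check that the substeps add up to one full step of each kind:

#include <cstdio>
int main() {
  const double lambda = 0.1931833275037836;                // value copied from the class above
  double total_P = lambda + (1.0 - 2.0 * lambda) + lambda; // momentum substeps
  double total_U = 0.5 + 0.5;                              // link substeps
  std::printf("P substeps sum: %.15f, U substeps sum: %.1f\n", total_P, total_U);
  return 0;
}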
NAMESPACE_END(Grid); NAMESPACE_END(Grid);
#endif // INTEGRATOR_INCLUDED #endif // INTEGRATOR_INCLUDED


@@ -128,7 +128,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProjTm (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProjTm (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
hspin(0)=fspin(0)-fspin(2); hspin(0)=fspin(0)-fspin(2);
hspin(1)=fspin(1)-fspin(3); hspin(1)=fspin(1)-fspin(3);
} }
@@ -138,40 +137,50 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
* 0 0 -1 0 * 0 0 -1 0
* 0 0 0 -1 * 0 0 0 -1
*/ */
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5p (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5p (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
hspin(0)=fspin(0); hspin(0)=fspin(0);
hspin(1)=fspin(1); hspin(1)=fspin(1);
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5m (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5m (iVector<vtype,Nhs> &hspin,const iVector<vtype,Ns> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
hspin(0)=fspin(2); hspin(0)=fspin(2);
hspin(1)=fspin(3); hspin(1)=fspin(3);
} }
// template<class vtype> accelerator_inline void fspProj5p (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin)
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5p (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5p (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
rfspin(0)=fspin(0); rfspin(0)=fspin(0);
rfspin(1)=fspin(1); rfspin(1)=fspin(1);
rfspin(2)=Zero(); rfspin(2)=Zero();
rfspin(3)=Zero(); rfspin(3)=Zero();
} }
// template<class vtype> accelerator_inline void fspProj5m (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin)
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5m (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spProj5m (iVector<vtype,Ns> &rfspin,const iVector<vtype,Ns> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
rfspin(0)=Zero(); rfspin(0)=Zero();
rfspin(1)=Zero(); rfspin(1)=Zero();
rfspin(2)=fspin(2); rfspin(2)=fspin(2);
rfspin(3)=fspin(3); rfspin(3)=fspin(3);
} }
template<class vtype,int N,IfCoarsened<iVector<vtype,N> > = 0> accelerator_inline void spProj5p (iVector<vtype,N> &rfspin,const iVector<vtype,N> &fspin)
{
const int hN = N>>1;
for(int s=0;s<hN;s++){
rfspin(s)=fspin(s);
rfspin(s+hN)=Zero();
}
}
template<class vtype,int N,IfCoarsened<iVector<vtype,N> > = 0> accelerator_inline void spProj5m (iVector<vtype,N> &rfspin,const iVector<vtype,N> &fspin)
{
const int hN = N>>1;
for(int s=0;s<hN;s++){
rfspin(s)=Zero();
rfspin(s+hN)=fspin(s+hN);
}
}
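The coarsened overloads above treat the first N/2 components of the vector as the "plus" half and the last N/2 as the "minus" half, so spProj5p and spProj5m together reproduce the input. A stand-alone check with plain arrays (illustrative stand-ins, not the Grid tensor types):

#include <array>
#include <cstdio>

template <int N>
std::array<double, N> proj5p(const std::array<double, N> &f) {
  std::array<double, N> r{}; // lower half zeroed, as in spProj5p
  for (int s = 0; s < N / 2; s++) r[s] = f[s];
  return r;
}

template <int N>
std::array<double, N> proj5m(const std::array<double, N> &f) {
  std::array<double, N> r{}; // upper half zeroed, as in spProj5m
  for (int s = 0; s < N / 2; s++) r[s + N / 2] = f[s + N / 2];
  return r;
}

int main() {
  std::array<double, 4> f{1, 2, 3, 4};
  auto p = proj5p<4>(f);
  auto m = proj5m<4>(f);
  for (int s = 0; s < 4; s++)
    std::printf("%g + %g = %g\n", p[s], m[s], p[s] + m[s]); // recovers f
  return 0;
}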
//////////////////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Reconstruction routines to move back again to four spin // Reconstruction routines to move back again to four spin
//////////////////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////////////////////
@@ -183,7 +192,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
*/ */
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconXp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconXp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=timesMinusI(hspin(1)); fspin(2)=timesMinusI(hspin(1));
@@ -191,7 +199,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconXm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconXm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=timesI(hspin(1)); fspin(2)=timesI(hspin(1));
@@ -199,7 +206,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconXp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconXp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)-=timesI(hspin(1)); fspin(2)-=timesI(hspin(1));
@@ -207,7 +213,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconXm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconXm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)+=timesI(hspin(1)); fspin(2)+=timesI(hspin(1));
@@ -221,7 +226,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconYp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconYp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)= hspin(1); fspin(2)= hspin(1);
@@ -229,7 +233,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconYm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconYm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=-hspin(1); fspin(2)=-hspin(1);
@@ -237,7 +240,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconYp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconYp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)+=hspin(1); fspin(2)+=hspin(1);
@@ -245,7 +247,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconYm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconYm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)-=hspin(1); fspin(2)-=hspin(1);
@@ -260,7 +261,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
*/ */
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconZp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconZp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=timesMinusI(hspin(0)); fspin(2)=timesMinusI(hspin(0));
@@ -268,7 +268,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconZm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconZm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)= timesI(hspin(0)); fspin(2)= timesI(hspin(0));
@@ -276,7 +275,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconZp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconZp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)-=timesI(hspin(0)); fspin(2)-=timesI(hspin(0));
@@ -284,7 +282,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconZm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconZm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)+=timesI(hspin(0)); fspin(2)+=timesI(hspin(0));
@@ -298,7 +295,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
*/ */
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconTp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconTp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=hspin(0); fspin(2)=hspin(0);
@@ -306,7 +302,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconTm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spReconTm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0); fspin(0)=hspin(0);
fspin(1)=hspin(1); fspin(1)=hspin(1);
fspin(2)=-hspin(0); fspin(2)=-hspin(0);
@@ -314,7 +309,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconTp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconTp (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)+=hspin(0); fspin(2)+=hspin(0);
@@ -322,7 +316,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconTm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumReconTm (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0); fspin(0)+=hspin(0);
fspin(1)+=hspin(1); fspin(1)+=hspin(1);
fspin(2)-=hspin(0); fspin(2)-=hspin(0);
@@ -336,7 +329,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
*/ */
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spRecon5p (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spRecon5p (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=hspin(0)+hspin(0); // add is lower latency than mul fspin(0)=hspin(0)+hspin(0); // add is lower latency than mul
fspin(1)=hspin(1)+hspin(1); // probably no measurable difference though fspin(1)=hspin(1)+hspin(1); // probably no measurable difference though
fspin(2)=Zero(); fspin(2)=Zero();
@@ -344,7 +336,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spRecon5m (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void spRecon5m (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)=Zero(); fspin(0)=Zero();
fspin(1)=Zero(); fspin(1)=Zero();
fspin(2)=hspin(0)+hspin(0); fspin(2)=hspin(0)+hspin(0);
@@ -352,7 +343,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void s
} }
template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumRecon5p (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin) template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void accumRecon5p (iVector<vtype,Ns> &fspin,const iVector<vtype,Nhs> &hspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,Ns>,SpinorIndex>::value,iVector<vtype,Ns> >::type *SFINAE;
fspin(0)+=hspin(0)+hspin(0); fspin(0)+=hspin(0)+hspin(0);
fspin(1)+=hspin(1)+hspin(1); fspin(1)+=hspin(1)+hspin(1);
} }
@@ -372,7 +362,6 @@ template<class vtype,IfSpinor<iVector<vtype,Ns> > = 0> accelerator_inline void a
////////// //////////
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjXp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjXp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjXp(hspin._internal[i],fspin._internal[i]); spProjXp(hspin._internal[i],fspin._internal[i]);
} }
@@ -426,26 +415,21 @@ template<class rtype,class vtype,int N> accelerator_inline void accumReconXp (iM
}} }}
} }
//////// ////////
// Xm // Xm
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjXm(hspin._internal,fspin._internal); spProjXm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjXm(hspin._internal[i],fspin._internal[i]); spProjXm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjXm(hspin._internal[i][j],fspin._internal[i][j]); spProjXm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -455,19 +439,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjXm (iMatri
template<class rtype,class vtype> accelerator_inline void spReconXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spReconXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spReconXm(hspin._internal,fspin._internal); spReconXm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spReconXm(hspin._internal[i],fspin._internal[i]); spReconXm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spReconXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spReconXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spReconXm(hspin._internal[i][j],fspin._internal[i][j]); spReconXm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -476,45 +457,37 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconXm (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void accumReconXm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
accumReconXm(hspin._internal,fspin._internal); accumReconXm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconXm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
accumReconXm(hspin._internal[i],fspin._internal[i]); accumReconXm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void accumReconXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void accumReconXm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
accumReconXm(hspin._internal[i][j],fspin._internal[i][j]); accumReconXm(hspin._internal[i][j],fspin._internal[i][j]);
}} }}
} }
//////// ////////
// Yp // Yp
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjYp(hspin._internal,fspin._internal); spProjYp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjYp(hspin._internal[i],fspin._internal[i]); spProjYp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjYp(hspin._internal[i][j],fspin._internal[i][j]); spProjYp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -524,19 +497,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjYp (iMatri
template<class rtype,class vtype> accelerator_inline void spReconYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spReconYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spReconYp(hspin._internal,fspin._internal); spReconYp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spReconYp(hspin._internal[i],fspin._internal[i]); spReconYp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spReconYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spReconYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spReconYp(hspin._internal[i][j],fspin._internal[i][j]); spReconYp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -545,66 +515,55 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconYp (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void accumReconYp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
accumReconYp(hspin._internal,fspin._internal); accumReconYp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconYp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
accumReconYp(hspin._internal[i],fspin._internal[i]); accumReconYp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void accumReconYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void accumReconYp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
accumReconYp(hspin._internal[i][j],fspin._internal[i][j]); accumReconYp(hspin._internal[i][j],fspin._internal[i][j]);
}} }}
} }
//////// ////////
// Ym // Ym
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjYm(hspin._internal,fspin._internal); spProjYm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjYm(hspin._internal[i],fspin._internal[i]); spProjYm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjYm(hspin._internal[i][j],fspin._internal[i][j]); spProjYm(hspin._internal[i][j],fspin._internal[i][j]);
}} }}
} }
template<class rtype,class vtype> accelerator_inline void spReconYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spReconYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spReconYm(hspin._internal,fspin._internal); spReconYm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,const iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spReconYm(hspin._internal[i],fspin._internal[i]); spReconYm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spReconYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spReconYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spReconYm(hspin._internal[i][j],fspin._internal[i][j]); spReconYm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -613,19 +572,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconYm (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void accumReconYm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
accumReconYm(hspin._internal,fspin._internal); accumReconYm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconYm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
accumReconYm(hspin._internal[i],fspin._internal[i]); accumReconYm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void accumReconYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void accumReconYm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
accumReconYm(hspin._internal[i][j],fspin._internal[i][j]); accumReconYm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -638,19 +594,16 @@ template<class rtype,class vtype,int N> accelerator_inline void accumReconYm (iM
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjZp(hspin._internal,fspin._internal); spProjZp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjZp(hspin._internal[i],fspin._internal[i]); spProjZp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjZp(hspin._internal[i][j],fspin._internal[i][j]); spProjZp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -660,19 +613,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjZp (iMatri
template<class rtype,class vtype> accelerator_inline void spReconZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spReconZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spReconZp(hspin._internal,fspin._internal); spReconZp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spReconZp(hspin._internal[i],fspin._internal[i]); spReconZp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spReconZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spReconZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spReconZp(hspin._internal[i][j],fspin._internal[i][j]); spReconZp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -681,19 +631,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconZp (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void accumReconZp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
accumReconZp(hspin._internal,fspin._internal); accumReconZp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconZp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
accumReconZp(hspin._internal[i],fspin._internal[i]); accumReconZp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void accumReconZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void accumReconZp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
accumReconZp(hspin._internal[i][j],fspin._internal[i][j]); accumReconZp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -706,19 +653,16 @@ template<class rtype,class vtype,int N> accelerator_inline void accumReconZp (iM
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjZm(hspin._internal,fspin._internal); spProjZm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjZm(hspin._internal[i],fspin._internal[i]); spProjZm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjZm(hspin._internal[i][j],fspin._internal[i][j]); spProjZm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -728,19 +672,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjZm (iMatri
template<class rtype,class vtype> accelerator_inline void spReconZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spReconZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spReconZm(hspin._internal,fspin._internal); spReconZm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spReconZm(hspin._internal[i],fspin._internal[i]); spReconZm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spReconZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spReconZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spReconZm(hspin._internal[i][j],fspin._internal[i][j]); spReconZm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -749,19 +690,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconZm (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void accumReconZm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
accumReconZm(hspin._internal,fspin._internal); accumReconZm(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconZm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
accumReconZm(hspin._internal[i],fspin._internal[i]); accumReconZm(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void accumReconZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void accumReconZm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
accumReconZm(hspin._internal[i][j],fspin._internal[i][j]); accumReconZm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -774,19 +712,16 @@ template<class rtype,class vtype,int N> accelerator_inline void accumReconZm (iM
//////// ////////
template<class rtype,class vtype> accelerator_inline void spProjTp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin) template<class rtype,class vtype> accelerator_inline void spProjTp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
spProjTp(hspin._internal,fspin._internal); spProjTp(hspin._internal,fspin._internal);
} }
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjTp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin) template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjTp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
for(int i=0;i<N;i++) { for(int i=0;i<N;i++) {
spProjTp(hspin._internal[i],fspin._internal[i]); spProjTp(hspin._internal[i],fspin._internal[i]);
} }
} }
template<class rtype,class vtype,int N> accelerator_inline void spProjTp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin) template<class rtype,class vtype,int N> accelerator_inline void spProjTp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{ {
//typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
spProjTp(hspin._internal[i][j],fspin._internal[i][j]); spProjTp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -796,19 +731,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjTp (iMatri
template<class rtype,class vtype> accelerator_inline void spReconTp (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spReconTp(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconTp (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spReconTp(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void spReconTp (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spReconTp(hspin._internal[i][j],fspin._internal[i][j]);
@@ -817,44 +749,37 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconTp (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconTp (iScalar<rtype> &hspin, iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  accumReconTp(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconTp (iVector<rtype,N> &hspin, const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    accumReconTp(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void accumReconTp (iMatrix<rtype,N> &hspin, const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      accumReconTp(hspin._internal[i][j],fspin._internal[i][j]);
    }}
}
////////
// Tm
////////
template<class rtype,class vtype> accelerator_inline void spProjTm (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spProjTm(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProjTm (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spProjTm(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void spProjTm (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spProjTm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -864,19 +789,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProjTm (iMatri
template<class rtype,class vtype> accelerator_inline void spReconTm (iScalar<rtype> &hspin, const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spReconTm(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spReconTm (iVector<rtype,N> &hspin, const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spReconTm(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void spReconTm (iMatrix<rtype,N> &hspin, const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spReconTm(hspin._internal[i][j],fspin._internal[i][j]);
@@ -885,44 +807,37 @@ template<class rtype,class vtype,int N> accelerator_inline void spReconTm (iMatr
template<class rtype,class vtype> accelerator_inline void accumReconTm (iScalar<rtype> &hspin, const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  accumReconTm(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumReconTm (iVector<rtype,N> &hspin, const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    accumReconTm(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void accumReconTm (iMatrix<rtype,N> &hspin, const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      accumReconTm(hspin._internal[i][j],fspin._internal[i][j]);
    }}
}
////////
// 5p
////////
-template<class rtype,class vtype> accelerator_inline void spProj5p (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
+template<class rtype,class vtype,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5p (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spProj5p(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProj5p (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spProj5p(hspin._internal[i],fspin._internal[i]);
  }
}
-template<class rtype,class vtype,int N> accelerator_inline void spProj5p (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
+template<class rtype,class vtype,int N,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5p (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spProj5p(hspin._internal[i][j],fspin._internal[i][j]);
@@ -931,19 +846,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProj5p (iMatri
template<class rtype,class vtype> accelerator_inline void spRecon5p (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spRecon5p(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spRecon5p (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spRecon5p(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void spRecon5p (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spRecon5p(hspin._internal[i][j],fspin._internal[i][j]);
@@ -952,19 +864,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spRecon5p (iMatr
template<class rtype,class vtype> accelerator_inline void accumRecon5p (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  accumRecon5p(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumRecon5p (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    accumRecon5p(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void accumRecon5p (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      accumRecon5p(hspin._internal[i][j],fspin._internal[i][j]);
@@ -972,24 +881,18 @@ template<class rtype,class vtype,int N> accelerator_inline void accumRecon5p (iM
}
// four spinor projectors for chiral proj
-// template<class vtype> accelerator_inline void fspProj5p (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
-template<class vtype> accelerator_inline void spProj5p (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
+template<class vtype,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5p (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spProj5p(hspin._internal,fspin._internal);
}
-// template<class vtype,int N> accelerator_inline void fspProj5p (iVector<vtype,N> &hspin,iVector<vtype,N> &fspin)
-template<class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProj5p (iVector<vtype,N> &hspin,const iVector<vtype,N> &fspin)
+template<class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5p (iVector<vtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spProj5p(hspin._internal[i],fspin._internal[i]);
  }
}
-// template<class vtype,int N> accelerator_inline void fspProj5p (iMatrix<vtype,N> &hspin,iMatrix<vtype,N> &fspin)
-template<class vtype,int N> accelerator_inline void spProj5p (iMatrix<vtype,N> &hspin,const iMatrix<vtype,N> &fspin)
+template<class vtype,int N,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5p (iMatrix<vtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spProj5p(hspin._internal[i][j],fspin._internal[i][j]);
@@ -1001,17 +904,17 @@ template<class vtype,int N> accelerator_inline void spProj5p (iMatrix<vtype,N> &
// 5m
////////
-template<class rtype,class vtype> accelerator_inline void spProj5m (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
+template<class rtype,class vtype,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
  spProj5m(hspin._internal,fspin._internal);
}
-template<class rtype,class vtype,int N,IfNotSpinor<iVector<rtype,N> > = 0> accelerator_inline void spProj5m (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
+template<class rtype,class vtype,int N,IfNotSpinor<iVector<rtype,N> > = 0,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
  for(int i=0;i<N;i++) {
    spProj5m(hspin._internal[i],fspin._internal[i]);
  }
}
-template<class rtype,class vtype,int N> accelerator_inline void spProj5m (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
+template<class rtype,class vtype,int N,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
@@ -1021,19 +924,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spProj5m (iMatri
template<class rtype,class vtype> accelerator_inline void spRecon5m (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spRecon5m(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spRecon5m (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spRecon5m(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void spRecon5m (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spRecon5m(hspin._internal[i][j],fspin._internal[i][j]);
@@ -1042,19 +942,16 @@ template<class rtype,class vtype,int N> accelerator_inline void spRecon5m (iMatr
template<class rtype,class vtype> accelerator_inline void accumRecon5m (iScalar<rtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  accumRecon5m(hspin._internal,fspin._internal);
}
template<class rtype,class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void accumRecon5m (iVector<rtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    accumRecon5m(hspin._internal[i],fspin._internal[i]);
  }
}
template<class rtype,class vtype,int N> accelerator_inline void accumRecon5m (iMatrix<rtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      accumRecon5m(hspin._internal[i][j],fspin._internal[i][j]);
@@ -1063,24 +960,18 @@ template<class rtype,class vtype,int N> accelerator_inline void accumRecon5m (iM
// four spinor projectors for chiral proj
-// template<class vtype> accelerator_inline void fspProj5m (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
-template<class vtype> accelerator_inline void spProj5m (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
+template<class vtype,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iScalar<vtype> &hspin,const iScalar<vtype> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iScalar<vtype>,SpinorIndex>::notvalue,iScalar<vtype> >::type *temp;
  spProj5m(hspin._internal,fspin._internal);
}
-// template<class vtype,int N> accelerator_inline void fspProj5m (iVector<vtype,N> &hspin,iVector<vtype,N> &fspin)
-template<class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0> accelerator_inline void spProj5m (iVector<vtype,N> &hspin,const iVector<vtype,N> &fspin)
+template<class vtype,int N,IfNotSpinor<iVector<vtype,N> > = 0,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iVector<vtype,N> &hspin,const iVector<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iVector<vtype,N>,SpinorIndex>::notvalue,iVector<vtype,N> >::type *temp;
  for(int i=0;i<N;i++) {
    spProj5m(hspin._internal[i],fspin._internal[i]);
  }
}
-// template<class vtype,int N> accelerator_inline void fspProj5m (iMatrix<vtype,N> &hspin,iMatrix<vtype,N> &fspin)
-template<class vtype,int N> accelerator_inline void spProj5m (iMatrix<vtype,N> &hspin,const iMatrix<vtype,N> &fspin)
+template<class vtype,int N,IfNotCoarsened<iScalar<vtype> > = 0> accelerator_inline void spProj5m (iMatrix<vtype,N> &hspin,const iMatrix<vtype,N> &fspin)
{
-  //typename std::enable_if<matchGridTensorIndex<iMatrix<vtype,N>,SpinorIndex>::notvalue,iMatrix<vtype,N> >::type *temp;
  for(int i=0;i<N;i++){
    for(int j=0;j<N;j++){
      spProj5m(hspin._internal[i][j],fspin._internal[i][j]);
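The IfNotCoarsened guards introduced above are SFINAE predicates in the same spirit as the existing IfNotSpinor: they remove the recursive spProj5p/spProj5m overloads from overload resolution once the recursion reaches a coarse-space tensor, so dedicated coarse-grid terminators can match instead. A minimal self-contained sketch of the pattern; the trait wiring below is illustrative only, not Grid's exact definition:

```cpp
#include <type_traits>

// Hypothetical stand-ins for the sketch only; Grid derives the real
// predicates from its tensor-index traits.
template<class T> struct iScalar { T _internal; };
struct CoarseTag {};  // pretend marker carried by a coarse-space scalar

template<class T> struct isCoarsened : std::false_type {};
template<>        struct isCoarsened<iScalar<CoarseTag> > : std::true_type {};

// Guard alias in the style of IfNotSpinor / IfNotCoarsened
template<class T>
using IfNotCoarsened = typename std::enable_if<!isCoarsened<T>::value,int>::type;

// Participates in overload resolution only for non-coarsened arguments;
// for coarse types a separate terminator overload would be selected.
template<class vtype, IfNotCoarsened<iScalar<vtype> > = 0>
void spProj5p_like(iScalar<vtype> &h, const iScalar<vtype> &f) { h._internal = f._internal; }
```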


@@ -53,23 +53,21 @@ namespace PeriodicBC {
  return Cshift(tmp,mu,-1);// moves towards positive mu
}
-template<class gauge,typename Op, typename T1> auto
-CovShiftForward(const Lattice<gauge> &Link,
-                int mu,
-                const LatticeUnaryExpression<Op,T1> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
+template<class gauge,class Expr,typename std::enable_if<is_lattice_expr<Expr>::value,void>::type * = nullptr>
+auto CovShiftForward(const Lattice<gauge> &Link,
+                     int mu,
+                     const Expr &expr) -> decltype(closure(expr))
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1)))> arg(expr);
+  auto arg = closure(expr);
  return CovShiftForward(Link,mu,arg);
}
-template<class gauge,typename Op, typename T1> auto
-CovShiftBackward(const Lattice<gauge> &Link,
-                 int mu,
-                 const LatticeUnaryExpression<Op,T1> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
+template<class gauge,class Expr,typename std::enable_if<is_lattice_expr<Expr>::value,void>::type * = nullptr>
+auto CovShiftBackward(const Lattice<gauge> &Link,
+                      int mu,
+                      const Expr &expr) -> decltype(closure(expr))
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1)))> arg(expr);
-  return CovShiftForward(Link,mu,arg);
+  auto arg = closure(expr);
+  return CovShiftBackward(Link,mu,arg);
}
}
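Both overloads now accept any lattice expression template, gated by is_lattice_expr, and materialise it once via closure() instead of spelling out the evaluated Lattice type from the expression's op/arg internals; note the Backward variant also stops mistakenly forwarding to CovShiftForward. A hedged usage sketch (the field names and setup function are assumptions):

```cpp
#include <Grid/Grid.h>
using namespace Grid;

// Sketch: an unevaluated expression such as adj(U) can now be passed straight
// through; the overload closes it internally with closure(expr).
void cov_shift_sketch(GridBase *grid) {
  LatticeColourMatrix Link(grid), U(grid), tmp(grid);
  tmp = PeriodicBC::CovShiftForward(Link, /*mu=*/0, adj(U));
}
```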
@@ -142,26 +140,23 @@ namespace ConjugateBC {
  return Cshift(tmp,mu,-1);// moves towards positive mu
}
-template<class gauge,typename Op, typename T1> auto
-CovShiftForward(const Lattice<gauge> &Link,
-                int mu,
-                const LatticeUnaryExpression<Op,T1> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
+template<class gauge,class Expr,typename std::enable_if<is_lattice_expr<Expr>::value,void>::type * = nullptr>
+auto CovShiftForward(const Lattice<gauge> &Link,
+                     int mu,
+                     const Expr &expr) -> decltype(closure(expr))
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1)))> arg(expr);
+  auto arg = closure(expr);
  return CovShiftForward(Link,mu,arg);
}
-template<class gauge,typename Op, typename T1> auto
-CovShiftBackward(const Lattice<gauge> &Link,
-                 int mu,
-                 const LatticeUnaryExpression<Op,T1> &expr)
-  -> Lattice<decltype(expr.op.func(eval(0, expr.arg1)))>
+template<class gauge,class Expr,typename std::enable_if<is_lattice_expr<Expr>::value,void>::type * = nullptr>
+auto CovShiftBackward(const Lattice<gauge> &Link,
+                      int mu,
+                      const Expr &expr) -> decltype(closure(expr))
{
-  Lattice<decltype(expr.op.func(eval(0, expr.arg1)))> arg(expr);
-  return CovShiftForward(Link,mu,arg);
+  auto arg = closure(expr);
+  return CovShiftBackward(Link,mu,arg);
}
}


@@ -188,7 +188,6 @@ void G5R5(Lattice<vobj> &z,const Lattice<vobj> &x)
  z.Checkerboard() = x.Checkerboard();
  conformable(x,z);
  int Ls = grid->_rdimensions[0];
-  Gamma G5(Gamma::Algebra::Gamma5);
  autoView( x_v, x, AcceleratorRead);
  autoView( z_v, z, AcceleratorWrite);
  uint64_t nloop = grid->oSites()/Ls;
@@ -196,7 +195,13 @@ void G5R5(Lattice<vobj> &z,const Lattice<vobj> &x)
    uint64_t ss = sss*Ls;
    for(int s=0;s<Ls;s++){
      int sp = Ls-1-s;
-      coalescedWrite(z_v[ss+sp],G5*x_v(ss+s));
+      auto tmp = x_v(ss+s);
+      decltype(tmp) tmp_p;
+      decltype(tmp) tmp_m;
+      spProj5p(tmp_p,tmp);
+      spProj5m(tmp_m,tmp);
+      // Use of spProj5m, 5p captures the coarse space too
+      coalescedWrite(z_v[ss+sp],tmp_p - tmp_m);
    }
  });
}
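The replacement leans on the chiral decomposition of $\gamma_5$: with the projectors $P_\pm = \tfrac{1}{2}(1 \pm \gamma_5)$ computed by spProj5p/spProj5m,

$$\gamma_5\,\psi \;=\; P_+\psi \;-\; P_-\psi,$$

so the explicit Gamma5 multiply and the projector difference agree on fine fermion fields, while the coarse-space overloads of the projectors let the same kernel act on coarsened vectors.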
@@ -208,10 +213,20 @@ void G5C(Lattice<vobj> &z, const Lattice<vobj> &x)
  z.Checkerboard() = x.Checkerboard();
  conformable(x, z);
-  Gamma G5(Gamma::Algebra::Gamma5);
-  z = G5 * x;
+  autoView( x_v, x, AcceleratorRead);
+  autoView( z_v, z, AcceleratorWrite);
+  uint64_t nloop = grid->oSites();
+  accelerator_for(ss,nloop,vobj::Nsimd(),{
+    auto tmp = x_v(ss);
+    decltype(tmp) tmp_p;
+    decltype(tmp) tmp_m;
+    spProj5p(tmp_p,tmp);
+    spProj5m(tmp_m,tmp);
+    coalescedWrite(z_v[ss],tmp_p - tmp_m);
+  });
}
+/*
template<class CComplex, int nbasis>
void G5C(Lattice<iVector<CComplex, nbasis>> &z, const Lattice<iVector<CComplex, nbasis>> &x)
{
@@ -234,6 +249,7 @@ void G5C(Lattice<iVector<CComplex, nbasis>> &z, const Lattice<iVector<CComplex,
  }
});
}
+*/
NAMESPACE_END(Grid);
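With the nbasis specialisation commented out, coarse fields fall through to the generic G5C kernel, whose spProj5p/spProj5m calls resolve to the coarse-space overloads that split an iVector<CComplex,nbasis> into its two chiral halves. A hedged usage sketch; CoarseGrid, vTComplex and the value of nbasis are assumptions standing in for a concrete coarse setup:

```cpp
// Sketch: the generic overload now serves the coarse space as well.
// CoarseGrid is an assumed GridBase* for the coarsened lattice.
const int nbasis = 32;                                 // assumed even
typedef Lattice<iVector<vTComplex,nbasis> > CoarseVector;
CoarseVector cx(CoarseGrid), cz(CoarseGrid);
G5C(cz, cx);   // spProj5p/5m pick out the upper/lower halves of the basis
```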


@@ -47,14 +47,9 @@ public:
typedef Lattice<vAMatrixF> LatticeAdjMatrixF;
typedef Lattice<vAMatrixD> LatticeAdjMatrixD;
-typedef Lattice<iVector<iScalar<iMatrix<vComplex, Dimension> >, Nd> >
-  LatticeAdjField;
-typedef Lattice<iVector<iScalar<iMatrix<vComplexF, Dimension> >, Nd> >
-  LatticeAdjFieldF;
-typedef Lattice<iVector<iScalar<iMatrix<vComplexD, Dimension> >, Nd> >
-  LatticeAdjFieldD;
+typedef Lattice<iVector<iScalar<iMatrix<vComplex, Dimension> >, Nd> >  LatticeAdjField;
+typedef Lattice<iVector<iScalar<iMatrix<vComplexF, Dimension> >, Nd> > LatticeAdjFieldF;
+typedef Lattice<iVector<iScalar<iMatrix<vComplexD, Dimension> >, Nd> > LatticeAdjFieldD;
template <class cplx>
@@ -128,7 +123,9 @@ public:
}
// Projects the algebra components a lattice matrix (of dimension ncol*ncol -1 )
-static void projectOnAlgebra(typename SU<ncolour>::LatticeAlgebraVector &h_out, const LatticeAdjMatrix &in, Real scale = 1.0) {
+static void projectOnAlgebra(typename SU<ncolour>::LatticeAlgebraVector &h_out, const LatticeAdjMatrix &in, Real scale = 1.0)
+{
  conformable(h_out, in);
  h_out = Zero();
  AMatrix iTa;
@@ -136,7 +133,7 @@ public:
  for (int a = 0; a < Dimension; a++) {
    generator(a, iTa);
-    auto tmp = real(trace(iTa * in)) * coefficient;
+    LatticeComplex tmp = real(trace(iTa * in)) * coefficient;
    pokeColour(h_out, tmp, a);
  }
}
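The switch from auto to LatticeComplex closes the traced expression at the point of definition: under Grid's expression templates, auto would bind the unevaluated expression (which still references the stack temporary iTa), whereas naming the lattice type forces evaluation before pokeColour consumes the result. A minimal illustration of the hazard:

```cpp
// Inside the generator loop (sketch):
auto           lazy = real(trace(iTa * in)) * coefficient; // captures the recipe,
                                                           // still referencing iTa
LatticeComplex tmp  = real(trace(iTa * in)) * coefficient; // evaluates immediately
pokeColour(h_out, tmp, a);  // safe: tmp is a concrete field
```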


@@ -485,7 +485,7 @@ public:
// Up staple    ___ ___
//             |       |
-  tmp = Cshift(closure(adj(U[nu])), nu, -1);
+  tmp = Cshift(adj(U[nu]), nu, -1);
  tmp = adj(U2[mu]) * tmp;
  tmp = Cshift(tmp, mu, -2);
@@ -519,7 +519,7 @@ public:
//
//      |     |
-  tmp = Cshift(closure(adj(U2[nu])), nu, -2);
+  tmp = Cshift(adj(U2[nu]), nu, -2);
  tmp = Gimpl::CovShiftBackward(U[mu], mu, tmp);
  tmp = U2[nu] * Cshift(tmp, nu, 2);
  Stap += Cshift(tmp, mu, 1);
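Dropping closure() here matches the CovShift changes above: Cshift now accepts and closes expression arguments itself, so the staple code no longer materialises adj(U[nu]) by hand before shifting:

```cpp
// Before: caller evaluated the expression explicitly.
tmp = Cshift(closure(adj(U[nu])), nu, -1);
// After: Cshift closes the expression internally.
tmp = Cshift(adj(U[nu]), nu, -1);
```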


@@ -0,0 +1,779 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Fujitsu_A64FX_asm_double.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#define LOAD_CHIMU(base) LOAD_CHIMU_INTERLEAVED_A64FXd(base)
#define PREFETCH_CHIMU_L1(A) PREFETCH_CHIMU_L1_INTERNAL_A64FXd(A)
#define PREFETCH_GAUGE_L1(A) PREFETCH_GAUGE_L1_INTERNAL_A64FXd(A)
#define PREFETCH_CHIMU_L2(A) PREFETCH_CHIMU_L2_INTERNAL_A64FXd(A)
#define PREFETCH_GAUGE_L2(A) PREFETCH_GAUGE_L2_INTERNAL_A64FXd(A)
#define PF_GAUGE(A)
#define PREFETCH_RESULT_L2_STORE(A) PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXd(A)
#define PREFETCH_RESULT_L1_STORE(A) PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXd(A)
#define PREFETCH1_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define PREFETCH_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define LOCK_GAUGE(A)
#define UNLOCK_GAUGE(A)
#define MASK_REGS DECLARATIONS_A64FXd
#define SAVE_RESULT(A,B) RESULT_A64FXd(A); PREFETCH_RESULT_L2_STORE(B)
#define MULT_2SPIN_1(Dir) MULT_2SPIN_1_A64FXd(Dir)
#define MULT_2SPIN_2 MULT_2SPIN_2_A64FXd
#define LOAD_CHI(base) LOAD_CHI_A64FXd(base)
#define ADD_RESULT(base,basep) LOAD_CHIMU(base); ADD_RESULT_INTERNAL_A64FXd; RESULT_A64FXd(base)
#define XP_PROJ XP_PROJ_A64FXd
#define YP_PROJ YP_PROJ_A64FXd
#define ZP_PROJ ZP_PROJ_A64FXd
#define TP_PROJ TP_PROJ_A64FXd
#define XM_PROJ XM_PROJ_A64FXd
#define YM_PROJ YM_PROJ_A64FXd
#define ZM_PROJ ZM_PROJ_A64FXd
#define TM_PROJ TM_PROJ_A64FXd
#define XP_RECON XP_RECON_A64FXd
#define XM_RECON XM_RECON_A64FXd
#define XM_RECON_ACCUM XM_RECON_ACCUM_A64FXd
#define YM_RECON_ACCUM YM_RECON_ACCUM_A64FXd
#define ZM_RECON_ACCUM ZM_RECON_ACCUM_A64FXd
#define TM_RECON_ACCUM TM_RECON_ACCUM_A64FXd
#define XP_RECON_ACCUM XP_RECON_ACCUM_A64FXd
#define YP_RECON_ACCUM YP_RECON_ACCUM_A64FXd
#define ZP_RECON_ACCUM ZP_RECON_ACCUM_A64FXd
#define TP_RECON_ACCUM TP_RECON_ACCUM_A64FXd
#define PERMUTE_DIR0 0
#define PERMUTE_DIR1 1
#define PERMUTE_DIR2 2
#define PERMUTE_DIR3 3
#define PERMUTE PERMUTE_A64FXd;
#define LOAD_TABLE(Dir) if (Dir == 0) { LOAD_TABLE0; } else if (Dir == 1) { LOAD_TABLE1; } else if (Dir == 2) { LOAD_TABLE2; }
#define MAYBEPERM(Dir,perm) if (Dir != 3) { if (perm) { PERMUTE; } }
// DECLARATIONS
#define DECLARATIONS_A64FXd \
const uint64_t lut[4][8] = { \
{4, 5, 6, 7, 0, 1, 2, 3}, \
{2, 3, 0, 1, 6, 7, 4, 5}, \
{1, 0, 3, 2, 5, 4, 7, 6}, \
{0, 1, 2, 4, 5, 6, 7, 8} };\
asm ( \
"fmov z31.d , 0 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
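The lut rows are index patterns for the SVE tbl permute over the 8 doubles of a 512-bit register: row 0 exchanges the two four-element halves, row 1 exchanges adjacent element pairs within each half, row 2 exchanges neighbouring elements, and row 3 is never loaded in this kernel (LOAD_TABLE only dispatches Dir 0-2, and MAYBEPERM skips Dir 3). A scalar model of the permute, assuming Arm's documented TBL semantics (out-of-range indices yield zero):

```cpp
#include <cstdint>
// Scalar model of "tbl zd.d, { zn.d }, zm.d" as used by PERMUTE_A64FXd.
void tbl_model(double dst[8], const double src[8], const uint64_t idx[8]) {
  for (int lane = 0; lane < 8; lane++)
    dst[lane] = (idx[lane] < 8) ? src[idx[lane]] : 0.0;
}
// e.g. lut[0] = {4,5,6,7,0,1,2,3} swaps the vector halves: the distance-4
// permute needed when a neighbour site straddles the SIMD lanes.
```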
// RESULT
#define RESULT_A64FXd(base) \
{ \
asm ( \
"str z0, [%[storeptr], -6, mul vl] \n\t" \
"str z1, [%[storeptr], -5, mul vl] \n\t" \
"str z2, [%[storeptr], -4, mul vl] \n\t" \
"str z3, [%[storeptr], -3, mul vl] \n\t" \
"str z4, [%[storeptr], -2, mul vl] \n\t" \
"str z5, [%[storeptr], -1, mul vl] \n\t" \
"str z6, [%[storeptr], 0, mul vl] \n\t" \
"str z7, [%[storeptr], 1, mul vl] \n\t" \
"str z8, [%[storeptr], 2, mul vl] \n\t" \
"str z9, [%[storeptr], 3, mul vl] \n\t" \
"str z10, [%[storeptr], 4, mul vl] \n\t" \
"str z11, [%[storeptr], 5, mul vl] \n\t" \
: \
: [storeptr] "r" (base + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
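The addressing convention is worth spelling out: the store pointer is advanced to the midpoint of the twelve-register spinor (2 x 3 x 64 B = 384 B), so the signed immediates -6 through +5, in units of the 64-byte vector length, sweep the full 12 x 64 B = 768 B site spinor held in z0-z11. A small check of that arithmetic:

```cpp
#include <cassert>
int main() {
  const long vl = 64;                       // 512-bit SVE vector length in bytes
  for (int k = 0; k < 12; k++)              // chunks held in z0..z11
    assert(2*3*64 + (k - 6)*vl == k*vl);    // midpoint base + signed offset
  return 0;
}
```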
// PREFETCH_CHIMU_L2 (prefetch to L2)
#define PREFETCH_CHIMU_L2_INTERNAL_A64FXd(base) \
{ \
asm ( \
"prfd PLDL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_CHIMU_L1 (prefetch to L1)
#define PREFETCH_CHIMU_L1_INTERNAL_A64FXd(base) \
{ \
asm ( \
"prfd PLDL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_GAUGE_L2 (prefetch to L2)
#define PREFETCH_GAUGE_L2_INTERNAL_A64FXd(A) \
{ \
const auto & ref(U[sUn](A)); uint64_t baseU = (uint64_t)&ref + 3 * 3 * 64; \
asm ( \
"prfd PLDL2STRM, p5, [%[fetchptr], -4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 12, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 16, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 20, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 24, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 28, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_GAUGE_L1 (prefetch to L1)
#define PREFETCH_GAUGE_L1_INTERNAL_A64FXd(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
asm ( \
"prfd PLDL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHI
#define LOAD_CHI_A64FXd(base) \
{ \
asm ( \
"ldr z12, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], 2, mul vl] \n\t" \
"ldr z15, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z16, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z17, [%[fetchptr], 5, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU
#define LOAD_CHIMU_INTERLEAVED_A64FXd(base) \
{ \
asm ( \
"ptrue p5.d \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (base + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU_0213
#define LOAD_CHIMU_0213_A64FXd \
{ \
const SiteSpinor & ref(in[offset]); \
asm ( \
"ptrue p5.d \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
: \
: [fetchptr] "r" (&ref[2][0]) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU_0312
#define LOAD_CHIMU_0312_A64FXd \
{ \
const SiteSpinor & ref(in[offset]); \
asm ( \
"ptrue p5.d \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (&ref[2][0]) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_TABLE0
#define LOAD_TABLE0 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (0) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE1
#define LOAD_TABLE1 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (1) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE2
#define LOAD_TABLE2 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (2) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE3
#define LOAD_TABLE3 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (3) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// PERMUTE
#define PERMUTE_A64FXd \
asm ( \
"tbl z12.d, { z12.d }, z30.d \n\t" \
"tbl z13.d, { z13.d }, z30.d \n\t" \
"tbl z14.d, { z14.d }, z30.d \n\t" \
"tbl z15.d, { z15.d }, z30.d \n\t" \
"tbl z16.d, { z16.d }, z30.d \n\t" \
"tbl z17.d, { z17.d }, z30.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_GAUGE
#define LOAD_GAUGE \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
{ \
asm ( \
"ptrue p5.d \n\t" \
"ldr z24, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z27, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z28, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z29, [%[fetchptr], 1, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// MULT_2SPIN
#define MULT_2SPIN_1_A64FXd(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
asm ( \
"ldr z24, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z27, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z28, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z29, [%[fetchptr], 1, mul vl] \n\t" \
"movprfx z18.d, p5/m, z31.d \n\t" \
"fcmla z18.d, p5/m, z24.d, z12.d, 0 \n\t" \
"movprfx z21.d, p5/m, z31.d \n\t" \
"fcmla z21.d, p5/m, z24.d, z15.d, 0 \n\t" \
"movprfx z19.d, p5/m, z31.d \n\t" \
"fcmla z19.d, p5/m, z25.d, z12.d, 0 \n\t" \
"movprfx z22.d, p5/m, z31.d \n\t" \
"fcmla z22.d, p5/m, z25.d, z15.d, 0 \n\t" \
"movprfx z20.d, p5/m, z31.d \n\t" \
"fcmla z20.d, p5/m, z26.d, z12.d, 0 \n\t" \
"movprfx z23.d, p5/m, z31.d \n\t" \
"fcmla z23.d, p5/m, z26.d, z15.d, 0 \n\t" \
"fcmla z18.d, p5/m, z24.d, z12.d, 90 \n\t" \
"fcmla z21.d, p5/m, z24.d, z15.d, 90 \n\t" \
"fcmla z19.d, p5/m, z25.d, z12.d, 90 \n\t" \
"fcmla z22.d, p5/m, z25.d, z15.d, 90 \n\t" \
"fcmla z20.d, p5/m, z26.d, z12.d, 90 \n\t" \
"fcmla z23.d, p5/m, z26.d, z15.d, 90 \n\t" \
"ldr z24, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// MULT_2SPIN_BACKEND
#define MULT_2SPIN_2_A64FXd \
{ \
asm ( \
"fcmla z18.d, p5/m, z27.d, z13.d, 0 \n\t" \
"fcmla z21.d, p5/m, z27.d, z16.d, 0 \n\t" \
"fcmla z19.d, p5/m, z28.d, z13.d, 0 \n\t" \
"fcmla z22.d, p5/m, z28.d, z16.d, 0 \n\t" \
"fcmla z20.d, p5/m, z29.d, z13.d, 0 \n\t" \
"fcmla z23.d, p5/m, z29.d, z16.d, 0 \n\t" \
"fcmla z18.d, p5/m, z27.d, z13.d, 90 \n\t" \
"fcmla z21.d, p5/m, z27.d, z16.d, 90 \n\t" \
"fcmla z19.d, p5/m, z28.d, z13.d, 90 \n\t" \
"fcmla z22.d, p5/m, z28.d, z16.d, 90 \n\t" \
"fcmla z20.d, p5/m, z29.d, z13.d, 90 \n\t" \
"fcmla z23.d, p5/m, z29.d, z16.d, 90 \n\t" \
"fcmla z18.d, p5/m, z24.d, z14.d, 0 \n\t" \
"fcmla z21.d, p5/m, z24.d, z17.d, 0 \n\t" \
"fcmla z19.d, p5/m, z25.d, z14.d, 0 \n\t" \
"fcmla z22.d, p5/m, z25.d, z17.d, 0 \n\t" \
"fcmla z20.d, p5/m, z26.d, z14.d, 0 \n\t" \
"fcmla z23.d, p5/m, z26.d, z17.d, 0 \n\t" \
"fcmla z18.d, p5/m, z24.d, z14.d, 90 \n\t" \
"fcmla z21.d, p5/m, z24.d, z17.d, 90 \n\t" \
"fcmla z19.d, p5/m, z25.d, z14.d, 90 \n\t" \
"fcmla z22.d, p5/m, z25.d, z17.d, 90 \n\t" \
"fcmla z20.d, p5/m, z26.d, z14.d, 90 \n\t" \
"fcmla z23.d, p5/m, z26.d, z17.d, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
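Each fcmla above appears in a rotation-0/rotation-90 pair: under Arm's documented FCMLA semantics, rotation 0 accumulates $\mathrm{Re}(u)\,\chi$ and rotation 90 accumulates $i\,\mathrm{Im}(u)\,\chi$, so the pair adds the full complex product of a gauge-link element $u$ with a half-spinor component $\chi$:

$$z \;\mathrel{+}=\; \mathrm{Re}(u)\,\chi \;+\; i\,\mathrm{Im}(u)\,\chi \;=\; u\,\chi,$$

which is why each loaded set of three U elements (z24-z26, then z27-z29) drives six fcmla pairs: one per SU(3) row for each of the two half-spinor components.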
// XP_PROJ
#define XP_PROJ_A64FXd \
{ \
asm ( \
"fcadd z12.d, p5/m, z12.d, z21.d, 90 \n\t" \
"fcadd z13.d, p5/m, z13.d, z22.d, 90 \n\t" \
"fcadd z14.d, p5/m, z14.d, z23.d, 90 \n\t" \
"fcadd z15.d, p5/m, z15.d, z18.d, 90 \n\t" \
"fcadd z16.d, p5/m, z16.d, z19.d, 90 \n\t" \
"fcadd z17.d, p5/m, z17.d, z20.d, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
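The projectors need no multiplies at all: under Arm's documented FCADD semantics, rotation 90 forms $a + ib$ per complex lane and rotation 270 forms $a - ib$,

$$\texttt{fcadd}_{90}(a,b) = (a_{\mathrm{re}} - b_{\mathrm{im}}) + i\,(a_{\mathrm{im}} + b_{\mathrm{re}}) = a + i b, \qquad \texttt{fcadd}_{270}(a,b) = a - i b,$$

so XP_PROJ above computes the Wilson half-spinor $\chi = \psi_{\mathrm{upper}} + i\,\psi_{\mathrm{lower}}$ with the lower spin components cross-paired (z12-z14 against z21-z23, z15-z17 against z18-z20); the XM/ZP/ZM variants differ only in the rotation and pairing they use.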
// XP_RECON
#define XP_RECON_A64FXd \
asm ( \
"movprfx z6.d, p5/m, z31.d \n\t" \
"fcadd z6.d, p5/m, z6.d, z21.d, 270 \n\t" \
"movprfx z7.d, p5/m, z31.d \n\t" \
"fcadd z7.d, p5/m, z7.d, z22.d, 270 \n\t" \
"movprfx z8.d, p5/m, z31.d \n\t" \
"fcadd z8.d, p5/m, z8.d, z23.d, 270 \n\t" \
"movprfx z9.d, p5/m, z31.d \n\t" \
"fcadd z9.d, p5/m, z9.d, z18.d, 270 \n\t" \
"movprfx z10.d, p5/m, z31.d \n\t" \
"fcadd z10.d, p5/m, z10.d, z19.d, 270 \n\t" \
"movprfx z11.d, p5/m, z31.d \n\t" \
"fcadd z11.d, p5/m, z11.d, z20.d, 270 \n\t" \
"mov z0.d, p5/m, z18.d \n\t" \
"mov z1.d, p5/m, z19.d \n\t" \
"mov z2.d, p5/m, z20.d \n\t" \
"mov z3.d, p5/m, z21.d \n\t" \
"mov z4.d, p5/m, z22.d \n\t" \
"mov z5.d, p5/m, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// XP_RECON_ACCUM
#define XP_RECON_ACCUM_A64FXd \
asm ( \
"fcadd z9.d, p5/m, z9.d, z18.d, 270 \n\t" \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fcadd z10.d, p5/m, z10.d, z19.d, 270 \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fcadd z11.d, p5/m, z11.d, z20.d, 270 \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fcadd z6.d, p5/m, z6.d, z21.d, 270 \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fcadd z7.d, p5/m, z7.d, z22.d, 270 \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fcadd z8.d, p5/m, z8.d, z23.d, 270 \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YP_PROJ
#define YP_PROJ_A64FXd \
{ \
asm ( \
"fsub z12.d, p5/m, z12.d, z21.d \n\t" \
"fsub z13.d, p5/m, z13.d, z22.d \n\t" \
"fsub z14.d, p5/m, z14.d, z23.d \n\t" \
"fadd z15.d, p5/m, z15.d, z18.d \n\t" \
"fadd z16.d, p5/m, z16.d, z19.d \n\t" \
"fadd z17.d, p5/m, z17.d, z20.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// ZP_PROJ
#define ZP_PROJ_A64FXd \
{ \
asm ( \
"fcadd z12.d, p5/m, z12.d, z18.d, 90 \n\t" \
"fcadd z13.d, p5/m, z13.d, z19.d, 90 \n\t" \
"fcadd z14.d, p5/m, z14.d, z20.d, 90 \n\t" \
"fcadd z15.d, p5/m, z15.d, z21.d, 270 \n\t" \
"fcadd z16.d, p5/m, z16.d, z22.d, 270 \n\t" \
"fcadd z17.d, p5/m, z17.d, z23.d, 270 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// TP_PROJ
#define TP_PROJ_A64FXd \
{ \
asm ( \
"fadd z12.d, p5/m, z12.d, z18.d \n\t" \
"fadd z13.d, p5/m, z13.d, z19.d \n\t" \
"fadd z14.d, p5/m, z14.d, z20.d \n\t" \
"fadd z15.d, p5/m, z15.d, z21.d \n\t" \
"fadd z16.d, p5/m, z16.d, z22.d \n\t" \
"fadd z17.d, p5/m, z17.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_PROJ
#define XM_PROJ_A64FXd \
{ \
asm ( \
"fcadd z12.d, p5/m, z12.d, z21.d, 270 \n\t" \
"fcadd z13.d, p5/m, z13.d, z22.d, 270 \n\t" \
"fcadd z14.d, p5/m, z14.d, z23.d, 270 \n\t" \
"fcadd z15.d, p5/m, z15.d, z18.d, 270 \n\t" \
"fcadd z16.d, p5/m, z16.d, z19.d, 270 \n\t" \
"fcadd z17.d, p5/m, z17.d, z20.d, 270 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_RECON
#define XM_RECON_A64FXd \
asm ( \
"movprfx z6.d, p5/m, z31.d \n\t" \
"fcadd z6.d, p5/m, z6.d, z21.d, 90 \n\t" \
"movprfx z7.d, p5/m, z31.d \n\t" \
"fcadd z7.d, p5/m, z7.d, z22.d, 90 \n\t" \
"movprfx z8.d, p5/m, z31.d \n\t" \
"fcadd z8.d, p5/m, z8.d, z23.d, 90 \n\t" \
"movprfx z9.d, p5/m, z31.d \n\t" \
"fcadd z9.d, p5/m, z9.d, z18.d, 90 \n\t" \
"movprfx z10.d, p5/m, z31.d \n\t" \
"fcadd z10.d, p5/m, z10.d, z19.d, 90 \n\t" \
"movprfx z11.d, p5/m, z31.d \n\t" \
"fcadd z11.d, p5/m, z11.d, z20.d, 90 \n\t" \
"mov z0.d, p5/m, z18.d \n\t" \
"mov z1.d, p5/m, z19.d \n\t" \
"mov z2.d, p5/m, z20.d \n\t" \
"mov z3.d, p5/m, z21.d \n\t" \
"mov z4.d, p5/m, z22.d \n\t" \
"mov z5.d, p5/m, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YM_PROJ
#define YM_PROJ_A64FXd \
{ \
asm ( \
"fadd z12.d, p5/m, z12.d, z21.d \n\t" \
"fadd z13.d, p5/m, z13.d, z22.d \n\t" \
"fadd z14.d, p5/m, z14.d, z23.d \n\t" \
"fsub z15.d, p5/m, z15.d, z18.d \n\t" \
"fsub z16.d, p5/m, z16.d, z19.d \n\t" \
"fsub z17.d, p5/m, z17.d, z20.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// ZM_PROJ
#define ZM_PROJ_A64FXd \
{ \
asm ( \
"fcadd z12.d, p5/m, z12.d, z18.d, 270 \n\t" \
"fcadd z13.d, p5/m, z13.d, z19.d, 270 \n\t" \
"fcadd z14.d, p5/m, z14.d, z20.d, 270 \n\t" \
"fcadd z15.d, p5/m, z15.d, z21.d, 90 \n\t" \
"fcadd z16.d, p5/m, z16.d, z22.d, 90 \n\t" \
"fcadd z17.d, p5/m, z17.d, z23.d, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// TM_PROJ
#define TM_PROJ_A64FXd \
{ \
asm ( \
"ptrue p5.d \n\t" \
"fsub z12.d, p5/m, z12.d, z18.d \n\t" \
"fsub z13.d, p5/m, z13.d, z19.d \n\t" \
"fsub z14.d, p5/m, z14.d, z20.d \n\t" \
"fsub z15.d, p5/m, z15.d, z21.d \n\t" \
"fsub z16.d, p5/m, z16.d, z22.d \n\t" \
"fsub z17.d, p5/m, z17.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_RECON_ACCUM
#define XM_RECON_ACCUM_A64FXd \
asm ( \
"fcadd z9.d, p5/m, z9.d, z18.d, 90 \n\t" \
"fcadd z10.d, p5/m, z10.d, z19.d, 90 \n\t" \
"fcadd z11.d, p5/m, z11.d, z20.d, 90 \n\t" \
"fcadd z6.d, p5/m, z6.d, z21.d, 90 \n\t" \
"fcadd z7.d, p5/m, z7.d, z22.d, 90 \n\t" \
"fcadd z8.d, p5/m, z8.d, z23.d, 90 \n\t" \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YP_RECON_ACCUM
#define YP_RECON_ACCUM_A64FXd \
asm ( \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fsub z9.d, p5/m, z9.d, z18.d \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fsub z10.d, p5/m, z10.d, z19.d \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fsub z11.d, p5/m, z11.d, z20.d \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fadd z6.d, p5/m, z6.d, z21.d \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fadd z7.d, p5/m, z7.d, z22.d \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
"fadd z8.d, p5/m, z8.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YM_RECON_ACCUM
#define YM_RECON_ACCUM_A64FXd \
asm ( \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fadd z9.d, p5/m, z9.d, z18.d \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fadd z10.d, p5/m, z10.d, z19.d \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fadd z11.d, p5/m, z11.d, z20.d \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fsub z6.d, p5/m, z6.d, z21.d \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fsub z7.d, p5/m, z7.d, z22.d \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
"fsub z8.d, p5/m, z8.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZP_RECON_ACCUM
#define ZP_RECON_ACCUM_A64FXd \
asm ( \
"fcadd z6.d, p5/m, z6.d, z18.d, 270 \n\t" \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fcadd z7.d, p5/m, z7.d, z19.d, 270 \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fcadd z8.d, p5/m, z8.d, z20.d, 270 \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fcadd z9.d, p5/m, z9.d, z21.d, 90 \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fcadd z10.d, p5/m, z10.d, z22.d, 90 \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fcadd z11.d, p5/m, z11.d, z23.d, 90 \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZM_RECON_ACCUM
#define ZM_RECON_ACCUM_A64FXd \
asm ( \
"fcadd z6.d, p5/m, z6.d, z18.d, 90 \n\t" \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fcadd z7.d, p5/m, z7.d, z19.d, 90 \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fcadd z8.d, p5/m, z8.d, z20.d, 90 \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fcadd z9.d, p5/m, z9.d, z21.d, 270 \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fcadd z10.d, p5/m, z10.d, z22.d, 270 \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fcadd z11.d, p5/m, z11.d, z23.d, 270 \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// TP_RECON_ACCUM
#define TP_RECON_ACCUM_A64FXd \
asm ( \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fadd z6.d, p5/m, z6.d, z18.d \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fadd z7.d, p5/m, z7.d, z19.d \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fadd z8.d, p5/m, z8.d, z20.d \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fadd z9.d, p5/m, z9.d, z21.d \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fadd z10.d, p5/m, z10.d, z22.d \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
"fadd z11.d, p5/m, z11.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// TM_RECON_ACCUM
#define TM_RECON_ACCUM_A64FXd \
asm ( \
"fadd z0.d, p5/m, z0.d, z18.d \n\t" \
"fsub z6.d, p5/m, z6.d, z18.d \n\t" \
"fadd z1.d, p5/m, z1.d, z19.d \n\t" \
"fsub z7.d, p5/m, z7.d, z19.d \n\t" \
"fadd z2.d, p5/m, z2.d, z20.d \n\t" \
"fsub z8.d, p5/m, z8.d, z20.d \n\t" \
"fadd z3.d, p5/m, z3.d, z21.d \n\t" \
"fsub z9.d, p5/m, z9.d, z21.d \n\t" \
"fadd z4.d, p5/m, z4.d, z22.d \n\t" \
"fsub z10.d, p5/m, z10.d, z22.d \n\t" \
"fadd z5.d, p5/m, z5.d, z23.d \n\t" \
"fsub z11.d, p5/m, z11.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZERO_PSI
#define ZERO_PSI_A64FXd \
asm ( \
"ptrue p5.d \n\t" \
"fmov z0.d , 0 \n\t" \
"fmov z1.d , 0 \n\t" \
"fmov z2.d , 0 \n\t" \
"fmov z3.d , 0 \n\t" \
"fmov z4.d , 0 \n\t" \
"fmov z5.d , 0 \n\t" \
"fmov z6.d , 0 \n\t" \
"fmov z7.d , 0 \n\t" \
"fmov z8.d , 0 \n\t" \
"fmov z9.d , 0 \n\t" \
"fmov z10.d , 0 \n\t" \
"fmov z11.d , 0 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// PREFETCH_RESULT_L2_STORE (prefetch store to L2)
#define PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXd(base) \
{ \
asm ( \
"prfd PSTL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PSTL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PSTL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_RESULT_L1_STORE (prefetch store to L1)
#define PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXd(base) \
{ \
asm ( \
"prfd PSTL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PSTL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PSTL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// ADD_RESULT_INTERNAL
#define ADD_RESULT_INTERNAL_A64FXd \
asm ( \
"fadd z0.d, p5/m, z0.d, z12.d \n\t" \
"fadd z1.d, p5/m, z1.d, z13.d \n\t" \
"fadd z2.d, p5/m, z2.d, z14.d \n\t" \
"fadd z3.d, p5/m, z3.d, z15.d \n\t" \
"fadd z4.d, p5/m, z4.d, z16.d \n\t" \
"fadd z5.d, p5/m, z5.d, z17.d \n\t" \
"fadd z6.d, p5/m, z6.d, z18.d \n\t" \
"fadd z7.d, p5/m, z7.d, z19.d \n\t" \
"fadd z8.d, p5/m, z8.d, z20.d \n\t" \
"fadd z9.d, p5/m, z9.d, z21.d \n\t" \
"fadd z10.d, p5/m, z10.d, z22.d \n\t" \
"fadd z11.d, p5/m, z11.d, z23.d \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
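Everything above reduces to two SVE complex-arithmetic idioms: fcadd with rotation 90 adds i times the second operand per complex lane (270 subtracts it), and an fcmla pair with rotations 0 and 90 accumulates a full complex product. A minimal scalar model of those semantics, with std::complex standing in for SVE lanes (illustration only, not part of the kernel):

#include <complex>
#include <cassert>
using cplx = std::complex<double>;

// fcadd semantics: rot 90 -> a + i*b, rot 270 -> a - i*b (per complex lane)
cplx fcadd(cplx a, cplx b, int rot) {
  return (rot == 90) ? a + cplx(0, 1) * b : a - cplx(0, 1) * b;
}

// fcmla semantics: the rot-0 / rot-90 pair used throughout accumulates acc += b*c
cplx fcmla_pair(cplx acc, cplx b, cplx c) {
  acc += cplx(b.real() * c.real(), b.real() * c.imag());   // fcmla, rot 0
  acc += cplx(-b.imag() * c.imag(), b.imag() * c.real());  // fcmla, rot 90
  return acc;
}

int main() {
  cplx b(1, 2), c(3, 4);
  assert(fcmla_pair(cplx(0, 0), b, c) == b * c);  // full complex multiply-accumulate
  assert(fcadd(b, c, 90) == b + cplx(0, 1) * c);  // "+ i*" rotation
}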


@@ -0,0 +1,779 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Fujitsu_A64FX_asm_single.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#define LOAD_CHIMU(base) LOAD_CHIMU_INTERLEAVED_A64FXf(base)
#define PREFETCH_CHIMU_L1(A) PREFETCH_CHIMU_L1_INTERNAL_A64FXf(A)
#define PREFETCH_GAUGE_L1(A) PREFETCH_GAUGE_L1_INTERNAL_A64FXf(A)
#define PREFETCH_CHIMU_L2(A) PREFETCH_CHIMU_L2_INTERNAL_A64FXf(A)
#define PREFETCH_GAUGE_L2(A) PREFETCH_GAUGE_L2_INTERNAL_A64FXf(A)
#define PF_GAUGE(A)
#define PREFETCH_RESULT_L2_STORE(A) PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXf(A)
#define PREFETCH_RESULT_L1_STORE(A) PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXf(A)
#define PREFETCH1_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define PREFETCH_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define LOCK_GAUGE(A)
#define UNLOCK_GAUGE(A)
#define MASK_REGS DECLARATIONS_A64FXf
#define SAVE_RESULT(A,B) RESULT_A64FXf(A); PREFETCH_RESULT_L2_STORE(B)
#define MULT_2SPIN_1(Dir) MULT_2SPIN_1_A64FXf(Dir)
#define MULT_2SPIN_2 MULT_2SPIN_2_A64FXf
#define LOAD_CHI(base) LOAD_CHI_A64FXf(base)
#define ADD_RESULT(base,basep) LOAD_CHIMU(base); ADD_RESULT_INTERNAL_A64FXf; RESULT_A64FXf(base)
#define XP_PROJ XP_PROJ_A64FXf
#define YP_PROJ YP_PROJ_A64FXf
#define ZP_PROJ ZP_PROJ_A64FXf
#define TP_PROJ TP_PROJ_A64FXf
#define XM_PROJ XM_PROJ_A64FXf
#define YM_PROJ YM_PROJ_A64FXf
#define ZM_PROJ ZM_PROJ_A64FXf
#define TM_PROJ TM_PROJ_A64FXf
#define XP_RECON XP_RECON_A64FXf
#define XM_RECON XM_RECON_A64FXf
#define XM_RECON_ACCUM XM_RECON_ACCUM_A64FXf
#define YM_RECON_ACCUM YM_RECON_ACCUM_A64FXf
#define ZM_RECON_ACCUM ZM_RECON_ACCUM_A64FXf
#define TM_RECON_ACCUM TM_RECON_ACCUM_A64FXf
#define XP_RECON_ACCUM XP_RECON_ACCUM_A64FXf
#define YP_RECON_ACCUM YP_RECON_ACCUM_A64FXf
#define ZP_RECON_ACCUM ZP_RECON_ACCUM_A64FXf
#define TP_RECON_ACCUM TP_RECON_ACCUM_A64FXf
#define PERMUTE_DIR0 0
#define PERMUTE_DIR1 1
#define PERMUTE_DIR2 2
#define PERMUTE_DIR3 3
#define PERMUTE PERMUTE_A64FXf;
#define LOAD_TABLE(Dir) if (Dir == 0) { LOAD_TABLE0; } else if (Dir == 1) { LOAD_TABLE1; } else if (Dir == 2) { LOAD_TABLE2; } else if (Dir == 3) { LOAD_TABLE3; }
#define MAYBEPERM(A,perm) if (perm) { PERMUTE; }
// DECLARATIONS
#define DECLARATIONS_A64FXf \
const uint32_t lut[4][16] = { \
{8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7}, \
{4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11}, \
{2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9, 14, 15, 12, 13}, \
{1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14} }; \
asm ( \
"fmov z31.s , 0 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// RESULT
#define RESULT_A64FXf(base) \
{ \
asm ( \
"str z0, [%[storeptr], -6, mul vl] \n\t" \
"str z1, [%[storeptr], -5, mul vl] \n\t" \
"str z2, [%[storeptr], -4, mul vl] \n\t" \
"str z3, [%[storeptr], -3, mul vl] \n\t" \
"str z4, [%[storeptr], -2, mul vl] \n\t" \
"str z5, [%[storeptr], -1, mul vl] \n\t" \
"str z6, [%[storeptr], 0, mul vl] \n\t" \
"str z7, [%[storeptr], 1, mul vl] \n\t" \
"str z8, [%[storeptr], 2, mul vl] \n\t" \
"str z9, [%[storeptr], 3, mul vl] \n\t" \
"str z10, [%[storeptr], 4, mul vl] \n\t" \
"str z11, [%[storeptr], 5, mul vl] \n\t" \
: \
: [storeptr] "r" (base + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_CHIMU_L2 (prefetch to L2)
#define PREFETCH_CHIMU_L2_INTERNAL_A64FXf(base) \
{ \
asm ( \
"prfd PLDL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_CHIMU_L1 (prefetch to L1)
#define PREFETCH_CHIMU_L1_INTERNAL_A64FXf(base) \
{ \
asm ( \
"prfd PLDL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_GAUGE_L2 (prefetch to L2)
#define PREFETCH_GAUGE_L2_INTERNAL_A64FXf(A) \
{ \
const auto & ref(U[sUn](A)); uint64_t baseU = (uint64_t)&ref + 3 * 3 * 64; \
asm ( \
"prfd PLDL2STRM, p5, [%[fetchptr], -4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 12, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 16, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 20, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 24, mul vl] \n\t" \
"prfd PLDL2STRM, p5, [%[fetchptr], 28, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_GAUGE_L1 (prefetch to L1)
#define PREFETCH_GAUGE_L1_INTERNAL_A64FXf(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
asm ( \
"prfd PLDL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PLDL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHI
#define LOAD_CHI_A64FXf(base) \
{ \
asm ( \
"ldr z12, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], 2, mul vl] \n\t" \
"ldr z15, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z16, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z17, [%[fetchptr], 5, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU
#define LOAD_CHIMU_INTERLEAVED_A64FXf(base) \
{ \
asm ( \
"ptrue p5.s \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (base + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU_0213
#define LOAD_CHIMU_0213_A64FXf \
{ \
const SiteSpinor & ref(in[offset]); \
asm ( \
"ptrue p5.s \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
: \
: [fetchptr] "r" (&ref[2][0]) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_CHIMU_0312
#define LOAD_CHIMU_0312_A64FXf \
{ \
const SiteSpinor & ref(in[offset]); \
asm ( \
"ptrue p5.s \n\t" \
"ldr z12, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z21, [%[fetchptr], 3, mul vl] \n\t" \
"ldr z13, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z22, [%[fetchptr], 4, mul vl] \n\t" \
"ldr z14, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z23, [%[fetchptr], 5, mul vl] \n\t" \
"ldr z15, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z18, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z16, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z19, [%[fetchptr], 1, mul vl] \n\t" \
"ldr z17, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z20, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (&ref[2][0]) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// LOAD_TABLE0
#define LOAD_TABLE0 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (0) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE1
#define LOAD_TABLE1 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (1) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE2
#define LOAD_TABLE2 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (2) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_TABLE3
#define LOAD_TABLE3 \
asm ( \
"ldr z30, [%[tableptr], %[index], mul vl] \n\t" \
: \
: [tableptr] "r" (&lut[0]),[index] "i" (3) \
: "memory","cc","p5","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// PERMUTE
#define PERMUTE_A64FXf \
asm ( \
"tbl z12.s, { z12.s }, z30.s \n\t" \
"tbl z13.s, { z13.s }, z30.s \n\t" \
"tbl z14.s, { z14.s }, z30.s \n\t" \
"tbl z15.s, { z15.s }, z30.s \n\t" \
"tbl z16.s, { z16.s }, z30.s \n\t" \
"tbl z17.s, { z17.s }, z30.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// LOAD_GAUGE
#define LOAD_GAUGE \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
{ \
asm ( \
"ptrue p5.s \n\t" \
"ldr z24, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z27, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z28, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z29, [%[fetchptr], 1, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// MULT_2SPIN
#define MULT_2SPIN_1_A64FXf(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
asm ( \
"ldr z24, [%[fetchptr], -6, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -3, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 0, mul vl] \n\t" \
"ldr z27, [%[fetchptr], -5, mul vl] \n\t" \
"ldr z28, [%[fetchptr], -2, mul vl] \n\t" \
"ldr z29, [%[fetchptr], 1, mul vl] \n\t" \
"movprfx z18.s, p5/m, z31.s \n\t" \
"fcmla z18.s, p5/m, z24.s, z12.s, 0 \n\t" \
"movprfx z21.s, p5/m, z31.s \n\t" \
"fcmla z21.s, p5/m, z24.s, z15.s, 0 \n\t" \
"movprfx z19.s, p5/m, z31.s \n\t" \
"fcmla z19.s, p5/m, z25.s, z12.s, 0 \n\t" \
"movprfx z22.s, p5/m, z31.s \n\t" \
"fcmla z22.s, p5/m, z25.s, z15.s, 0 \n\t" \
"movprfx z20.s, p5/m, z31.s \n\t" \
"fcmla z20.s, p5/m, z26.s, z12.s, 0 \n\t" \
"movprfx z23.s, p5/m, z31.s \n\t" \
"fcmla z23.s, p5/m, z26.s, z15.s, 0 \n\t" \
"fcmla z18.s, p5/m, z24.s, z12.s, 90 \n\t" \
"fcmla z21.s, p5/m, z24.s, z15.s, 90 \n\t" \
"fcmla z19.s, p5/m, z25.s, z12.s, 90 \n\t" \
"fcmla z22.s, p5/m, z25.s, z15.s, 90 \n\t" \
"fcmla z20.s, p5/m, z26.s, z12.s, 90 \n\t" \
"fcmla z23.s, p5/m, z26.s, z15.s, 90 \n\t" \
"ldr z24, [%[fetchptr], -4, mul vl] \n\t" \
"ldr z25, [%[fetchptr], -1, mul vl] \n\t" \
"ldr z26, [%[fetchptr], 2, mul vl] \n\t" \
: \
: [fetchptr] "r" (baseU + 2 * 3 * 64) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// MULT_2SPIN_BACKEND
#define MULT_2SPIN_2_A64FXf \
{ \
asm ( \
"fcmla z18.s, p5/m, z27.s, z13.s, 0 \n\t" \
"fcmla z21.s, p5/m, z27.s, z16.s, 0 \n\t" \
"fcmla z19.s, p5/m, z28.s, z13.s, 0 \n\t" \
"fcmla z22.s, p5/m, z28.s, z16.s, 0 \n\t" \
"fcmla z20.s, p5/m, z29.s, z13.s, 0 \n\t" \
"fcmla z23.s, p5/m, z29.s, z16.s, 0 \n\t" \
"fcmla z18.s, p5/m, z27.s, z13.s, 90 \n\t" \
"fcmla z21.s, p5/m, z27.s, z16.s, 90 \n\t" \
"fcmla z19.s, p5/m, z28.s, z13.s, 90 \n\t" \
"fcmla z22.s, p5/m, z28.s, z16.s, 90 \n\t" \
"fcmla z20.s, p5/m, z29.s, z13.s, 90 \n\t" \
"fcmla z23.s, p5/m, z29.s, z16.s, 90 \n\t" \
"fcmla z18.s, p5/m, z24.s, z14.s, 0 \n\t" \
"fcmla z21.s, p5/m, z24.s, z17.s, 0 \n\t" \
"fcmla z19.s, p5/m, z25.s, z14.s, 0 \n\t" \
"fcmla z22.s, p5/m, z25.s, z17.s, 0 \n\t" \
"fcmla z20.s, p5/m, z26.s, z14.s, 0 \n\t" \
"fcmla z23.s, p5/m, z26.s, z17.s, 0 \n\t" \
"fcmla z18.s, p5/m, z24.s, z14.s, 90 \n\t" \
"fcmla z21.s, p5/m, z24.s, z17.s, 90 \n\t" \
"fcmla z19.s, p5/m, z25.s, z14.s, 90 \n\t" \
"fcmla z22.s, p5/m, z25.s, z17.s, 90 \n\t" \
"fcmla z20.s, p5/m, z26.s, z14.s, 90 \n\t" \
"fcmla z23.s, p5/m, z26.s, z17.s, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
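Split across MULT_2SPIN_1 and MULT_2SPIN_2, the kernel applies one gauge link to both spin components of the projected half-spinor, one colour column of U at a time (part 1 multiplies by column 0 in z24-z26, holds column 1 in z27-z29, and refills z24-z26 with column 2; part 2 finishes with columns 1 and 2). A scalar sketch of the arithmetic being vectorised, assuming a 3x3 complex U and a 2x3 complex Chi; the names here are illustrative:

#include <complex>
using cplx = std::complex<float>;

// UChi[s][i] = sum_j U[i][j] * Chi[s][j]; s = spin (2), i, j = colour (3).
// Each += below corresponds to one fcmla rot-0 / rot-90 pair in the macros above.
void mult_2spin(const cplx U[3][3], const cplx Chi[2][3], cplx UChi[2][3]) {
  for (int s = 0; s < 2; s++)
    for (int i = 0; i < 3; i++) {
      UChi[s][i] = cplx(0, 0);
      for (int j = 0; j < 3; j++)
        UChi[s][i] += U[i][j] * Chi[s][j];
    }
}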
// XP_PROJ
#define XP_PROJ_A64FXf \
{ \
asm ( \
"fcadd z12.s, p5/m, z12.s, z21.s, 90 \n\t" \
"fcadd z13.s, p5/m, z13.s, z22.s, 90 \n\t" \
"fcadd z14.s, p5/m, z14.s, z23.s, 90 \n\t" \
"fcadd z15.s, p5/m, z15.s, z18.s, 90 \n\t" \
"fcadd z16.s, p5/m, z16.s, z19.s, 90 \n\t" \
"fcadd z17.s, p5/m, z17.s, z20.s, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XP_RECON
#define XP_RECON_A64FXf \
asm ( \
"movprfx z6.s, p5/m, z31.s \n\t" \
"fcadd z6.s, p5/m, z6.s, z21.s, 270 \n\t" \
"movprfx z7.s, p5/m, z31.s \n\t" \
"fcadd z7.s, p5/m, z7.s, z22.s, 270 \n\t" \
"movprfx z8.s, p5/m, z31.s \n\t" \
"fcadd z8.s, p5/m, z8.s, z23.s, 270 \n\t" \
"movprfx z9.s, p5/m, z31.s \n\t" \
"fcadd z9.s, p5/m, z9.s, z18.s, 270 \n\t" \
"movprfx z10.s, p5/m, z31.s \n\t" \
"fcadd z10.s, p5/m, z10.s, z19.s, 270 \n\t" \
"movprfx z11.s, p5/m, z31.s \n\t" \
"fcadd z11.s, p5/m, z11.s, z20.s, 270 \n\t" \
"mov z0.s, p5/m, z18.s \n\t" \
"mov z1.s, p5/m, z19.s \n\t" \
"mov z2.s, p5/m, z20.s \n\t" \
"mov z3.s, p5/m, z21.s \n\t" \
"mov z4.s, p5/m, z22.s \n\t" \
"mov z5.s, p5/m, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// XP_RECON_ACCUM
#define XP_RECON_ACCUM_A64FXf \
asm ( \
"fcadd z9.s, p5/m, z9.s, z18.s, 270 \n\t" \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fcadd z10.s, p5/m, z10.s, z19.s, 270 \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fcadd z11.s, p5/m, z11.s, z20.s, 270 \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fcadd z6.s, p5/m, z6.s, z21.s, 270 \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fcadd z7.s, p5/m, z7.s, z22.s, 270 \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fcadd z8.s, p5/m, z8.s, z23.s, 270 \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YP_PROJ
#define YP_PROJ_A64FXf \
{ \
asm ( \
"fsub z12.s, p5/m, z12.s, z21.s \n\t" \
"fsub z13.s, p5/m, z13.s, z22.s \n\t" \
"fsub z14.s, p5/m, z14.s, z23.s \n\t" \
"fadd z15.s, p5/m, z15.s, z18.s \n\t" \
"fadd z16.s, p5/m, z16.s, z19.s \n\t" \
"fadd z17.s, p5/m, z17.s, z20.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// ZP_PROJ
#define ZP_PROJ_A64FXf \
{ \
asm ( \
"fcadd z12.s, p5/m, z12.s, z18.s, 90 \n\t" \
"fcadd z13.s, p5/m, z13.s, z19.s, 90 \n\t" \
"fcadd z14.s, p5/m, z14.s, z20.s, 90 \n\t" \
"fcadd z15.s, p5/m, z15.s, z21.s, 270 \n\t" \
"fcadd z16.s, p5/m, z16.s, z22.s, 270 \n\t" \
"fcadd z17.s, p5/m, z17.s, z23.s, 270 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// TP_PROJ
#define TP_PROJ_A64FXf \
{ \
asm ( \
"fadd z12.s, p5/m, z12.s, z18.s \n\t" \
"fadd z13.s, p5/m, z13.s, z19.s \n\t" \
"fadd z14.s, p5/m, z14.s, z20.s \n\t" \
"fadd z15.s, p5/m, z15.s, z21.s \n\t" \
"fadd z16.s, p5/m, z16.s, z22.s \n\t" \
"fadd z17.s, p5/m, z17.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_PROJ
#define XM_PROJ_A64FXf \
{ \
asm ( \
"fcadd z12.s, p5/m, z12.s, z21.s, 270 \n\t" \
"fcadd z13.s, p5/m, z13.s, z22.s, 270 \n\t" \
"fcadd z14.s, p5/m, z14.s, z23.s, 270 \n\t" \
"fcadd z15.s, p5/m, z15.s, z18.s, 270 \n\t" \
"fcadd z16.s, p5/m, z16.s, z19.s, 270 \n\t" \
"fcadd z17.s, p5/m, z17.s, z20.s, 270 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_RECON
#define XM_RECON_A64FXf \
asm ( \
"movprfx z6.s, p5/m, z31.s \n\t" \
"fcadd z6.s, p5/m, z6.s, z21.s, 90 \n\t" \
"movprfx z7.s, p5/m, z31.s \n\t" \
"fcadd z7.s, p5/m, z7.s, z22.s, 90 \n\t" \
"movprfx z8.s, p5/m, z31.s \n\t" \
"fcadd z8.s, p5/m, z8.s, z23.s, 90 \n\t" \
"movprfx z9.s, p5/m, z31.s \n\t" \
"fcadd z9.s, p5/m, z9.s, z18.s, 90 \n\t" \
"movprfx z10.s, p5/m, z31.s \n\t" \
"fcadd z10.s, p5/m, z10.s, z19.s, 90 \n\t" \
"movprfx z11.s, p5/m, z31.s \n\t" \
"fcadd z11.s, p5/m, z11.s, z20.s, 90 \n\t" \
"mov z0.s, p5/m, z18.s \n\t" \
"mov z1.s, p5/m, z19.s \n\t" \
"mov z2.s, p5/m, z20.s \n\t" \
"mov z3.s, p5/m, z21.s \n\t" \
"mov z4.s, p5/m, z22.s \n\t" \
"mov z5.s, p5/m, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YM_PROJ
#define YM_PROJ_A64FXf \
{ \
asm ( \
"fadd z12.s, p5/m, z12.s, z21.s \n\t" \
"fadd z13.s, p5/m, z13.s, z22.s \n\t" \
"fadd z14.s, p5/m, z14.s, z23.s \n\t" \
"fsub z15.s, p5/m, z15.s, z18.s \n\t" \
"fsub z16.s, p5/m, z16.s, z19.s \n\t" \
"fsub z17.s, p5/m, z17.s, z20.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// ZM_PROJ
#define ZM_PROJ_A64FXf \
{ \
asm ( \
"fcadd z12.s, p5/m, z12.s, z18.s, 270 \n\t" \
"fcadd z13.s, p5/m, z13.s, z19.s, 270 \n\t" \
"fcadd z14.s, p5/m, z14.s, z20.s, 270 \n\t" \
"fcadd z15.s, p5/m, z15.s, z21.s, 90 \n\t" \
"fcadd z16.s, p5/m, z16.s, z22.s, 90 \n\t" \
"fcadd z17.s, p5/m, z17.s, z23.s, 90 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// TM_PROJ
#define TM_PROJ_A64FXf \
{ \
asm ( \
"ptrue p5.s \n\t" \
"fsub z12.s, p5/m, z12.s, z18.s \n\t" \
"fsub z13.s, p5/m, z13.s, z19.s \n\t" \
"fsub z14.s, p5/m, z14.s, z20.s \n\t" \
"fsub z15.s, p5/m, z15.s, z21.s \n\t" \
"fsub z16.s, p5/m, z16.s, z22.s \n\t" \
"fsub z17.s, p5/m, z17.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
); \
}
// XM_RECON_ACCUM
#define XM_RECON_ACCUM_A64FXf \
asm ( \
"fcadd z9.s, p5/m, z9.s, z18.s, 90 \n\t" \
"fcadd z10.s, p5/m, z10.s, z19.s, 90 \n\t" \
"fcadd z11.s, p5/m, z11.s, z20.s, 90 \n\t" \
"fcadd z6.s, p5/m, z6.s, z21.s, 90 \n\t" \
"fcadd z7.s, p5/m, z7.s, z22.s, 90 \n\t" \
"fcadd z8.s, p5/m, z8.s, z23.s, 90 \n\t" \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YP_RECON_ACCUM
#define YP_RECON_ACCUM_A64FXf \
asm ( \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fsub z9.s, p5/m, z9.s, z18.s \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fsub z10.s, p5/m, z10.s, z19.s \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fsub z11.s, p5/m, z11.s, z20.s \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fadd z6.s, p5/m, z6.s, z21.s \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fadd z7.s, p5/m, z7.s, z22.s \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
"fadd z8.s, p5/m, z8.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// YM_RECON_ACCUM
#define YM_RECON_ACCUM_A64FXf \
asm ( \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fadd z9.s, p5/m, z9.s, z18.s \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fadd z10.s, p5/m, z10.s, z19.s \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fadd z11.s, p5/m, z11.s, z20.s \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fsub z6.s, p5/m, z6.s, z21.s \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fsub z7.s, p5/m, z7.s, z22.s \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
"fsub z8.s, p5/m, z8.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZP_RECON_ACCUM
#define ZP_RECON_ACCUM_A64FXf \
asm ( \
"fcadd z6.s, p5/m, z6.s, z18.s, 270 \n\t" \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fcadd z7.s, p5/m, z7.s, z19.s, 270 \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fcadd z8.s, p5/m, z8.s, z20.s, 270 \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fcadd z9.s, p5/m, z9.s, z21.s, 90 \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fcadd z10.s, p5/m, z10.s, z22.s, 90 \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fcadd z11.s, p5/m, z11.s, z23.s, 90 \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZM_RECON_ACCUM
#define ZM_RECON_ACCUM_A64FXf \
asm ( \
"fcadd z6.s, p5/m, z6.s, z18.s, 90 \n\t" \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fcadd z7.s, p5/m, z7.s, z19.s, 90 \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fcadd z8.s, p5/m, z8.s, z20.s, 90 \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fcadd z9.s, p5/m, z9.s, z21.s, 270 \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fcadd z10.s, p5/m, z10.s, z22.s, 270 \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fcadd z11.s, p5/m, z11.s, z23.s, 270 \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// TP_RECON_ACCUM
#define TP_RECON_ACCUM_A64FXf \
asm ( \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fadd z6.s, p5/m, z6.s, z18.s \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fadd z7.s, p5/m, z7.s, z19.s \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fadd z8.s, p5/m, z8.s, z20.s \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fadd z9.s, p5/m, z9.s, z21.s \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fadd z10.s, p5/m, z10.s, z22.s \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
"fadd z11.s, p5/m, z11.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// TM_RECON_ACCUM
#define TM_RECON_ACCUM_A64FXf \
asm ( \
"fadd z0.s, p5/m, z0.s, z18.s \n\t" \
"fsub z6.s, p5/m, z6.s, z18.s \n\t" \
"fadd z1.s, p5/m, z1.s, z19.s \n\t" \
"fsub z7.s, p5/m, z7.s, z19.s \n\t" \
"fadd z2.s, p5/m, z2.s, z20.s \n\t" \
"fsub z8.s, p5/m, z8.s, z20.s \n\t" \
"fadd z3.s, p5/m, z3.s, z21.s \n\t" \
"fsub z9.s, p5/m, z9.s, z21.s \n\t" \
"fadd z4.s, p5/m, z4.s, z22.s \n\t" \
"fsub z10.s, p5/m, z10.s, z22.s \n\t" \
"fadd z5.s, p5/m, z5.s, z23.s \n\t" \
"fsub z11.s, p5/m, z11.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// ZERO_PSI
#define ZERO_PSI_A64FXf \
asm ( \
"ptrue p5.s \n\t" \
"fmov z0.s , 0 \n\t" \
"fmov z1.s , 0 \n\t" \
"fmov z2.s , 0 \n\t" \
"fmov z3.s , 0 \n\t" \
"fmov z4.s , 0 \n\t" \
"fmov z5.s , 0 \n\t" \
"fmov z6.s , 0 \n\t" \
"fmov z7.s , 0 \n\t" \
"fmov z8.s , 0 \n\t" \
"fmov z9.s , 0 \n\t" \
"fmov z10.s , 0 \n\t" \
"fmov z11.s , 0 \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);
// PREFETCH_RESULT_L2_STORE (prefetch store to L2)
#define PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXf(base) \
{ \
asm ( \
"prfd PSTL2STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PSTL2STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PSTL2STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// PREFETCH_RESULT_L1_STORE (prefetch store to L1)
#define PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXf(base) \
{ \
asm ( \
"prfd PSTL1STRM, p5, [%[fetchptr], 0, mul vl] \n\t" \
"prfd PSTL1STRM, p5, [%[fetchptr], 4, mul vl] \n\t" \
"prfd PSTL1STRM, p5, [%[fetchptr], 8, mul vl] \n\t" \
: \
: [fetchptr] "r" (base) \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31","memory" \
); \
}
// ADD_RESULT_INTERNAL
#define ADD_RESULT_INTERNAL_A64FXf \
asm ( \
"fadd z0.s, p5/m, z0.s, z12.s \n\t" \
"fadd z1.s, p5/m, z1.s, z13.s \n\t" \
"fadd z2.s, p5/m, z2.s, z14.s \n\t" \
"fadd z3.s, p5/m, z3.s, z15.s \n\t" \
"fadd z4.s, p5/m, z4.s, z16.s \n\t" \
"fadd z5.s, p5/m, z5.s, z17.s \n\t" \
"fadd z6.s, p5/m, z6.s, z18.s \n\t" \
"fadd z7.s, p5/m, z7.s, z19.s \n\t" \
"fadd z8.s, p5/m, z8.s, z20.s \n\t" \
"fadd z9.s, p5/m, z9.s, z21.s \n\t" \
"fadd z10.s, p5/m, z10.s, z22.s \n\t" \
"fadd z11.s, p5/m, z11.s, z23.s \n\t" \
: \
: \
: "p5","cc","z0","z1","z2","z3","z4","z5","z6","z7","z8","z9","z10","z11","z12","z13","z14","z15","z16","z17","z18","z19","z20","z21","z22","z23","z24","z25","z26","z27","z28","z29","z30","z31" \
);


@@ -0,0 +1,601 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Fujitsu_A64FX_intrin_double.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#define LOAD_CHIMU(base) LOAD_CHIMU_INTERLEAVED_A64FXd(base)
#define PREFETCH_CHIMU_L1(A) PREFETCH_CHIMU_L1_INTERNAL_A64FXd(A)
#define PREFETCH_GAUGE_L1(A) PREFETCH_GAUGE_L1_INTERNAL_A64FXd(A)
#define PREFETCH_CHIMU_L2(A) PREFETCH_CHIMU_L2_INTERNAL_A64FXd(A)
#define PREFETCH_GAUGE_L2(A) PREFETCH_GAUGE_L2_INTERNAL_A64FXd(A)
#define PF_GAUGE(A)
#define PREFETCH_RESULT_L2_STORE(A) PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXd(A)
#define PREFETCH_RESULT_L1_STORE(A) PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXd(A)
#define PREFETCH1_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define PREFETCH_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define LOCK_GAUGE(A)
#define UNLOCK_GAUGE(A)
#define MASK_REGS DECLARATIONS_A64FXd
#define SAVE_RESULT(A,B) RESULT_A64FXd(A); PREFETCH_RESULT_L2_STORE(B)
#define MULT_2SPIN_1(Dir) MULT_2SPIN_1_A64FXd(Dir)
#define MULT_2SPIN_2 MULT_2SPIN_2_A64FXd
#define LOAD_CHI(base) LOAD_CHI_A64FXd(base)
#define ADD_RESULT(base,basep) LOAD_CHIMU(base); ADD_RESULT_INTERNAL_A64FXd; RESULT_A64FXd(base)
#define XP_PROJ XP_PROJ_A64FXd
#define YP_PROJ YP_PROJ_A64FXd
#define ZP_PROJ ZP_PROJ_A64FXd
#define TP_PROJ TP_PROJ_A64FXd
#define XM_PROJ XM_PROJ_A64FXd
#define YM_PROJ YM_PROJ_A64FXd
#define ZM_PROJ ZM_PROJ_A64FXd
#define TM_PROJ TM_PROJ_A64FXd
#define XP_RECON XP_RECON_A64FXd
#define XM_RECON XM_RECON_A64FXd
#define XM_RECON_ACCUM XM_RECON_ACCUM_A64FXd
#define YM_RECON_ACCUM YM_RECON_ACCUM_A64FXd
#define ZM_RECON_ACCUM ZM_RECON_ACCUM_A64FXd
#define TM_RECON_ACCUM TM_RECON_ACCUM_A64FXd
#define XP_RECON_ACCUM XP_RECON_ACCUM_A64FXd
#define YP_RECON_ACCUM YP_RECON_ACCUM_A64FXd
#define ZP_RECON_ACCUM ZP_RECON_ACCUM_A64FXd
#define TP_RECON_ACCUM TP_RECON_ACCUM_A64FXd
#define PERMUTE_DIR0 0
#define PERMUTE_DIR1 1
#define PERMUTE_DIR2 2
#define PERMUTE_DIR3 3
#define PERMUTE PERMUTE_A64FXd;
#define LOAD_TABLE(Dir) if (Dir == 0) { LOAD_TABLE0; } else if (Dir == 1) { LOAD_TABLE1; } else if (Dir == 2) { LOAD_TABLE2; }
#define MAYBEPERM(Dir,perm) if (Dir != 3) { if (perm) { PERMUTE; } }
// DECLARATIONS
#define DECLARATIONS_A64FXd \
const uint64_t lut[4][8] = { \
{4, 5, 6, 7, 0, 1, 2, 3}, \
{2, 3, 0, 1, 6, 7, 4, 5}, \
{1, 0, 3, 2, 5, 4, 7, 6}, \
{0, 1, 2, 4, 5, 6, 7, 8} };\
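/* note: row 3 is a dummy pattern; LOAD_TABLE/MAYBEPERM above never select dir 3 in double precision */ \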
svfloat64_t result_00; \
svfloat64_t result_01; \
svfloat64_t result_02; \
svfloat64_t result_10; \
svfloat64_t result_11; \
svfloat64_t result_12; \
svfloat64_t result_20; \
svfloat64_t result_21; \
svfloat64_t result_22; \
svfloat64_t result_30; \
svfloat64_t result_31; \
svfloat64_t result_32; \
svfloat64_t Chi_00; \
svfloat64_t Chi_01; \
svfloat64_t Chi_02; \
svfloat64_t Chi_10; \
svfloat64_t Chi_11; \
svfloat64_t Chi_12; \
svfloat64_t UChi_00; \
svfloat64_t UChi_01; \
svfloat64_t UChi_02; \
svfloat64_t UChi_10; \
svfloat64_t UChi_11; \
svfloat64_t UChi_12; \
svfloat64_t U_00; \
svfloat64_t U_10; \
svfloat64_t U_20; \
svfloat64_t U_01; \
svfloat64_t U_11; \
svfloat64_t U_21; \
svbool_t pg1; \
pg1 = svptrue_b64(); \
svuint64_t table0; \
svfloat64_t zero0; \
zero0 = svdup_f64(0.);
#define Chimu_00 Chi_00
#define Chimu_01 Chi_01
#define Chimu_02 Chi_02
#define Chimu_10 Chi_10
#define Chimu_11 Chi_11
#define Chimu_12 Chi_12
#define Chimu_20 UChi_00
#define Chimu_21 UChi_01
#define Chimu_22 UChi_02
#define Chimu_30 UChi_10
#define Chimu_31 UChi_11
#define Chimu_32 UChi_12
// RESULT
#define RESULT_A64FXd(base) \
{ \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -6 * 64), result_00); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -5 * 64), result_01); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -4 * 64), result_02); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -3 * 64), result_10); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -2 * 64), result_11); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + -1 * 64), result_12); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 0 * 64), result_20); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 1 * 64), result_21); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 2 * 64), result_22); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 3 * 64), result_30); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 4 * 64), result_31); \
svst1(pg1, (float64_t*)(base + 2 * 3 * 64 + 5 * 64), result_32); \
}
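A 512-bit SVE vector is 64 bytes, and a site spinor occupies 12 consecutive vectors (4 spins x 3 colours); the stores above bias the base pointer by 2 * 3 * 64 bytes, half a spinor, so the twelve components sit at signed vector offsets -6..+5. A small sketch of the resulting addresses, assuming that fixed 64-byte vector length:

#include <cstdint>
#include <cstdio>

int main() {
  const int64_t VL = 64;                // assumed vector length in bytes (512-bit SVE)
  int64_t base   = 0x1000;              // hypothetical spinor start address
  int64_t biased = base + 2 * 3 * VL;   // bias to the middle of the 12-vector spinor
  for (int64_t v = -6; v <= 5; v++)     // result_00 .. result_32 in the macro above
    std::printf("component %2lld at offset %+3lld vl -> 0x%llx\n",
                (long long)(v + 6), (long long)v,
                (unsigned long long)(biased + v * VL));
}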
// PREFETCH_CHIMU_L2 (prefetch to L2)
#define PREFETCH_CHIMU_L2_INTERNAL_A64FXd(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PLDL2STRM); \
}
// PREFETCH_CHIMU_L1 (prefetch to L1)
#define PREFETCH_CHIMU_L1_INTERNAL_A64FXd(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PLDL1STRM); \
}
// PREFETCH_GAUGE_L2 (prefetch to L2)
#define PREFETCH_GAUGE_L2_INTERNAL_A64FXd(A) \
{ \
const auto & ref(U[sUn](A)); uint64_t baseU = (uint64_t)&ref + 3 * 3 * 64; \
svprfd(pg1, (int64_t*)(baseU + -256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 0), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 512), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 768), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1024), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1280), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1536), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1792), SV_PLDL2STRM); \
}
// PREFETCH_GAUGE_L1 (prefetch to L1)
#define PREFETCH_GAUGE_L1_INTERNAL_A64FXd(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
svprfd(pg1, (int64_t*)(baseU + 0), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(baseU + 256), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(baseU + 512), SV_PLDL1STRM); \
}
// LOAD_CHI
#define LOAD_CHI_A64FXd(base) \
{ \
Chi_00 = svld1(pg1, (float64_t*)(base + 0 * 64)); \
Chi_01 = svld1(pg1, (float64_t*)(base + 1 * 64)); \
Chi_02 = svld1(pg1, (float64_t*)(base + 2 * 64)); \
Chi_10 = svld1(pg1, (float64_t*)(base + 3 * 64)); \
Chi_11 = svld1(pg1, (float64_t*)(base + 4 * 64)); \
Chi_12 = svld1(pg1, (float64_t*)(base + 5 * 64)); \
}
// LOAD_CHIMU
#define LOAD_CHIMU_INTERLEAVED_A64FXd(base) \
{ \
Chimu_00 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_30 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_10 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_20 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_01 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_31 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_11 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_21 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_02 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_32 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 5 * 64)); \
Chimu_12 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_22 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 2 * 64)); \
}
// LOAD_CHIMU_0213
#define LOAD_CHIMU_0213_A64FXd \
{ \
const SiteSpinor & ref(in[offset]); \
Chimu_00 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_20 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_01 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_21 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_02 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_22 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 2 * 64)); \
Chimu_10 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_30 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_11 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_31 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_12 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_32 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 5 * 64)); \
}
// LOAD_CHIMU_0312
#define LOAD_CHIMU_0312_A64FXd \
{ \
const SiteSpinor & ref(in[offset]); \
Chimu_00 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_30 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_01 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_31 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_02 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_32 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 5 * 64)); \
Chimu_10 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_20 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_11 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_21 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_12 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_22 = svld1(pg1, (float64_t*)(base + 2 * 3 * 64 + 2 * 64)); \
}
// LOAD_TABLE0
#define LOAD_TABLE0 \
table0 = svld1(pg1, (uint64_t*)&lut[0]);
// LOAD_TABLE1
#define LOAD_TABLE1 \
table0 = svld1(pg1, (uint64_t*)&lut[1]);
// LOAD_TABLE2
#define LOAD_TABLE2 \
table0 = svld1(pg1, (uint64_t*)&lut[2]);
// LOAD_TABLE3
#define LOAD_TABLE3 \
table0 = svld1(pg1, (uint64_t*)&lut[3]);
// PERMUTE
#define PERMUTE_A64FXd \
Chi_00 = svtbl(Chi_00, table0); \
Chi_01 = svtbl(Chi_01, table0); \
Chi_02 = svtbl(Chi_02, table0); \
Chi_10 = svtbl(Chi_10, table0); \
Chi_11 = svtbl(Chi_11, table0); \
Chi_12 = svtbl(Chi_12, table0);
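svtbl is a plain lane gather, out[i] = in[table[i]], so each lut row above is one shuffle pattern: row 0 exchanges the two vector halves, row 1 exchanges quarters, row 2 exchanges adjacent lanes. A scalar model of the permute, assuming 8 double-precision lanes per vector:

#include <cstdint>

// svtbl semantics as used by PERMUTE above: out[i] = in[table[i]]
void tbl8(const double in[8], const uint64_t table[8], double out[8]) {
  for (int i = 0; i < 8; i++) out[i] = in[table[i]];
}
// e.g. lut[0] = {4,5,6,7,0,1,2,3} maps out[0..3] = in[4..7] and vice versa,
// i.e. it swaps the two halves of the vector.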
// LOAD_GAUGE
#define LOAD_GAUGE \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
{ \
U_00 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -6 * 64)); \
U_10 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -3 * 64)); \
U_20 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + 0 * 64)); \
U_01 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -5 * 64)); \
U_11 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -2 * 64)); \
U_21 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + 1 * 64)); \
}
// MULT_2SPIN
#define MULT_2SPIN_1_A64FXd(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
U_00 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -6 * 64)); \
U_10 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -3 * 64)); \
U_20 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + 0 * 64)); \
U_01 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -5 * 64)); \
U_11 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -2 * 64)); \
U_21 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + 1 * 64)); \
UChi_00 = svcmla_x(pg1, zero0, U_00, Chi_00, 0); \
UChi_10 = svcmla_x(pg1, zero0, U_00, Chi_10, 0); \
UChi_01 = svcmla_x(pg1, zero0, U_10, Chi_00, 0); \
UChi_11 = svcmla_x(pg1, zero0, U_10, Chi_10, 0); \
UChi_02 = svcmla_x(pg1, zero0, U_20, Chi_00, 0); \
UChi_12 = svcmla_x(pg1, zero0, U_20, Chi_10, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_00, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_10, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_00, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_10, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_00, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_10, 90); \
U_00 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -4 * 64)); \
U_10 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + -1 * 64)); \
U_20 = svld1(pg1, (float64_t*)(baseU + 2 * 3 * 64 + 2 * 64)); \
}
// MULT_2SPIN_BACKEND
#define MULT_2SPIN_2_A64FXd \
{ \
UChi_00 = svcmla_x(pg1, UChi_00, U_01, Chi_01, 0); \
UChi_10 = svcmla_x(pg1, UChi_10, U_01, Chi_11, 0); \
UChi_01 = svcmla_x(pg1, UChi_01, U_11, Chi_01, 0); \
UChi_11 = svcmla_x(pg1, UChi_11, U_11, Chi_11, 0); \
UChi_02 = svcmla_x(pg1, UChi_02, U_21, Chi_01, 0); \
UChi_12 = svcmla_x(pg1, UChi_12, U_21, Chi_11, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_01, Chi_01, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_01, Chi_11, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_11, Chi_01, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_11, Chi_11, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_21, Chi_01, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_21, Chi_11, 90); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_02, 0); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_12, 0); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_02, 0); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_12, 0); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_02, 0); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_12, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_02, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_12, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_02, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_12, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_02, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_12, 90); \
}
// XP_PROJ
#define XP_PROJ_A64FXd \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_30, 90); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_31, 90); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_32, 90); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_20, 90); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_21, 90); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_22, 90); \
}
// XP_RECON
#define XP_RECON_A64FXd \
result_20 = svcadd_x(pg1, zero0, UChi_10, 270); \
result_21 = svcadd_x(pg1, zero0, UChi_11, 270); \
result_22 = svcadd_x(pg1, zero0, UChi_12, 270); \
result_30 = svcadd_x(pg1, zero0, UChi_00, 270); \
result_31 = svcadd_x(pg1, zero0, UChi_01, 270); \
result_32 = svcadd_x(pg1, zero0, UChi_02, 270); \
result_00 = UChi_00; \
result_01 = UChi_01; \
result_02 = UChi_02; \
result_10 = UChi_10; \
result_11 = UChi_11; \
result_12 = UChi_12;
// XP_RECON_ACCUM
#define XP_RECON_ACCUM_A64FXd \
result_30 = svcadd_x(pg1, result_30, UChi_00, 270); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_31 = svcadd_x(pg1, result_31, UChi_01, 270); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_32 = svcadd_x(pg1, result_32, UChi_02, 270); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_20 = svcadd_x(pg1, result_20, UChi_10, 270); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_21 = svcadd_x(pg1, result_21, UChi_11, 270); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_22 = svcadd_x(pg1, result_22, UChi_12, 270); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// YP_PROJ
#define YP_PROJ_A64FXd \
{ \
Chi_00 = svsub_x(pg1, Chimu_00, Chimu_30); \
Chi_01 = svsub_x(pg1, Chimu_01, Chimu_31); \
Chi_02 = svsub_x(pg1, Chimu_02, Chimu_32); \
Chi_10 = svadd_x(pg1, Chimu_10, Chimu_20); \
Chi_11 = svadd_x(pg1, Chimu_11, Chimu_21); \
Chi_12 = svadd_x(pg1, Chimu_12, Chimu_22); \
}
// ZP_PROJ
#define ZP_PROJ_A64FXd \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_20, 90); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_21, 90); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_22, 90); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_30, 270); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_31, 270); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_32, 270); \
}
// TP_PROJ
#define TP_PROJ_A64FXd \
{ \
Chi_00 = svadd_x(pg1, Chimu_00, Chimu_20); \
Chi_01 = svadd_x(pg1, Chimu_01, Chimu_21); \
Chi_02 = svadd_x(pg1, Chimu_02, Chimu_22); \
Chi_10 = svadd_x(pg1, Chimu_10, Chimu_30); \
Chi_11 = svadd_x(pg1, Chimu_11, Chimu_31); \
Chi_12 = svadd_x(pg1, Chimu_12, Chimu_32); \
}
// XM_PROJ
#define XM_PROJ_A64FXd \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_30, 270); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_31, 270); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_32, 270); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_20, 270); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_21, 270); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_22, 270); \
}
// XM_RECON
#define XM_RECON_A64FXd \
result_20 = svcadd_x(pg1, zero0, UChi_10, 90); \
result_21 = svcadd_x(pg1, zero0, UChi_11, 90); \
result_22 = svcadd_x(pg1, zero0, UChi_12, 90); \
result_30 = svcadd_x(pg1, zero0, UChi_00, 90); \
result_31 = svcadd_x(pg1, zero0, UChi_01, 90); \
result_32 = svcadd_x(pg1, zero0, UChi_02, 90); \
result_00 = UChi_00; \
result_01 = UChi_01; \
result_02 = UChi_02; \
result_10 = UChi_10; \
result_11 = UChi_11; \
result_12 = UChi_12;
// YM_PROJ
#define YM_PROJ_A64FXd \
{ \
Chi_00 = svadd_x(pg1, Chimu_00, Chimu_30); \
Chi_01 = svadd_x(pg1, Chimu_01, Chimu_31); \
Chi_02 = svadd_x(pg1, Chimu_02, Chimu_32); \
Chi_10 = svsub_x(pg1, Chimu_10, Chimu_20); \
Chi_11 = svsub_x(pg1, Chimu_11, Chimu_21); \
Chi_12 = svsub_x(pg1, Chimu_12, Chimu_22); \
}
// ZM_PROJ
#define ZM_PROJ_A64FXd \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_20, 270); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_21, 270); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_22, 270); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_30, 90); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_31, 90); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_32, 90); \
}
// TM_PROJ
#define TM_PROJ_A64FXd \
{ \
Chi_00 = svsub_x(pg1, Chimu_00, Chimu_20); \
Chi_01 = svsub_x(pg1, Chimu_01, Chimu_21); \
Chi_02 = svsub_x(pg1, Chimu_02, Chimu_22); \
Chi_10 = svsub_x(pg1, Chimu_10, Chimu_30); \
Chi_11 = svsub_x(pg1, Chimu_11, Chimu_31); \
Chi_12 = svsub_x(pg1, Chimu_12, Chimu_32); \
}
// XM_RECON_ACCUM
#define XM_RECON_ACCUM_A64FXd \
result_30 = svcadd_x(pg1, result_30, UChi_00, 90); \
result_31 = svcadd_x(pg1, result_31, UChi_01, 90); \
result_32 = svcadd_x(pg1, result_32, UChi_02, 90); \
result_20 = svcadd_x(pg1, result_20, UChi_10, 90); \
result_21 = svcadd_x(pg1, result_21, UChi_11, 90); \
result_22 = svcadd_x(pg1, result_22, UChi_12, 90); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// YP_RECON_ACCUM
#define YP_RECON_ACCUM_A64FXd \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_30 = svsub_x(pg1, result_30, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_31 = svsub_x(pg1, result_31, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_32 = svsub_x(pg1, result_32, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_20 = svadd_x(pg1, result_20, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_21 = svadd_x(pg1, result_21, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_22 = svadd_x(pg1, result_22, UChi_12);
// YM_RECON_ACCUM
#define YM_RECON_ACCUM_A64FXd \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_30 = svadd_x(pg1, result_30, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_31 = svadd_x(pg1, result_31, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_32 = svadd_x(pg1, result_32, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_20 = svsub_x(pg1, result_20, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_21 = svsub_x(pg1, result_21, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_22 = svsub_x(pg1, result_22, UChi_12);
// ZP_RECON_ACCUM
#define ZP_RECON_ACCUM_A64FXd \
result_20 = svcadd_x(pg1, result_20, UChi_00, 270); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_21 = svcadd_x(pg1, result_21, UChi_01, 270); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_22 = svcadd_x(pg1, result_22, UChi_02, 270); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_30 = svcadd_x(pg1, result_30, UChi_10, 90); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_31 = svcadd_x(pg1, result_31, UChi_11, 90); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_32 = svcadd_x(pg1, result_32, UChi_12, 90); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// ZM_RECON_ACCUM
#define ZM_RECON_ACCUM_A64FXd \
result_20 = svcadd_x(pg1, result_20, UChi_00, 90); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_21 = svcadd_x(pg1, result_21, UChi_01, 90); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_22 = svcadd_x(pg1, result_22, UChi_02, 90); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_30 = svcadd_x(pg1, result_30, UChi_10, 270); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_31 = svcadd_x(pg1, result_31, UChi_11, 270); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_32 = svcadd_x(pg1, result_32, UChi_12, 270); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// TP_RECON_ACCUM
#define TP_RECON_ACCUM_A64FXd \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_20 = svadd_x(pg1, result_20, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_21 = svadd_x(pg1, result_21, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_22 = svadd_x(pg1, result_22, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_30 = svadd_x(pg1, result_30, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_31 = svadd_x(pg1, result_31, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_32 = svadd_x(pg1, result_32, UChi_12);
// TM_RECON_ACCUM
#define TM_RECON_ACCUM_A64FXd \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_20 = svsub_x(pg1, result_20, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_21 = svsub_x(pg1, result_21, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_22 = svsub_x(pg1, result_22, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_30 = svsub_x(pg1, result_30, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_31 = svsub_x(pg1, result_31, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_32 = svsub_x(pg1, result_32, UChi_12);
// ZERO_PSI
#define ZERO_PSI_A64FXd \
result_00 = svdup_f64(0.); \
result_01 = svdup_f64(0.); \
result_02 = svdup_f64(0.); \
result_10 = svdup_f64(0.); \
result_11 = svdup_f64(0.); \
result_12 = svdup_f64(0.); \
result_20 = svdup_f64(0.); \
result_21 = svdup_f64(0.); \
result_22 = svdup_f64(0.); \
result_30 = svdup_f64(0.); \
result_31 = svdup_f64(0.); \
result_32 = svdup_f64(0.);
// PREFETCH_RESULT_L2_STORE (prefetch store to L2)
#define PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXd(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PSTL2STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PSTL2STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PSTL2STRM); \
}
// PREFETCH_RESULT_L1_STORE (prefetch store to L1)
#define PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXd(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PSTL1STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PSTL1STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PSTL1STRM); \
}
// ADD_RESULT_INTERNAL
#define ADD_RESULT_INTERNAL_A64FXd \
result_00 = svadd_x(pg1, result_00, Chimu_00); \
result_01 = svadd_x(pg1, result_01, Chimu_01); \
result_02 = svadd_x(pg1, result_02, Chimu_02); \
result_10 = svadd_x(pg1, result_10, Chimu_10); \
result_11 = svadd_x(pg1, result_11, Chimu_11); \
result_12 = svadd_x(pg1, result_12, Chimu_12); \
result_20 = svadd_x(pg1, result_20, Chimu_20); \
result_21 = svadd_x(pg1, result_21, Chimu_21); \
result_22 = svadd_x(pg1, result_22, Chimu_22); \
result_30 = svadd_x(pg1, result_30, Chimu_30); \
result_31 = svadd_x(pg1, result_31, Chimu_31); \
result_32 = svadd_x(pg1, result_32, Chimu_32);
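Taken together, the PROJ/RECON pairs above are the standard Wilson two-spinor compression: project the four spin components down to two (for +x, chi_0 = psi_0 + i*psi_3 and chi_1 = psi_1 + i*psi_2, the rot-90 svcadd), multiply by the link, then rebuild the lower two components as -i times the swapped upper ones (the rot-270 svcadd from zero). A scalar sketch of that round trip for a single colour index, illustrative only:

#include <complex>
using cplx = std::complex<double>;
static const cplx iu(0, 1);  // the imaginary unit

// XP_PROJ: four spin components -> two (svcadd rot 90 is "+ i*")
void xp_proj(const cplx psi[4], cplx chi[2]) {
  chi[0] = psi[0] + iu * psi[3];
  chi[1] = psi[1] + iu * psi[2];
}

// XP_RECON: two components -> four (svcadd rot 270 from zero is "- i*")
void xp_recon(const cplx uchi[2], cplx result[4]) {
  result[0] = uchi[0];
  result[1] = uchi[1];
  result[2] = -iu * uchi[1];
  result[3] = -iu * uchi[0];
}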


@@ -0,0 +1,601 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Fujitsu_A64FX_intrin_single.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#define LOAD_CHIMU(base) LOAD_CHIMU_INTERLEAVED_A64FXf(base)
#define PREFETCH_CHIMU_L1(A) PREFETCH_CHIMU_L1_INTERNAL_A64FXf(A)
#define PREFETCH_GAUGE_L1(A) PREFETCH_GAUGE_L1_INTERNAL_A64FXf(A)
#define PREFETCH_CHIMU_L2(A) PREFETCH_CHIMU_L2_INTERNAL_A64FXf(A)
#define PREFETCH_GAUGE_L2(A) PREFETCH_GAUGE_L2_INTERNAL_A64FXf(A)
#define PF_GAUGE(A)
#define PREFETCH_RESULT_L2_STORE(A) PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXf(A)
#define PREFETCH_RESULT_L1_STORE(A) PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXf(A)
#define PREFETCH1_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define PREFETCH_CHIMU(A) PREFETCH_CHIMU_L1(A)
#define LOCK_GAUGE(A)
#define UNLOCK_GAUGE(A)
#define MASK_REGS DECLARATIONS_A64FXf
#define SAVE_RESULT(A,B) RESULT_A64FXf(A); PREFETCH_RESULT_L2_STORE(B)
#define MULT_2SPIN_1(Dir) MULT_2SPIN_1_A64FXf(Dir)
#define MULT_2SPIN_2 MULT_2SPIN_2_A64FXf
#define LOAD_CHI(base) LOAD_CHI_A64FXf(base)
#define ADD_RESULT(base,basep) LOAD_CHIMU(base); ADD_RESULT_INTERNAL_A64FXf; RESULT_A64FXf(base)
#define XP_PROJ XP_PROJ_A64FXf
#define YP_PROJ YP_PROJ_A64FXf
#define ZP_PROJ ZP_PROJ_A64FXf
#define TP_PROJ TP_PROJ_A64FXf
#define XM_PROJ XM_PROJ_A64FXf
#define YM_PROJ YM_PROJ_A64FXf
#define ZM_PROJ ZM_PROJ_A64FXf
#define TM_PROJ TM_PROJ_A64FXf
#define XP_RECON XP_RECON_A64FXf
#define XM_RECON XM_RECON_A64FXf
#define XM_RECON_ACCUM XM_RECON_ACCUM_A64FXf
#define YM_RECON_ACCUM YM_RECON_ACCUM_A64FXf
#define ZM_RECON_ACCUM ZM_RECON_ACCUM_A64FXf
#define TM_RECON_ACCUM TM_RECON_ACCUM_A64FXf
#define XP_RECON_ACCUM XP_RECON_ACCUM_A64FXf
#define YP_RECON_ACCUM YP_RECON_ACCUM_A64FXf
#define ZP_RECON_ACCUM ZP_RECON_ACCUM_A64FXf
#define TP_RECON_ACCUM TP_RECON_ACCUM_A64FXf
#define PERMUTE_DIR0 0
#define PERMUTE_DIR1 1
#define PERMUTE_DIR2 2
#define PERMUTE_DIR3 3
#define PERMUTE PERMUTE_A64FXf;
#define LOAD_TABLE(Dir) if (Dir == 0) { LOAD_TABLE0; } else if (Dir == 1) { LOAD_TABLE1; } else if (Dir == 2) { LOAD_TABLE2; } else if (Dir == 3) { LOAD_TABLE3; }
#define MAYBEPERM(A,perm) if (perm) { PERMUTE; }
// DECLARATIONS
#define DECLARATIONS_A64FXf \
const uint32_t lut[4][16] = { \
{8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7}, \
{4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11}, \
{2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9, 14, 15, 12, 13}, \
{1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14} }; \
svfloat32_t result_00; \
svfloat32_t result_01; \
svfloat32_t result_02; \
svfloat32_t result_10; \
svfloat32_t result_11; \
svfloat32_t result_12; \
svfloat32_t result_20; \
svfloat32_t result_21; \
svfloat32_t result_22; \
svfloat32_t result_30; \
svfloat32_t result_31; \
svfloat32_t result_32; \
svfloat32_t Chi_00; \
svfloat32_t Chi_01; \
svfloat32_t Chi_02; \
svfloat32_t Chi_10; \
svfloat32_t Chi_11; \
svfloat32_t Chi_12; \
svfloat32_t UChi_00; \
svfloat32_t UChi_01; \
svfloat32_t UChi_02; \
svfloat32_t UChi_10; \
svfloat32_t UChi_11; \
svfloat32_t UChi_12; \
svfloat32_t U_00; \
svfloat32_t U_10; \
svfloat32_t U_20; \
svfloat32_t U_01; \
svfloat32_t U_11; \
svfloat32_t U_21; \
svbool_t pg1; \
pg1 = svptrue_b32(); \
svuint32_t table0; \
svfloat32_t zero0; \
zero0 = svdup_f32(0.);
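// Note (added for clarity): each row of lut[] is an svtbl index vector
// realizing one PERMUTE_DIRn shuffle on the 16 single-precision lanes:
// row 0 swaps the two 8-element halves, row 1 swaps 4-element blocks,
// row 2 swaps 2-element blocks, and row 3 swaps neighboring elements.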
#define Chimu_00 Chi_00
#define Chimu_01 Chi_01
#define Chimu_02 Chi_02
#define Chimu_10 Chi_10
#define Chimu_11 Chi_11
#define Chimu_12 Chi_12
#define Chimu_20 UChi_00
#define Chimu_21 UChi_01
#define Chimu_22 UChi_02
#define Chimu_30 UChi_10
#define Chimu_31 UChi_11
#define Chimu_32 UChi_12
// RESULT
#define RESULT_A64FXf(base) \
{ \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -6 * 64), result_00); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -5 * 64), result_01); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -4 * 64), result_02); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -3 * 64), result_10); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -2 * 64), result_11); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + -1 * 64), result_12); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 0 * 64), result_20); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 1 * 64), result_21); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 2 * 64), result_22); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 3 * 64), result_30); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 4 * 64), result_31); \
svst1(pg1, (float32_t*)(base + 2 * 3 * 64 + 5 * 64), result_32); \
}
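// Note (added for clarity): all offsets are byte offsets. One 512-bit
// SVE vector is 64 bytes (16 floats), and a site spinor is 12 such
// vectors (4 spins x 3 colors) stored consecutively; the 2 * 3 * 64
// bias cancels the negative multiples, so result_00 lands at byte 0
// and result_32 at byte 11 * 64.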
// PREFETCH_CHIMU_L2 (prefetch to L2)
#define PREFETCH_CHIMU_L2_INTERNAL_A64FXf(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PLDL2STRM); \
}
// PREFETCH_CHIMU_L1 (prefetch to L1)
#define PREFETCH_CHIMU_L1_INTERNAL_A64FXf(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PLDL1STRM); \
}
// PREFETCH_GAUGE_L2 (prefetch to L2)
#define PREFETCH_GAUGE_L2_INTERNAL_A64FXf(A) \
{ \
const auto & ref(U[sUn](A)); uint64_t baseU = (uint64_t)&ref + 3 * 3 * 64; \
svprfd(pg1, (int64_t*)(baseU + -256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 0), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 256), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 512), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 768), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1024), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1280), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1536), SV_PLDL2STRM); \
svprfd(pg1, (int64_t*)(baseU + 1792), SV_PLDL2STRM); \
}
// PREFETCH_GAUGE_L1 (prefetch to L1)
#define PREFETCH_GAUGE_L1_INTERNAL_A64FXf(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
svprfd(pg1, (int64_t*)(baseU + 0), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(baseU + 256), SV_PLDL1STRM); \
svprfd(pg1, (int64_t*)(baseU + 512), SV_PLDL1STRM); \
}
// LOAD_CHI
#define LOAD_CHI_A64FXf(base) \
{ \
Chi_00 = svld1(pg1, (float32_t*)(base + 0 * 64)); \
Chi_01 = svld1(pg1, (float32_t*)(base + 1 * 64)); \
Chi_02 = svld1(pg1, (float32_t*)(base + 2 * 64)); \
Chi_10 = svld1(pg1, (float32_t*)(base + 3 * 64)); \
Chi_11 = svld1(pg1, (float32_t*)(base + 4 * 64)); \
Chi_12 = svld1(pg1, (float32_t*)(base + 5 * 64)); \
}
// LOAD_CHIMU
#define LOAD_CHIMU_INTERLEAVED_A64FXf(base) \
{ \
Chimu_00 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_30 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_10 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_20 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_01 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_31 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_11 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_21 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_02 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_32 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 5 * 64)); \
Chimu_12 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_22 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 2 * 64)); \
}
// LOAD_CHIMU_0213
#define LOAD_CHIMU_0213_A64FXf \
{ \
const SiteSpinor & ref(in[offset]); \
Chimu_00 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_20 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_01 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_21 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_02 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_22 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 2 * 64)); \
Chimu_10 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_30 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_11 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_31 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_12 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_32 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 5 * 64)); \
}
// LOAD_CHIMU_0312
#define LOAD_CHIMU_0312_A64FXf \
{ \
const SiteSpinor & ref(in[offset]); \
Chimu_00 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -6 * 64)); \
Chimu_30 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 3 * 64)); \
Chimu_01 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -5 * 64)); \
Chimu_31 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 4 * 64)); \
Chimu_02 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -4 * 64)); \
Chimu_32 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 5 * 64)); \
Chimu_10 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -3 * 64)); \
Chimu_20 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 0 * 64)); \
Chimu_11 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -2 * 64)); \
Chimu_21 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 1 * 64)); \
Chimu_12 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + -1 * 64)); \
Chimu_22 = svld1(pg1, (float32_t*)(base + 2 * 3 * 64 + 2 * 64)); \
}
// LOAD_TABLE0
#define LOAD_TABLE0 \
table0 = svld1(pg1, (uint32_t*)&lut[0]);
// LOAD_TABLE1
#define LOAD_TABLE1 \
table0 = svld1(pg1, (uint32_t*)&lut[1]);
// LOAD_TABLE2
#define LOAD_TABLE2 \
table0 = svld1(pg1, (uint32_t*)&lut[2]);
// LOAD_TABLE3
#define LOAD_TABLE3 \
table0 = svld1(pg1, (uint32_t*)&lut[3]);
// PERMUTE
#define PERMUTE_A64FXf \
Chi_00 = svtbl(Chi_00, table0); \
Chi_01 = svtbl(Chi_01, table0); \
Chi_02 = svtbl(Chi_02, table0); \
Chi_10 = svtbl(Chi_10, table0); \
Chi_11 = svtbl(Chi_11, table0); \
Chi_12 = svtbl(Chi_12, table0);
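// Illustration (added, not part of the original header): svtbl gathers
// elements by index, so one table lookup realizes any PERMUTE_DIRn
// shuffle. A minimal standalone sketch, assuming <arm_sve.h> is in
// scope and a 512-bit vector (16 float lanes):
static inline svfloat32_t permute_halves_f32(svfloat32_t x) {
  const uint32_t idx[16] = {8, 9, 10, 11, 12, 13, 14, 15,
                            0, 1, 2, 3, 4, 5, 6, 7};   // lut[0] above
  svuint32_t table = svld1(svptrue_b32(), idx);
  return svtbl(x, table);   // swap the two 8-element halves
}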
// LOAD_GAUGE
#define LOAD_GAUGE \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
{ \
U_00 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -6 * 64)); \
U_10 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -3 * 64)); \
U_20 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + 0 * 64)); \
U_01 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -5 * 64)); \
U_11 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -2 * 64)); \
U_21 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + 1 * 64)); \
}
// MULT_2SPIN
#define MULT_2SPIN_1_A64FXf(A) \
{ \
const auto & ref(U[sU](A)); uint64_t baseU = (uint64_t)&ref; \
U_00 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -6 * 64)); \
U_10 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -3 * 64)); \
U_20 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + 0 * 64)); \
U_01 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -5 * 64)); \
U_11 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -2 * 64)); \
U_21 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + 1 * 64)); \
UChi_00 = svcmla_x(pg1, zero0, U_00, Chi_00, 0); \
UChi_10 = svcmla_x(pg1, zero0, U_00, Chi_10, 0); \
UChi_01 = svcmla_x(pg1, zero0, U_10, Chi_00, 0); \
UChi_11 = svcmla_x(pg1, zero0, U_10, Chi_10, 0); \
UChi_02 = svcmla_x(pg1, zero0, U_20, Chi_00, 0); \
UChi_12 = svcmla_x(pg1, zero0, U_20, Chi_10, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_00, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_10, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_00, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_10, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_00, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_10, 90); \
U_00 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -4 * 64)); \
U_10 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + -1 * 64)); \
U_20 = svld1(pg1, (float32_t*)(baseU + 2 * 3 * 64 + 2 * 64)); \
}
// MULT_2SPIN_BACKEND
#define MULT_2SPIN_2_A64FXf \
{ \
UChi_00 = svcmla_x(pg1, UChi_00, U_01, Chi_01, 0); \
UChi_10 = svcmla_x(pg1, UChi_10, U_01, Chi_11, 0); \
UChi_01 = svcmla_x(pg1, UChi_01, U_11, Chi_01, 0); \
UChi_11 = svcmla_x(pg1, UChi_11, U_11, Chi_11, 0); \
UChi_02 = svcmla_x(pg1, UChi_02, U_21, Chi_01, 0); \
UChi_12 = svcmla_x(pg1, UChi_12, U_21, Chi_11, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_01, Chi_01, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_01, Chi_11, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_11, Chi_01, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_11, Chi_11, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_21, Chi_01, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_21, Chi_11, 90); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_02, 0); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_12, 0); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_02, 0); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_12, 0); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_02, 0); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_12, 0); \
UChi_00 = svcmla_x(pg1, UChi_00, U_00, Chi_02, 90); \
UChi_10 = svcmla_x(pg1, UChi_10, U_00, Chi_12, 90); \
UChi_01 = svcmla_x(pg1, UChi_01, U_10, Chi_02, 90); \
UChi_11 = svcmla_x(pg1, UChi_11, U_10, Chi_12, 90); \
UChi_02 = svcmla_x(pg1, UChi_02, U_20, Chi_02, 90); \
UChi_12 = svcmla_x(pg1, UChi_12, U_20, Chi_12, 90); \
}
// XP_PROJ
#define XP_PROJ_A64FXf \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_30, 90); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_31, 90); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_32, 90); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_20, 90); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_21, 90); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_22, 90); \
}
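// Illustration (added, not part of the original header): svcadd is the
// entire spin projector here. With rotation 90 it computes a + i*b per
// complex lane, with rotation 270 a - i*b (as in XM_PROJ), so no
// shuffle is needed. A minimal sketch, assuming <arm_sve.h> in scope:
static inline svfloat32_t add_i_times_f32(svbool_t pg,
                                          svfloat32_t a, svfloat32_t b) {
  return svcadd_x(pg, a, b, 90);   // lane-wise a + i*b, as in XP_PROJ
}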
// XP_RECON
#define XP_RECON_A64FXf \
result_20 = svcadd_x(pg1, zero0, UChi_10, 270); \
result_21 = svcadd_x(pg1, zero0, UChi_11, 270); \
result_22 = svcadd_x(pg1, zero0, UChi_12, 270); \
result_30 = svcadd_x(pg1, zero0, UChi_00, 270); \
result_31 = svcadd_x(pg1, zero0, UChi_01, 270); \
result_32 = svcadd_x(pg1, zero0, UChi_02, 270); \
result_00 = UChi_00; \
result_01 = UChi_01; \
result_02 = UChi_02; \
result_10 = UChi_10; \
result_11 = UChi_11; \
result_12 = UChi_12;
// XP_RECON_ACCUM
#define XP_RECON_ACCUM_A64FXf \
result_30 = svcadd_x(pg1, result_30, UChi_00, 270); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_31 = svcadd_x(pg1, result_31, UChi_01, 270); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_32 = svcadd_x(pg1, result_32, UChi_02, 270); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_20 = svcadd_x(pg1, result_20, UChi_10, 270); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_21 = svcadd_x(pg1, result_21, UChi_11, 270); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_22 = svcadd_x(pg1, result_22, UChi_12, 270); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// YP_PROJ
#define YP_PROJ_A64FXf \
{ \
Chi_00 = svsub_x(pg1, Chimu_00, Chimu_30); \
Chi_01 = svsub_x(pg1, Chimu_01, Chimu_31); \
Chi_02 = svsub_x(pg1, Chimu_02, Chimu_32); \
Chi_10 = svadd_x(pg1, Chimu_10, Chimu_20); \
Chi_11 = svadd_x(pg1, Chimu_11, Chimu_21); \
Chi_12 = svadd_x(pg1, Chimu_12, Chimu_22); \
}
// ZP_PROJ
#define ZP_PROJ_A64FXf \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_20, 90); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_21, 90); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_22, 90); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_30, 270); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_31, 270); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_32, 270); \
}
// TP_PROJ
#define TP_PROJ_A64FXf \
{ \
Chi_00 = svadd_x(pg1, Chimu_00, Chimu_20); \
Chi_01 = svadd_x(pg1, Chimu_01, Chimu_21); \
Chi_02 = svadd_x(pg1, Chimu_02, Chimu_22); \
Chi_10 = svadd_x(pg1, Chimu_10, Chimu_30); \
Chi_11 = svadd_x(pg1, Chimu_11, Chimu_31); \
Chi_12 = svadd_x(pg1, Chimu_12, Chimu_32); \
}
// XM_PROJ
#define XM_PROJ_A64FXf \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_30, 270); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_31, 270); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_32, 270); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_20, 270); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_21, 270); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_22, 270); \
}
// XM_RECON
#define XM_RECON_A64FXf \
result_20 = svcadd_x(pg1, zero0, UChi_10, 90); \
result_21 = svcadd_x(pg1, zero0, UChi_11, 90); \
result_22 = svcadd_x(pg1, zero0, UChi_12, 90); \
result_30 = svcadd_x(pg1, zero0, UChi_00, 90); \
result_31 = svcadd_x(pg1, zero0, UChi_01, 90); \
result_32 = svcadd_x(pg1, zero0, UChi_02, 90); \
result_00 = UChi_00; \
result_01 = UChi_01; \
result_02 = UChi_02; \
result_10 = UChi_10; \
result_11 = UChi_11; \
result_12 = UChi_12;
// YM_PROJ
#define YM_PROJ_A64FXf \
{ \
Chi_00 = svadd_x(pg1, Chimu_00, Chimu_30); \
Chi_01 = svadd_x(pg1, Chimu_01, Chimu_31); \
Chi_02 = svadd_x(pg1, Chimu_02, Chimu_32); \
Chi_10 = svsub_x(pg1, Chimu_10, Chimu_20); \
Chi_11 = svsub_x(pg1, Chimu_11, Chimu_21); \
Chi_12 = svsub_x(pg1, Chimu_12, Chimu_22); \
}
// ZM_PROJ
#define ZM_PROJ_A64FXf \
{ \
Chi_00 = svcadd_x(pg1, Chimu_00, Chimu_20, 270); \
Chi_01 = svcadd_x(pg1, Chimu_01, Chimu_21, 270); \
Chi_02 = svcadd_x(pg1, Chimu_02, Chimu_22, 270); \
Chi_10 = svcadd_x(pg1, Chimu_10, Chimu_30, 90); \
Chi_11 = svcadd_x(pg1, Chimu_11, Chimu_31, 90); \
Chi_12 = svcadd_x(pg1, Chimu_12, Chimu_32, 90); \
}
// TM_PROJ
#define TM_PROJ_A64FXf \
{ \
Chi_00 = svsub_x(pg1, Chimu_00, Chimu_20); \
Chi_01 = svsub_x(pg1, Chimu_01, Chimu_21); \
Chi_02 = svsub_x(pg1, Chimu_02, Chimu_22); \
Chi_10 = svsub_x(pg1, Chimu_10, Chimu_30); \
Chi_11 = svsub_x(pg1, Chimu_11, Chimu_31); \
Chi_12 = svsub_x(pg1, Chimu_12, Chimu_32); \
}
// XM_RECON_ACCUM
#define XM_RECON_ACCUM_A64FXf \
result_30 = svcadd_x(pg1, result_30, UChi_00, 90); \
result_31 = svcadd_x(pg1, result_31, UChi_01, 90); \
result_32 = svcadd_x(pg1, result_32, UChi_02, 90); \
result_20 = svcadd_x(pg1, result_20, UChi_10, 90); \
result_21 = svcadd_x(pg1, result_21, UChi_11, 90); \
result_22 = svcadd_x(pg1, result_22, UChi_12, 90); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// YP_RECON_ACCUM
#define YP_RECON_ACCUM_A64FXf \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_30 = svsub_x(pg1, result_30, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_31 = svsub_x(pg1, result_31, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_32 = svsub_x(pg1, result_32, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_20 = svadd_x(pg1, result_20, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_21 = svadd_x(pg1, result_21, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_22 = svadd_x(pg1, result_22, UChi_12);
// YM_RECON_ACCUM
#define YM_RECON_ACCUM_A64FXf \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_30 = svadd_x(pg1, result_30, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_31 = svadd_x(pg1, result_31, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_32 = svadd_x(pg1, result_32, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_20 = svsub_x(pg1, result_20, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_21 = svsub_x(pg1, result_21, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_22 = svsub_x(pg1, result_22, UChi_12);
// ZP_RECON_ACCUM
#define ZP_RECON_ACCUM_A64FXf \
result_20 = svcadd_x(pg1, result_20, UChi_00, 270); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_21 = svcadd_x(pg1, result_21, UChi_01, 270); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_22 = svcadd_x(pg1, result_22, UChi_02, 270); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_30 = svcadd_x(pg1, result_30, UChi_10, 90); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_31 = svcadd_x(pg1, result_31, UChi_11, 90); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_32 = svcadd_x(pg1, result_32, UChi_12, 90); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// ZM_RECON_ACCUM
#define ZM_RECON_ACCUM_A64FXf \
result_20 = svcadd_x(pg1, result_20, UChi_00, 90); \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_21 = svcadd_x(pg1, result_21, UChi_01, 90); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_22 = svcadd_x(pg1, result_22, UChi_02, 90); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_30 = svcadd_x(pg1, result_30, UChi_10, 270); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_31 = svcadd_x(pg1, result_31, UChi_11, 270); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_32 = svcadd_x(pg1, result_32, UChi_12, 270); \
result_12 = svadd_x(pg1, result_12, UChi_12);
// TP_RECON_ACCUM
#define TP_RECON_ACCUM_A64FXf \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_20 = svadd_x(pg1, result_20, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_21 = svadd_x(pg1, result_21, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_22 = svadd_x(pg1, result_22, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_30 = svadd_x(pg1, result_30, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_31 = svadd_x(pg1, result_31, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_32 = svadd_x(pg1, result_32, UChi_12);
// TM_RECON_ACCUM
#define TM_RECON_ACCUM_A64FXf \
result_00 = svadd_x(pg1, result_00, UChi_00); \
result_20 = svsub_x(pg1, result_20, UChi_00); \
result_01 = svadd_x(pg1, result_01, UChi_01); \
result_21 = svsub_x(pg1, result_21, UChi_01); \
result_02 = svadd_x(pg1, result_02, UChi_02); \
result_22 = svsub_x(pg1, result_22, UChi_02); \
result_10 = svadd_x(pg1, result_10, UChi_10); \
result_30 = svsub_x(pg1, result_30, UChi_10); \
result_11 = svadd_x(pg1, result_11, UChi_11); \
result_31 = svsub_x(pg1, result_31, UChi_11); \
result_12 = svadd_x(pg1, result_12, UChi_12); \
result_32 = svsub_x(pg1, result_32, UChi_12);
// ZERO_PSI
#define ZERO_PSI_A64FXf \
result_00 = svdup_f32(0.); \
result_01 = svdup_f32(0.); \
result_02 = svdup_f32(0.); \
result_10 = svdup_f32(0.); \
result_11 = svdup_f32(0.); \
result_12 = svdup_f32(0.); \
result_20 = svdup_f32(0.); \
result_21 = svdup_f32(0.); \
result_22 = svdup_f32(0.); \
result_30 = svdup_f32(0.); \
result_31 = svdup_f32(0.); \
result_32 = svdup_f32(0.);
// PREFETCH_RESULT_L2_STORE (prefetch store to L2)
#define PREFETCH_RESULT_L2_STORE_INTERNAL_A64FXf(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PSTL2STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PSTL2STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PSTL2STRM); \
}
// PREFETCH_RESULT_L1_STORE (prefetch store to L1)
#define PREFETCH_RESULT_L1_STORE_INTERNAL_A64FXf(base) \
{ \
svprfd(pg1, (int64_t*)(base + 0), SV_PSTL1STRM); \
svprfd(pg1, (int64_t*)(base + 256), SV_PSTL1STRM); \
svprfd(pg1, (int64_t*)(base + 512), SV_PSTL1STRM); \
}
// ADD_RESULT_INTERNAL
#define ADD_RESULT_INTERNAL_A64FXf \
result_00 = svadd_x(pg1, result_00, Chimu_00); \
result_01 = svadd_x(pg1, result_01, Chimu_01); \
result_02 = svadd_x(pg1, result_02, Chimu_02); \
result_10 = svadd_x(pg1, result_10, Chimu_10); \
result_11 = svadd_x(pg1, result_11, Chimu_11); \
result_12 = svadd_x(pg1, result_12, Chimu_12); \
result_20 = svadd_x(pg1, result_20, Chimu_20); \
result_21 = svadd_x(pg1, result_21, Chimu_21); \
result_22 = svadd_x(pg1, result_22, Chimu_22); \
result_30 = svadd_x(pg1, result_30, Chimu_30); \
result_31 = svadd_x(pg1, result_31, Chimu_31); \
result_32 = svadd_x(pg1, result_32, Chimu_32);
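// ------------------------------------------------------------------
// Usage sketch (added for clarity; hypothetical, not part of the
// header): the Wilson kernels emit these macros in a fixed sequence
// per direction. Assuming the surrounding kernel provides base, U, sU,
// the permute flag perm, and a direction index (Xp and basep below are
// placeholders), one X-forward step looks roughly like:
//
//   MASK_REGS;                       // declare registers, predicates
//   LOAD_CHIMU(base);                // load the full 4-spinor
//   XP_PROJ;                         // spin-project onto a 2-spinor
//   LOAD_TABLE(0); MAYBEPERM(PERMUTE_DIR0, perm);  // boundary shuffle
//   MULT_2SPIN_1(Xp); MULT_2SPIN_2;  // SU(3) multiply the 2-spinor
//   XP_RECON;                        // rebuild the 4-spinor result
//   SAVE_RESULT(base, basep);        // store, prefetch the next site
// ------------------------------------------------------------------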


@@ -0,0 +1,76 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Fujitsu_A64FX_undef.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#undef LOAD_CHIMU
#undef PREFETCH_CHIMU_L1
#undef PREFETCH_GAUGE_L1
#undef PREFETCH_CHIMU_L2
#undef PREFETCH_GAUGE_L2
#undef PREFETCH_GAUGE_L1_INTERNAL
#undef PREFETCH1_CHIMU
#undef PREFETCH_CHIMU
#undef PREFETCH_RESULT_L2_STORE
#undef PREFETCH_RESULT_L1_STORE
#undef LOAD_GAUGE
#undef LOCK_GAUGE
#undef UNLOCK_GAUGE
#undef MASK_REGS
#undef SAVE_RESULT
#undef ADD_RESULT
#undef MULT_2SPIN_1
#undef MULT_2SPIN_2
#undef MAYBEPERM
#undef LOAD_CHI
#undef XP_PROJ
#undef YP_PROJ
#undef ZP_PROJ
#undef TP_PROJ
#undef XM_PROJ
#undef YM_PROJ
#undef ZM_PROJ
#undef TM_PROJ
#undef XP_RECON
#undef XM_RECON
#undef XM_RECON_ACCUM
#undef YM_RECON_ACCUM
#undef ZM_RECON_ACCUM
#undef TM_RECON_ACCUM
#undef XP_RECON_ACCUM
#undef YP_RECON_ACCUM
#undef ZP_RECON_ACCUM
#undef TP_RECON_ACCUM
#undef PERMUTE
#undef PERMUTE_DIR0
#undef PERMUTE_DIR1
#undef PERMUTE_DIR2
#undef PERMUTE_DIR3
#undef LOAD_TABLE
#undef LOAD_TABLE0
#undef LOAD_TABLE1
#undef LOAD_TABLE2
#undef LOAD_TABLE3

Grid/simd/Grid_a64fx-2.h

@@ -0,0 +1,942 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Grid_a64fx-2.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de>
with support from Arm
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
/////////////////////////////////////////////////////
// Using SVE ACLE
/////////////////////////////////////////////////////
static_assert(GEN_SIMD_WIDTH % 64u == 0, "A64FX SIMD vector size is 64 bytes");
NAMESPACE_BEGIN(Grid);
NAMESPACE_BEGIN(Optimization);
// type traits giving the number of elements for each vector type
template <typename T> struct W;
template <> struct W<double> {
constexpr static unsigned int c = GEN_SIMD_WIDTH/16u;
constexpr static unsigned int r = GEN_SIMD_WIDTH/8u;
};
template <> struct W<float> {
constexpr static unsigned int c = GEN_SIMD_WIDTH/8u;
constexpr static unsigned int r = GEN_SIMD_WIDTH/4u;
};
template <> struct W<Integer> {
constexpr static unsigned int r = GEN_SIMD_WIDTH/4u;
};
template <> struct W<uint16_t> {
constexpr static unsigned int c = GEN_SIMD_WIDTH/4u;
constexpr static unsigned int r = GEN_SIMD_WIDTH/2u;
};
template <> struct W<uint64_t> {
constexpr static unsigned int c = GEN_SIMD_WIDTH/16u;
constexpr static unsigned int r = GEN_SIMD_WIDTH/8u;
};
#ifdef ARMCLANGCOMPAT
// SIMD vector immediate types
template <typename T>
struct vec_imm {
alignas(GEN_SIMD_WIDTH) T v[W<T>::r];
};
// SIMD vector types
template <typename T>
struct vec {
alignas(GEN_SIMD_WIDTH) T v[W<T>::r];
vec() = default;
vec(const vec &rhs) { this->operator=(rhs); }
vec(const vec_imm<T> &rhs) {
// v = rhs.v
svst1(svptrue_b8(), (T*)this, svld1(svptrue_b8(), (T*)rhs.v));
}
inline vec &operator=(const vec &rhs) {
// v = rhs.v
svst1(svptrue_b8(), (T*)this, svld1(svptrue_b8(), (T*)rhs.v));
return *this;
};
};
#else // no ARMCLANGCOMPAT
#define vec_imm vec
// SIMD vector types
template <typename T>
struct vec {
alignas(GEN_SIMD_WIDTH) T v[W<T>::r];
};
#endif
typedef vec<float> vecf;
typedef vec<double> vecd;
typedef vec<uint16_t> vech; // half precision comms
typedef vec<Integer> veci;
NAMESPACE_END(Optimization)
NAMESPACE_END(Grid)
// low-level API
NAMESPACE_BEGIN(Grid);
NAMESPACE_BEGIN(Optimization);
template <typename T>
struct acle{};
template <>
struct acle<double>{
typedef svfloat64_t vt;
typedef svfloat64x2_t vt2;
typedef svfloat64x4_t vt4;
typedef float64_t pt;
typedef uint64_t uint;
typedef svuint64_t svuint;
static inline svbool_t pg1(){return svptrue_b64();}
static inline svbool_t pg2(){return svptrue_pat_b64(SV_VL4);}
static inline svbool_t pg4(){return svptrue_pat_b64(SV_VL2);}
static inline vec<uint64_t> tbl_swap(){
//const vec<uint64_t> t = {1, 0, 3, 2, 5, 4, 7, 6};
const vec_imm<uint64_t> t = {1, 0, 3, 2, 5, 4, 7, 6};
return t;
}
static inline vec<uint64_t> tbl0(){
//const vec<uint64_t> t = {4, 5, 6, 7, 0, 1, 2, 3};
const vec_imm<uint64_t> t = {4, 5, 6, 7, 0, 1, 2, 3};
return t;
}
static inline vec<uint64_t> tbl1(){
//const vec<uint64_t> t = {2, 3, 0, 1, 6, 7, 4, 5};
const vec_imm<uint64_t> t = {2, 3, 0, 1, 6, 7, 4, 5};
return t;
}
static inline vec<uint64_t> tbl_exch1a(){ // Exchange1
//const vec<uint64_t> t = {0, 1, 4, 5, 2, 3, 6, 7};
const vec_imm<uint64_t> t = {0, 1, 4, 5, 2, 3, 6, 7};
return t;
}
static inline vec<uint64_t> tbl_exch1b(){ // Exchange1
//const vec<uint64_t> t = {2, 3, 6, 7, 0, 1, 4, 5};
const vec_imm<uint64_t> t = {2, 3, 6, 7, 0, 1, 4, 5};
return t;
}
static inline vec<uint64_t> tbl_exch1c(){ // Exchange1
//const vec<uint64_t> t = {4, 5, 0, 1, 6, 7, 2, 3};
const vec_imm<uint64_t> t = {4, 5, 0, 1, 6, 7, 2, 3};
return t;
}
static inline svbool_t pg_even(){return svzip1_b64(svptrue_b64(), svpfalse_b());}
static inline svbool_t pg_odd() {return svzip1_b64(svpfalse_b(), svptrue_b64());}
static inline svfloat64_t zero(){return svdup_f64(0.);}
};
template <>
struct acle<float>{
typedef svfloat32_t vt;
typedef svfloat32x2_t vt2;
typedef float32_t pt;
typedef uint32_t uint;
typedef svuint32_t svuint;
static inline svbool_t pg1(){return svptrue_b32();}
static inline svbool_t pg2(){return svptrue_pat_b32(SV_VL8);}
// exchange neighboring elements
static inline vec<uint32_t> tbl_swap(){
//const vec<uint32_t> t = {1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14};
const vec_imm<uint32_t> t = {1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14};
return t;
}
static inline vec<uint32_t> tbl0(){
//const vec<uint32_t> t = {8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7};
const vec_imm<uint32_t> t = {8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7};
return t;
}
static inline vec<uint32_t> tbl1(){
//const vec<uint32_t> t = {4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11};
const vec_imm<uint32_t> t = {4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11};
return t;
}
static inline vec<uint32_t> tbl2(){
//const vec<uint32_t> t = {2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9, 14, 15, 12, 13};
const vec_imm<uint32_t> t = {2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9, 14, 15, 12, 13};
return t;
}
static inline vec<uint32_t> tbl_exch1a(){ // Exchange1
//const vec<uint32_t> t = {0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15 };
const vec_imm<uint32_t> t = {0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15 };
return t;
}
static inline vec<uint32_t> tbl_exch1b(){ // Exchange1
//const vec<uint32_t> t = {4, 5, 6, 7, 12, 13, 14, 15, 0, 1, 2, 3, 8, 9, 10, 11 };
const vec_imm<uint32_t> t = {4, 5, 6, 7, 12, 13, 14, 15, 0, 1, 2, 3, 8, 9, 10, 11 };
return t;
}
static inline vec<uint32_t> tbl_exch1c(){ // Exchange1
//const vec<uint32_t> t = {8, 9, 10, 11, 0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7};
const vec_imm<uint32_t> t = {8, 9, 10, 11, 0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7};
return t;
}
static inline svbool_t pg_even(){return svzip1_b32(svptrue_b32(), svpfalse_b());}
static inline svbool_t pg_odd() {return svzip1_b32(svpfalse_b(), svptrue_b32());}
static inline svfloat32_t zero(){return svdup_f32(0.);}
};
template <>
struct acle<uint16_t>{
typedef svfloat16_t vt;
typedef float16_t pt;
typedef uint16_t uint;
typedef svuint16_t svuint;
static inline svbool_t pg1(){return svptrue_b16();}
static inline svbool_t pg2(){return svptrue_pat_b16(SV_VL16);}
static inline svbool_t pg_even(){return svzip1_b16(svptrue_b16(), svpfalse_b());}
static inline svbool_t pg_odd() {return svzip1_b16(svpfalse_b(), svptrue_b16());}
static inline svfloat16_t zero(){return svdup_f16(0.);}
};
template <>
struct acle<Integer>{
typedef svuint32_t vt;
typedef svuint32x2_t vt2;
typedef Integer pt;
typedef uint32_t uint;
typedef svuint32_t svuint;
//static inline svbool_t pg1(){return svptrue_b16();}
static inline svbool_t pg1(){return svptrue_b32();}
static inline svbool_t pg2(){return svptrue_pat_b32(SV_VL8);}
static inline svbool_t pg_even(){return svzip1_b32(svptrue_b32(), svpfalse_b());}
static inline svbool_t pg_odd() {return svzip1_b32(svpfalse_b(), svptrue_b32());}
};
// ---------------------------------------------------
struct Vsplat{
// Complex float
inline vecf operator()(float a, float b){
vecf out;
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a_v = svdup_f32(a);
typename acle<float>::vt b_v = svdup_f32(b);
typename acle<float>::vt r_v = svzip1(a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
// Real float
inline vecf operator()(float a){
vecf out;
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt r_v = svdup_f32(a);
svst1(pg1, out.v, r_v);
return out;
}
// Complex double
inline vecd operator()(double a, double b){
vecd out;
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt a_v = svdup_f64(a);
typename acle<double>::vt b_v = svdup_f64(b);
typename acle<double>::vt r_v = svzip1(a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
// Real double
inline vecd operator()(double a){
vecd out;
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt r_v = svdup_f64(a);
svst1(pg1, out.v, r_v);
return out;
}
// Integer
inline vec<Integer> operator()(Integer a){
vec<Integer> out;
svbool_t pg1 = acle<Integer>::pg1();
// Add check whether Integer is really a uint32_t???
typename acle<Integer>::vt r_v = svdup_u32(a);
svst1(pg1, out.v, r_v);
return out;
}
};
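// Usage sketch (added, hypothetical): splatting a complex value
// interleaves the two broadcasts with svzip1, producing the
// [re, im, re, im, ...] lane layout every complex operation in this
// file assumes.
static inline vecf vsplat_complex_example(){
  Vsplat vsplat;
  return vsplat(1.0f, 2.0f);   // lanes: 1, 2, 1, 2, ...
}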
struct Vstore{
// Real
template <typename T>
inline void operator()(vec<T> a, T *D){
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, (typename acle<T>::pt*)&a.v);
svst1(pg1, D, a_v);
}
};
struct Vstream{
// Real
template <typename T>
inline void operator()(T * a, vec<T> b){
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt b_v = svld1(pg1, b.v);
svstnt1(pg1, a, b_v);
//svst1(pg1, a, b_v);
}
};
struct Vset{
// Complex
template <typename T>
inline vec<T> operator()(std::complex<T> *a){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, (T*)a);
svst1(pg1, out.v, a_v);
return out;
}
// Real
template <typename T>
inline vec<T> operator()(T *a){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a);
svst1(pg1, out.v, a_v);
return out;
}
};
/////////////////////////////////////////////////////
// Arithmetic operations
/////////////////////////////////////////////////////
struct Sum{
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt r_v = svadd_x(pg1, a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct Sub{
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt r_v = svsub_x(pg1, a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct Mult{
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b, vec<T> c){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt c_v = svld1(pg1, c.v);
typename acle<T>::vt r_v = svmla_x(pg1, c_v, a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt r_v = svmul_x(pg1, a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct MultRealPart{
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
// using FCMLA
typename acle<T>::vt z_v = acle<T>::zero();
typename acle<T>::vt r_v = svcmla_x(pg1, z_v, a_v, b_v, 0);
svst1(pg1, out.v, r_v);
return out;
}
};
struct MaddRealPart{
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b, vec<T> c){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt c_v = svld1(pg1, c.v);
// using FCMLA
typename acle<T>::vt r_v = svcmla_x(pg1, c_v, a_v, b_v, 0);
svst1(pg1, out.v, r_v);
return out;
}
};
struct MultComplex{
// Complex a*b
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt z_v = acle<T>::zero();
// using FCMLA
typename acle<T>::vt r_v = svcmla_x(pg1, z_v, a_v, b_v, 0);
r_v = svcmla_x(pg1, r_v, a_v, b_v, 90);
svst1(pg1, out.v, r_v);
return out;
}
};
struct MultAddComplex{
// Complex a*b+c
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b, vec<T> c){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt c_v = svld1(pg1, c.v);
// using FCMLA
typename acle<T>::vt r_v = svcmla_x(pg1, c_v, a_v, b_v, 0);
r_v = svcmla_x(pg1, r_v, a_v, b_v, 90);
svst1(pg1, out.v, r_v);
return out;
}
};
struct Div{
// Real
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, a.v);
typename acle<T>::vt b_v = svld1(pg1, b.v);
typename acle<T>::vt r_v = svdiv_x(pg1, a_v, b_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct Conj{
// Complex
template <typename T>
inline vec<T> operator()(vec<T> a){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
svbool_t pg_odd = acle<T>::pg_odd();
typename acle<T>::vt a_v = svld1(pg1, a.v);
//typename acle<T>::vt r_v = svneg_x(pg_odd, a_v);
typename acle<T>::vt r_v = svneg_m(a_v, pg_odd, a_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct TimesMinusI{
// Complex
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
const vec<typename acle<T>::uint> tbl_swap = acle<T>::tbl_swap();
svbool_t pg1 = acle<T>::pg1();
svbool_t pg_odd = acle<T>::pg_odd();
typename acle<T>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<T>::vt a_v = svld1(pg1, a.v);
a_v = svtbl(a_v, tbl_swap_v);
typename acle<T>::vt r_v = svneg_m(a_v, pg_odd, a_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct TimesI{
// Complex
template <typename T>
inline vec<T> operator()(vec<T> a, vec<T> b){
vec<T> out;
const vec<typename acle<T>::uint> tbl_swap = acle<T>::tbl_swap();
svbool_t pg1 = acle<T>::pg1();
svbool_t pg_even = acle<T>::pg_even();
typename acle<T>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<T>::vt a_v = svld1(pg1, a.v);
a_v = svtbl(a_v, tbl_swap_v);
//typename acle<T>::vt r_v = svneg_x(pg_even, a_v);
typename acle<T>::vt r_v = svneg_m(a_v, pg_even, a_v);
svst1(pg1, out.v, r_v);
return out;
}
};
struct PrecisionChange {
static inline vech StoH (const vecf &sa,const vecf &sb) {
vech ret;
svbool_t pg1s = acle<float>::pg1();
svbool_t pg1h = acle<uint16_t>::pg1();
typename acle<float>::vt sa_v = svld1(pg1s, sa.v);
typename acle<float>::vt sb_v = svld1(pg1s, sb.v);
typename acle<uint16_t>::vt ha_v = svcvt_f16_x(pg1s, sa_v);
typename acle<uint16_t>::vt hb_v = svcvt_f16_x(pg1s, sb_v);
typename acle<uint16_t>::vt r_v = svuzp1(ha_v, hb_v);
svst1(pg1h, (typename acle<uint16_t>::pt*)&ret.v, r_v);
return ret;
}
static inline void HtoS(vech h,vecf &sa,vecf &sb) {
svbool_t pg1h = acle<uint16_t>::pg1();
svbool_t pg1s = acle<float>::pg1();
typename acle<uint16_t>::vt h_v = svld1(pg1h, (typename acle<uint16_t>::pt*)&h.v);
typename acle<uint16_t>::vt ha_v = svzip1(h_v, h_v);
typename acle<uint16_t>::vt hb_v = svzip2(h_v, h_v);
typename acle<float>::vt sa_v = svcvt_f32_x(pg1s, ha_v);
typename acle<float>::vt sb_v = svcvt_f32_x(pg1s, hb_v);
svst1(pg1s, sa.v, sa_v);
svst1(pg1s, sb.v, sb_v);
}
static inline vecf DtoS (vecd a,vecd b) {
vecf ret;
svbool_t pg1d = acle<double>::pg1();
svbool_t pg1s = acle<float>::pg1();
typename acle<double>::vt a_v = svld1(pg1d, a.v);
typename acle<double>::vt b_v = svld1(pg1d, b.v);
typename acle<float>::vt sa_v = svcvt_f32_x(pg1d, a_v);
typename acle<float>::vt sb_v = svcvt_f32_x(pg1d, b_v);
typename acle<float>::vt r_v = svuzp1(sa_v, sb_v);
svst1(pg1s, ret.v, r_v);
return ret;
}
static inline void StoD (vecf s,vecd &a,vecd &b) {
svbool_t pg1s = acle<float>::pg1();
svbool_t pg1d = acle<double>::pg1();
typename acle<float>::vt s_v = svld1(pg1s, s.v);
typename acle<float>::vt sa_v = svzip1(s_v, s_v);
typename acle<float>::vt sb_v = svzip2(s_v, s_v);
typename acle<double>::vt a_v = svcvt_f64_x(pg1d, sa_v);
typename acle<double>::vt b_v = svcvt_f64_x(pg1d, sb_v);
svst1(pg1d, a.v, a_v);
svst1(pg1d, b.v, b_v);
}
static inline vech DtoH (vecd a,vecd b,vecd c,vecd d) {
vech ret;
svbool_t pg1d = acle<double>::pg1();
svbool_t pg1h = acle<uint16_t>::pg1();
typename acle<double>::vt a_v = svld1(pg1d, a.v);
typename acle<double>::vt b_v = svld1(pg1d, b.v);
typename acle<double>::vt c_v = svld1(pg1d, c.v);
typename acle<double>::vt d_v = svld1(pg1d, d.v);
typename acle<uint16_t>::vt ha_v = svcvt_f16_x(pg1d, a_v);
typename acle<uint16_t>::vt hb_v = svcvt_f16_x(pg1d, b_v);
typename acle<uint16_t>::vt hc_v = svcvt_f16_x(pg1d, c_v);
typename acle<uint16_t>::vt hd_v = svcvt_f16_x(pg1d, d_v);
typename acle<uint16_t>::vt hab_v = svuzp1(ha_v, hb_v);
typename acle<uint16_t>::vt hcd_v = svuzp1(hc_v, hd_v);
typename acle<uint16_t>::vt r_v = svuzp1(hab_v, hcd_v);
svst1(pg1h, (typename acle<uint16_t>::pt*)&ret.v, r_v);
return ret;
/*
vecf sa,sb;
sa = DtoS(a,b);
sb = DtoS(c,d);
return StoH(sa,sb);
*/
}
static inline void HtoD(vech h,vecd &a,vecd &b,vecd &c,vecd &d) {
svbool_t pg1h = acle<uint16_t>::pg1();
svbool_t pg1d = acle<double>::pg1();
typename acle<uint16_t>::vt h_v = svld1(pg1h, (typename acle<uint16_t>::pt*)&h.v);
typename acle<uint16_t>::vt sa_v = svzip1(h_v, h_v);
typename acle<uint16_t>::vt sb_v = svzip2(h_v, h_v);
typename acle<uint16_t>::vt da_v = svzip1(sa_v, sa_v);
typename acle<uint16_t>::vt db_v = svzip2(sa_v, sa_v);
typename acle<uint16_t>::vt dc_v = svzip1(sb_v, sb_v);
typename acle<uint16_t>::vt dd_v = svzip2(sb_v, sb_v);
typename acle<double>::vt a_v = svcvt_f64_x(pg1d, da_v);
typename acle<double>::vt b_v = svcvt_f64_x(pg1d, db_v);
typename acle<double>::vt c_v = svcvt_f64_x(pg1d, dc_v);
typename acle<double>::vt d_v = svcvt_f64_x(pg1d, dd_v);
svst1(pg1d, a.v, a_v);
svst1(pg1d, b.v, b_v);
svst1(pg1d, c.v, c_v);
svst1(pg1d, d.v, d_v);
/*
vecf sa,sb;
HtoS(h,sa,sb);
StoD(sa,a,b);
StoD(sb,c,d);
*/
}
};
struct Exchange{
// Exchange0 is valid for arbitrary SVE vector length
template <typename T>
static inline void Exchange0(vec<T> &out1, vec<T> &out2, const vec<T> &in1, const vec<T> &in2){
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a1_v = svld1(pg1, in1.v);
typename acle<T>::vt a2_v = svld1(pg1, in2.v);
typename acle<T>::vt r1_v = svext(a1_v, a1_v, (uint64_t)W<T>::c);
r1_v = svext(r1_v, a2_v, (uint64_t)W<T>::c);
typename acle<T>::vt r2_v = svext(a2_v, a2_v, (uint64_t)W<T>::c);
r2_v = svext(a1_v, r2_v, (uint64_t)W<T>::c);
svst1(pg1, out1.v, r1_v);
svst1(pg1, out2.v, r2_v);
}
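// Note (added for clarity): svext(a, b, n) concatenates a and b and
// extracts a full vector starting at lane n, so the two svext calls
// per output splice halves without an index table: out1 collects the
// lower halves of in1 and in2, out2 their upper halves.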
template <typename T>
static inline void Exchange1(vec<T> &out1, vec<T> &out2, const vec<T> &in1, const vec<T> &in2){
// this one is tricky; svtrn2q* from SVE2 fits best, but it is not available in SVE1
// alternative: use 4-el structure; expect translation into ldp + stp -> SFI
svbool_t pg1 = acle<T>::pg1();
const vec<typename acle<T>::uint> tbl_exch1a = acle<T>::tbl_exch1a();
const vec<typename acle<T>::uint> tbl_exch1b = acle<T>::tbl_exch1b();
const vec<typename acle<T>::uint> tbl_exch1c = acle<T>::tbl_exch1c();
typename acle<T>::svuint tbl_exch1a_v = svld1(pg1, tbl_exch1a.v);
typename acle<T>::svuint tbl_exch1b_v = svld1(pg1, tbl_exch1b.v);
typename acle<T>::svuint tbl_exch1c_v = svld1(pg1, tbl_exch1c.v);
typename acle<T>::vt in1_v = svld1(pg1, in1.v);
typename acle<T>::vt in2_v = svld1(pg1, in2.v);
typename acle<T>::vt a1_v = svtbl(in1_v, tbl_exch1a_v);
typename acle<T>::vt a2_v = svtbl(in2_v, tbl_exch1b_v);
typename acle<T>::vt b1_v = svext(a2_v, a1_v, (uint64_t)(W<T>::r / 2u));
typename acle<T>::vt b2_v = svext(a1_v, a2_v, (uint64_t)(W<T>::r / 2u));
typename acle<T>::vt out1_v = svtbl(b1_v, tbl_exch1c_v);
typename acle<T>::vt out2_v = svtbl(b2_v, tbl_exch1a_v);
svst1(pg1, out1.v, out1_v);
svst1(pg1, out2.v, out2_v);
}
template <typename T>
static inline void Exchange2(vec<T> &out1, vec<T> &out2, const vec<T> &in1, const vec<T> &in2){
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt a1_v = svld1(pg1, (typename acle<double>::pt*)in1.v);
typename acle<double>::vt a2_v = svld1(pg1, (typename acle<double>::pt*)in2.v);
typename acle<double>::vt r1_v = svtrn1(a1_v, a2_v);
typename acle<double>::vt r2_v = svtrn2(a1_v, a2_v);
svst1(pg1, (typename acle<double>::pt*)out1.v, r1_v);
svst1(pg1, (typename acle<double>::pt*)out2.v, r2_v);
}
static inline void Exchange3(vecf &out1, vecf &out2, const vecf &in1, const vecf &in2){
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a1_v = svld1(pg1, in1.v);
typename acle<float>::vt a2_v = svld1(pg1, in2.v);
typename acle<float>::vt r1_v = svtrn1(a1_v, a2_v);
typename acle<float>::vt r2_v = svtrn2(a1_v, a2_v);
svst1(pg1, out1.v, r1_v);
svst1(pg1, out2.v, r2_v);
}
static inline void Exchange3(vecd &out1, vecd &out2, const vecd &in1, const vecd &in2){
assert(0);
return;
}
};
struct Permute{
// Permute0 is valid for any SVE vector width
template <typename T>
static inline vec<T> Permute0(vec<T> in) {
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, in.v);
typename acle<T>::vt r_v = svext(a_v, a_v, (uint64_t)(W<T>::r / 2u));
svst1(pg1, out.v, r_v);
return out;
}
static inline vecd Permute1(vecd in) {
vecd out;
const vec<typename acle<double>::uint> tbl_swap = acle<double>::tbl1();
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt a_v = svld1(pg1, in.v);
typename acle<double>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<double>::vt r_v = svtbl(a_v, tbl_swap_v);
svst1(pg1, out.v, r_v);
return out;
}
static inline vecf Permute1(vecf in) {
vecf out;
const vec<typename acle<float>::uint> tbl_swap = acle<float>::tbl1();
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a_v = svld1(pg1, in.v);
typename acle<float>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<float>::vt r_v = svtbl(a_v, tbl_swap_v);
svst1(pg1, out.v, r_v);
return out;
}
static inline vecd Permute2(vecd in) {
vecd out;
const vec<typename acle<double>::uint> tbl_swap = acle<double>::tbl_swap();
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt a_v = svld1(pg1, in.v);
typename acle<double>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<double>::vt r_v = svtbl(a_v, tbl_swap_v);
svst1(pg1, out.v, r_v);
return out;
}
static inline vecf Permute2(vecf in) {
vecf out;
const vec<typename acle<float>::uint> tbl_swap = acle<float>::tbl2();
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a_v = svld1(pg1, in.v);
typename acle<float>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<float>::vt r_v = svtbl(a_v, tbl_swap_v);
svst1(pg1, out.v, r_v);
return out;
}
static inline vecf Permute3(vecf in) {
vecf out;
const vec<typename acle<float>::uint> tbl_swap = acle<float>::tbl_swap();
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a_v = svld1(pg1, in.v);
typename acle<float>::svuint tbl_swap_v = svld1(pg1, tbl_swap.v);
typename acle<float>::vt r_v = svtbl(a_v, tbl_swap_v);
svst1(pg1, out.v, r_v);
return out;
}
static inline vecd Permute3(vecd in) {
return in;
}
};
struct Rotate{
template <int n, typename T> static inline vec<T> tRotate(vec<T> in){
vec<T> out;
svbool_t pg1 = acle<T>::pg1();
typename acle<T>::vt a_v = svld1(pg1, in.v);
typename acle<T>::vt r_v = svext(a_v, a_v, (uint64_t)(n%W<T>::r));
svst1(pg1, out.v, r_v);
return out;
}
template <typename T>
static inline vec<T> rotate(vec<T> in, int n){
switch(n){
case 0: return tRotate<0, T>(in); break;
case 1: return tRotate<1, T>(in); break;
case 2: return tRotate<2, T>(in); break;
case 3: return tRotate<3, T>(in); break;
case 4: return tRotate<4, T>(in); break;
case 5: return tRotate<5, T>(in); break;
case 6: return tRotate<6, T>(in); break;
case 7: return tRotate<7, T>(in); break;
case 8: return tRotate<8, T>(in); break;
case 9: return tRotate<9, T>(in); break;
case 10: return tRotate<10, T>(in); break;
case 11: return tRotate<11, T>(in); break;
case 12: return tRotate<12, T>(in); break;
case 13: return tRotate<13, T>(in); break;
case 14: return tRotate<14, T>(in); break;
case 15: return tRotate<15, T>(in); break;
default: assert(0);
}
}
};
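// Note (added for clarity): the SVE EXT instruction encodes its lane
// offset as an immediate, so rotate() cannot forward a runtime n
// directly; the switch dispatches to one compile-time tRotate<n>
// instance per possible shift instead.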
// tree-based reduction
#define svred(pg, v)\
svaddv(pg, v);
// left-to-right reduction
// #define svred(pg, v)\
// svadda(pg, 0, v)
template <typename Out_type, typename In_type>
struct Reduce{
//Need templated class to overload output type
//General form must generate error if compiled
inline Out_type operator()(In_type in){
printf("Error, using wrong Reduce function\n");
exit(1);
return 0;
}
};
//Complex float Reduce
template <>
inline Grid::ComplexF Reduce<Grid::ComplexF, vecf>::operator()(vecf in){
svbool_t pg1 = acle<float>::pg1();
svbool_t pg_even = acle<float>::pg_even();
svbool_t pg_odd = acle<float>::pg_odd();
typename acle<float>::vt a_v = svld1(pg1, in.v);
float a = svred(pg_even, a_v);
float b = svred(pg_odd, a_v);
return Grid::ComplexF(a, b);
}
//Real float Reduce
template <>
inline Grid::RealF Reduce<Grid::RealF, vecf>::operator()(vecf in){
svbool_t pg1 = acle<float>::pg1();
typename acle<float>::vt a_v = svld1(pg1, in.v);
float a = svred(pg1, a_v);
return a;
}
//Complex double Reduce
template <>
inline Grid::ComplexD Reduce<Grid::ComplexD, vecd>::operator()(vecd in){
svbool_t pg1 = acle<double>::pg1();
svbool_t pg_even = acle<double>::pg_even();
svbool_t pg_odd = acle<double>::pg_odd();
typename acle<double>::vt a_v = svld1(pg1, in.v);
double a = svred(pg_even, a_v);
double b = svred(pg_odd, a_v);
return Grid::ComplexD(a, b);
}
//Real double Reduce
template <>
inline Grid::RealD Reduce<Grid::RealD, vecd>::operator()(vecd in){
svbool_t pg1 = acle<double>::pg1();
typename acle<double>::vt a_v = svld1(pg1, in.v);
double a = svred(pg1, a_v);
return a;
}
//Integer Reduce
template <>
inline Integer Reduce<Integer, veci>::operator()(veci in){
svbool_t pg1 = acle<Integer>::pg1();
typename acle<Integer>::vt a_v = svld1(pg1, in.v);
Integer a = svred(pg1, a_v);
return a;
}
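// Illustration (added, not part of the original file): the even/odd
// predicates split the complex reduction into two independent tree
// sums, one over real lanes and one over imaginary lanes, exactly as
// the complex Reduce specializations above do:
static inline Grid::ComplexF reduce_complex_f32_example(svfloat32_t v){
  float re = svaddv(acle<float>::pg_even(), v);   // sum of real lanes
  float im = svaddv(acle<float>::pg_odd(),  v);   // sum of imag lanes
  return Grid::ComplexF(re, im);
}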
#undef svred
#undef vec_imm
NAMESPACE_END(Optimization)
//////////////////////////////////////////////////////////////////////////////////////
// Here assign types
typedef Optimization::vech SIMD_Htype; // Reduced precision type
typedef Optimization::vecf SIMD_Ftype; // Single precision type
typedef Optimization::vecd SIMD_Dtype; // Double precision type
typedef Optimization::veci SIMD_Itype; // Integer type
// prefetch utilities
inline void v_prefetch0(int size, const char *ptr){};
inline void prefetch_HINT_T0(const char *ptr){};
// Function name aliases
typedef Optimization::Vsplat VsplatSIMD;
typedef Optimization::Vstore VstoreSIMD;
typedef Optimization::Vset VsetSIMD;
typedef Optimization::Vstream VstreamSIMD;
template <typename S, typename T> using ReduceSIMD = Optimization::Reduce<S,T>;
// Arithmetic operations
typedef Optimization::Sum SumSIMD;
typedef Optimization::Sub SubSIMD;
typedef Optimization::Div DivSIMD;
typedef Optimization::Mult MultSIMD;
typedef Optimization::MultComplex MultComplexSIMD;
typedef Optimization::MultAddComplex MultAddComplexSIMD;
typedef Optimization::MultRealPart MultRealPartSIMD;
typedef Optimization::MaddRealPart MaddRealPartSIMD;
typedef Optimization::Conj ConjSIMD;
typedef Optimization::TimesMinusI TimesMinusISIMD;
typedef Optimization::TimesI TimesISIMD;
NAMESPACE_END(Grid)


@@ -0,0 +1,769 @@
/*************************************************************************************
Grid physics library, www.github.com/paboyle/Grid
Source file: Grid_a64fx-fixedsize.h
Copyright (C) 2020
Author: Nils Meyer <nils.meyer@ur.de> Regensburg University
with support from Arm
Richard Sandiford <richard.sandiford@arm.com>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
/////////////////////////////////////////////////////
// Using SVE ACLE with fixed-size data types
/////////////////////////////////////////////////////
// gcc 10 features
#if __ARM_FEATURE_SVE_BITS==512
/* gcc 10.0.1 and gcc 10.1 bug using ACLE data types CAS-159553-Y1K4C6
workaround: use gcc's internal data types, bugfix expected for gcc 10.2
typedef svbool_t pred __attribute__((arm_sve_vector_bits(512)));
typedef svfloat16_t vech __attribute__((arm_sve_vector_bits(512)));
typedef svfloat32_t vecf __attribute__((arm_sve_vector_bits(512)));
typedef svfloat64_t vecd __attribute__((arm_sve_vector_bits(512)));
typedef svuint32_t veci __attribute__((arm_sve_vector_bits(512)));
typedef svuint32_t lutf __attribute__((arm_sve_vector_bits(512))); // LUTs for float
typedef svuint64_t lutd __attribute__((arm_sve_vector_bits(512))); // LUTs for double
*/
typedef __SVBool_t pred __attribute__((arm_sve_vector_bits(512)));
typedef __SVFloat16_t vech __attribute__((arm_sve_vector_bits(512)));
typedef __SVFloat32_t vecf __attribute__((arm_sve_vector_bits(512)));
typedef __SVFloat64_t vecd __attribute__((arm_sve_vector_bits(512)));
typedef __SVUint32_t veci __attribute__((arm_sve_vector_bits(512)));
typedef __SVUint32_t lutf __attribute__((arm_sve_vector_bits(512))); // LUTs for float
typedef __SVUint64_t lutd __attribute__((arm_sve_vector_bits(512))); // LUTs for double
#else
#error "Oops. Illegal SVE vector size!?"
#endif /* __ARM_FEATURE_SVE_BITS */
// low-level API
NAMESPACE_BEGIN(Grid);
NAMESPACE_BEGIN(Optimization);
// convenience union types for tables eliminating loads
union ulutf {
lutf v;
uint32_t s[16];
};
union ulutd {
lutd v;
uint64_t s[8];
};
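// Note (added for clarity): with fixed-size ACLE types the lookup
// tables can be expressed as unions and returned as constants, so
// tbl_swap() and friends below compile to plain literal loads instead
// of the svld1 round-trips the VLA code in Grid_a64fx-2.h needs.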
template <typename T>
struct acle{};
template <>
struct acle<double>{
static inline lutd tbl_swap(){
const ulutd t = { .s = {1, 0, 3, 2, 5, 4, 7, 6} };
return t.v;
}
static inline lutd tbl0(){
const ulutd t = { .s = {4, 5, 6, 7, 0, 1, 2, 3} };
return t.v;
}
static inline lutd tbl1(){
const ulutd t = { .s = {2, 3, 0, 1, 6, 7, 4, 5} };
return t.v;
}
static inline lutd tbl_exch1a(){ // Exchange1
const ulutd t = { .s = {0, 1, 4, 5, 2, 3, 6, 7} };
return t.v;
}
static inline lutd tbl_exch1b(){ // Exchange1
const ulutd t = { .s = {2, 3, 6, 7, 0, 1, 4, 5} };
return t.v;
}
static inline lutd tbl_exch1c(){ // Exchange1
const ulutd t = { .s = {4, 5, 0, 1, 6, 7, 2, 3} };
return t.v;
}
static inline pred pg1(){return svptrue_b64();}
static inline pred pg_even(){return svzip1_b64(svptrue_b64(), svpfalse_b());}
static inline pred pg_odd() {return svzip1_b64(svpfalse_b(), svptrue_b64());}
static inline vecd zero(){return svdup_f64(0.);}
};
template <>
struct acle<float>{
// exchange neighboring elements
static inline lutf tbl_swap(){
const ulutf t = { .s = {1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14} };
return t.v;
}
static inline lutf tbl0(){
const ulutf t = { .s = {8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7} };
return t.v;
}
static inline lutf tbl1(){
const ulutf t = { .s = {4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11} };
return t.v;
}
static inline lutf tbl2(){
const ulutf t = { .s = {2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9, 14, 15, 12, 13} };
return t.v;
}
static inline lutf tbl_exch1a(){ // Exchange1
const ulutf t = { .s = {0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15 } };
return t.v;
}
static inline lutf tbl_exch1b(){ // Exchange1
const ulutf t = { .s = {4, 5, 6, 7, 12, 13, 14, 15, 0, 1, 2, 3, 8, 9, 10, 11 } };
return t.v;
}
static inline lutf tbl_exch1c(){ // Exchange1
const ulutf t = { .s = {8, 9, 10, 11, 0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7} };
return t.v;
}
static inline pred pg1(){return svptrue_b32();}
static inline pred pg_even(){return svzip1_b32(svptrue_b32(), svpfalse_b());}
static inline pred pg_odd() {return svzip1_b32(svpfalse_b(), svptrue_b32());}
static inline vecf zero(){return svdup_f32(0.);}
};
template <>
struct acle<uint16_t>{
static inline pred pg1(){return svptrue_b16();}
static inline pred pg_even(){return svzip1_b16(svptrue_b16(), svpfalse_b());}
static inline pred pg_odd() {return svzip1_b16(svpfalse_b(), svptrue_b16());}
static inline vech zero(){return svdup_f16(0.);}
};
template <>
struct acle<Integer>{
//static inline svbool_t pg1(){return svptrue_b16();}
static inline pred pg1(){return svptrue_b32();}
static inline pred pg_even(){return svzip1_b32(svptrue_b32(), svpfalse_b());}
static inline pred pg_odd() {return svzip1_b32(svpfalse_b(), svptrue_b32());}
};
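// Example (sketch): with complex data stored as interleaved {re, im, ...}
// pairs, pg_even selects the real lanes and pg_odd the imaginary lanes, e.g.
//   pred pe = acle<double>::pg_even();
//   vecd re_only = svsel(pe, x, acle<double>::zero());  // keep re, zero im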
// ---------------------------------------------------
struct Vsplat{
// Complex float
inline vecf operator()(float a, float b){
vecf a_v = svdup_f32(a);
vecf b_v = svdup_f32(b);
return svzip1(a_v, b_v);
}
// Real float
inline vecf operator()(float a){
return svdup_f32(a);
}
// Complex double
inline vecd operator()(double a, double b){
vecd a_v = svdup_f64(a);
vecd b_v = svdup_f64(b);
return svzip1(a_v, b_v);
}
// Real double
inline vecd operator()(double a){
return svdup_f64(a);
}
// Integer
inline veci operator()(Integer a){
return svdup_u32(a);
}
};
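// Example (sketch): Vsplat()(1.0f, 2.0f) broadcasts both scalars and
// interleaves them with svzip1, giving the lane pattern {1, 2, 1, 2, ...},
// i.e. eight copies of the complex number (1 + 2i) in a 512-bit register.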
struct Vstore{
// Real float
inline void operator()(vecf a, float *D){
pred pg1 = acle<float>::pg1();
svst1(pg1, D, a);
}
// Real double
inline void operator()(vecd a, double *D){
pred pg1 = acle<double>::pg1();
svst1(pg1, D, a);
}
// Integer
inline void operator()(veci a, Integer *D){
pred pg1 = acle<Integer>::pg1();
svst1(pg1, D, a);
}
};
struct Vstream{
// Real float
inline void operator()(float * a, vecf b){
pred pg1 = acle<float>::pg1();
svstnt1(pg1, a, b);
//svst1(pg1, a, b);
}
// Real double
inline void operator()(double * a, vecd b){
pred pg1 = acle<double>::pg1();
svstnt1(pg1, a, b);
//svst1(pg1, a, b);
}
};
struct Vset{
// Complex float
inline vecf operator()(Grid::ComplexF *a){
pred pg1 = acle<float>::pg1();
return svld1(pg1, (float*)a);
}
// Complex double
inline vecd operator()(Grid::ComplexD *a){
pred pg1 = acle<double>::pg1();
return svld1(pg1, (double*)a);
}
// Real float
inline vecf operator()(float *a){
pred pg1 = acle<float>::pg1();
return svld1(pg1, a);
}
// Real double
inline vecd operator()(double *a){
pred pg1 = acle<double>::pg1();
return svld1(pg1, a);
}
// Integer
inline veci operator()(Integer *a){
pred pg1 = acle<Integer>::pg1();
return svld1(pg1, a);
}
};
/////////////////////////////////////////////////////
// Arithmetic operations
/////////////////////////////////////////////////////
struct Sum{
// Complex/real float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
return svadd_x(pg1, a, b);
}
// Complex/real double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
return svadd_x(pg1, a, b);
}
// Integer
inline veci operator()(veci a, veci b){
pred pg1 = acle<Integer>::pg1();
return svadd_x(pg1, a, b);
}
};
struct Sub{
// Complex/real float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
return svsub_x(pg1, a, b);
}
// Complex/real double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
return svsub_x(pg1, a, b);
}
// Integer
inline veci operator()(veci a, veci b){
pred pg1 = acle<Integer>::pg1();
return svsub_x(pg1, a, b);
}
};
struct Mult{
// Real float fma
inline vecf operator()(vecf a, vecf b, vecf c){
pred pg1 = acle<float>::pg1();
return svmad_x(pg1, b, c, a);
}
// Real double fma
inline vecd operator()(vecd a, vecd b, vecd c){
pred pg1 = acle<double>::pg1();
return svmad_x(pg1, b, c, a);
}
// Real float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
return svmul_x(pg1, a, b);
}
// Real double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
return svmul_x(pg1, a, b);
}
// Integer
inline veci operator()(veci a, veci b){
pred pg1 = acle<Integer>::pg1();
return svmul_x(pg1, a, b);
}
};
struct MultRealPart{
// Complex float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
// using FCMLA
vecf z_v = acle<float>::zero();
return svcmla_x(pg1, z_v, a, b, 0);
}
// Complex double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
// using FCMLA
vecd z_v = acle<double>::zero();
return svcmla_x(pg1, z_v, a, b, 0);
}
};
struct MaddRealPart{
// Complex float
inline vecf operator()(vecf a, vecf b, vecf c){
pred pg1 = acle<float>::pg1();
// using FCMLA
return svcmla_x(pg1, c, a, b, 0);
}
// Complex double
inline vecd operator()(vecd a, vecd b, vecd c){
pred pg1 = acle<double>::pg1();
// using FCMLA
return svcmla_x(pg1, c, a, b, 0);
}
};
struct MultComplex{
// Complex a*b
// Complex float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
vecf z = acle<float>::zero();
// using FCMLA
vecf r_v = svcmla_x(pg1, z, a, b, 0);
return svcmla_x(pg1, r_v, a, b, 90);
}
// Complex double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
vecd z = acle<double>::zero();
// using FCMLA
vecd r_v = svcmla_x(pg1, z, a, b, 0);
return svcmla_x(pg1, r_v, a, b, 90);
}
};
struct MultAddComplex{
// Complex a*b+c
// Complex float
inline vecf operator()(vecf a, vecf b, vecf c){
pred pg1 = acle<float>::pg1();
// using FCMLA
vecf r_v = svcmla_x(pg1, c, a, b, 0);
return svcmla_x(pg1, r_v, a, b, 90);
}
// Complex double
inline vecd operator()(vecd a, vecd b, vecd c){
pred pg1 = acle<double>::pg1();
// using FCMLA
vecd r_v = svcmla_x(pg1, c, a, b, 0);
return svcmla_x(pg1, r_v, a, b, 90);
}
};
struct Div{
// Real float
inline vecf operator()(vecf a, vecf b){
pred pg1 = acle<float>::pg1();
return svdiv_x(pg1, a, b);
}
// Real double
inline vecd operator()(vecd a, vecd b){
pred pg1 = acle<double>::pg1();
return svdiv_x(pg1, a, b);
}
};
struct Conj{
// Complex float
inline vecf operator()(vecf a){
pred pg_odd = acle<float>::pg_odd();
//return svneg_x(pg_odd, a); this is unsafe
return svneg_m(a, pg_odd, a);
}
// Complex double
inline vecd operator()(vecd a){
pred pg_odd = acle<double>::pg_odd();
//return svneg_x(pg_odd, a); this is unsafe
return svneg_m(a, pg_odd, a);
}
};
struct TimesMinusI{
// Complex float
inline vecf operator()(vecf a, vecf b){
lutf tbl_swap = acle<float>::tbl_swap();
pred pg1 = acle<float>::pg1();
pred pg_odd = acle<float>::pg_odd();
vecf a_v = svtbl(a, tbl_swap);
//return svneg_x(pg_odd, a_v); this is unsafe
return svneg_m(a_v, pg_odd, a_v);
}
// Complex double
inline vecd operator()(vecd a, vecd b){
lutd tbl_swap = acle<double>::tbl_swap();
pred pg1 = acle<double>::pg1();
pred pg_odd = acle<double>::pg_odd();
vecd a_v = svtbl(a, tbl_swap);
//return svneg_x(pg_odd, a_v); this is unsafe
return svneg_m(a_v, pg_odd, a_v);
}
};
struct TimesI{
// Complex float
inline vecf operator()(vecf a, vecf b){
lutf tbl_swap = acle<float>::tbl_swap();
pred pg1 = acle<float>::pg1();
pred pg_even = acle<float>::pg_even();
vecf a_v = svtbl(a, tbl_swap);
//return svneg_x(pg_even, a_v); this is unsafe
return svneg_m(a_v, pg_even, a_v);
}
// Complex double
inline vecd operator()(vecd a, vecd b){
lutd tbl_swap = acle<double>::tbl_swap();
pred pg1 = acle<double>::pg1();
pred pg_even = acle<double>::pg_even();
vecd a_v = svtbl(a, tbl_swap);
//return svneg_x(pg_even, a_v); this is unsafe
return svneg_m(a_v, pg_even, a_v);
}
};
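// Note (sketch): multiplication by +/-i is a lane swap plus a sign flip.
// For x = (re, im): i*x = (-im, re), so TimesI swaps each pair with tbl_swap
// and negates the even lanes; -i*x = (im, -re), so TimesMinusI negates the
// odd lanes instead.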
struct PrecisionChange {
static inline vech StoH (vecf sa, vecf sb) {
pred pg1s = acle<float>::pg1();
vech ha_v = svcvt_f16_x(pg1s, sa);
vech hb_v = svcvt_f16_x(pg1s, sb);
return svuzp1(ha_v, hb_v);
}
static inline void HtoS(vech h,vecf &sa,vecf &sb) {
pred pg1s = acle<float>::pg1();
vech ha_v = svzip1(h, h);
vech hb_v = svzip2(h, h);
sa = svcvt_f32_x(pg1s, ha_v);
sb = svcvt_f32_x(pg1s, hb_v);
}
static inline vecf DtoS (vecd a,vecd b) {
pred pg1d = acle<double>::pg1();
vecf sa_v = svcvt_f32_x(pg1d, a);
vecf sb_v = svcvt_f32_x(pg1d, b);
return svuzp1(sa_v, sb_v);
}
static inline void StoD (vecf s,vecd &a,vecd &b) {
pred pg1d = acle<double>::pg1();
vecf sa_v = svzip1(s, s);
vecf sb_v = svzip2(s, s);
a = svcvt_f64_x(pg1d, sa_v);
b = svcvt_f64_x(pg1d, sb_v);
}
static inline vech DtoH (vecd a,vecd b,vecd c,vecd d) {
pred pg1d = acle<double>::pg1();
pred pg1h = acle<uint16_t>::pg1();
vech ha_v = svcvt_f16_x(pg1d, a);
vech hb_v = svcvt_f16_x(pg1d, b);
vech hc_v = svcvt_f16_x(pg1d, c);
vech hd_v = svcvt_f16_x(pg1d, d);
vech hab_v = svuzp1(ha_v, hb_v);
vech hcd_v = svuzp1(hc_v, hd_v);
return svuzp1(hab_v, hcd_v);
/*
vecf sa,sb;
sa = DtoS(a,b);
sb = DtoS(c,d);
return StoH(sa,sb);
*/
}
static inline void HtoD(vech h,vecd &a,vecd &b,vecd &c,vecd &d) {
pred pg1h = acle<uint16_t>::pg1();
pred pg1d = acle<double>::pg1();
vech sa_v = svzip1(h, h);
vech sb_v = svzip2(h, h);
vech da_v = svzip1(sa_v, sa_v);
vech db_v = svzip2(sa_v, sa_v);
vech dc_v = svzip1(sb_v, sb_v);
vech dd_v = svzip2(sb_v, sb_v);
a = svcvt_f64_x(pg1d, da_v);
b = svcvt_f64_x(pg1d, db_v);
c = svcvt_f64_x(pg1d, dc_v);
d = svcvt_f64_x(pg1d, dd_v);
/*
vecf sa,sb;
HtoS(h,sa,sb);
StoD(sa,a,b);
StoD(sb,c,d);
*/
}
};
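// Example (sketch): DtoS/StoD are mutually inverse up to fp32 rounding,
// e.g. packing two double vectors into one single-precision vector and back:
//   vecf s = PrecisionChange::DtoS(a, b);
//   vecd a2, b2;
//   PrecisionChange::StoD(s, a2, b2);  // a2 ~ a, b2 ~ b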
struct Exchange{
// float
static inline void Exchange0(vecf &out1, vecf &out2, vecf in1, vecf in2){
vecf r1_v = svext(in1, in1, (uint64_t)8u);
vecf r2_v = svext(in2, in2, (uint64_t)8u);
out1 = svext(r1_v, in2, (uint64_t)8u);
out2 = svext(in1, r2_v, (uint64_t)8u);
}
static inline void Exchange1(vecf &out1, vecf &out2, vecf in1, vecf in2){
// this one is tricky; svtrn2q* from SVE2 fits best, but it is not available in SVE1
// alternative: use 4-el structure; expect translation into 4x ldp + 4x stp -> SFI
lutf tbl_exch1a = acle<float>::tbl_exch1a();
lutf tbl_exch1b = acle<float>::tbl_exch1b();
lutf tbl_exch1c = acle<float>::tbl_exch1c();
vecf a1_v = svtbl(in1, tbl_exch1a);
vecf a2_v = svtbl(in2, tbl_exch1b);
vecf b1_v = svext(a2_v, a1_v, (uint64_t)8u);
vecf b2_v = svext(a1_v, a2_v, (uint64_t)8u);
out1 = svtbl(b1_v, tbl_exch1c);
out2 = svtbl(b2_v, tbl_exch1a);
}
static inline void Exchange2(vecf &out1, vecf &out2, vecf in1, vecf in2){
out1 = (vecf)svtrn1((vecd)in1, (vecd)in2);
out2 = (vecf)svtrn2((vecd)in1, (vecd)in2);
}
static inline void Exchange3(vecf &out1, vecf &out2, vecf in1, vecf in2){
out1 = svtrn1(in1, in2);
out2 = svtrn2(in1, in2);
}
// double
static inline void Exchange0(vecd &out1, vecd &out2, vecd in1, vecd in2){
vecd r1_v = svext(in1, in1, (uint64_t)4u);
vecd r2_v = svext(in2, in2, (uint64_t)4u);
out1 = svext(r1_v, in2, (uint64_t)4u);
out2 = svext(in1, r2_v, (uint64_t)4u);
}
static inline void Exchange1(vecd &out1, vecd &out2, vecd in1, vecd in2){
// this one is tricky; svtrn2q* from SVE2 fits best, but it is not available in SVE1
// alternative: use 4-el structure; expect translation into 4x ldp + 4x stp -> SFI
lutd tbl_exch1a = acle<double>::tbl_exch1a();
lutd tbl_exch1b = acle<double>::tbl_exch1b();
lutd tbl_exch1c = acle<double>::tbl_exch1c();
vecd a1_v = svtbl(in1, tbl_exch1a);
vecd a2_v = svtbl(in2, tbl_exch1b);
vecd b1_v = svext(a2_v, a1_v, (uint64_t)4u);
vecd b2_v = svext(a1_v, a2_v, (uint64_t)4u);
out1 = svtbl(b1_v, tbl_exch1c);
out2 = svtbl(b2_v, tbl_exch1a);
}
static inline void Exchange2(vecd &out1, vecd &out2, vecd in1, vecd in2){
out1 = svtrn1(in1, in2);
out2 = svtrn2(in1, in2);
}
static inline void Exchange3(vecd &out1, vecd &out2, vecd in1, vecd in2){
assert(0);
return;
}
};
#undef VECTOR_FOR
struct Permute{
// float
static inline vecf Permute0(vecf in) {
return svext(in, in, (uint64_t)8u);
}
static inline vecf Permute1(vecf in) {
lutf tbl_swap = acle<float>::tbl1();
return svtbl(in, tbl_swap);
}
static inline vecf Permute2(vecf in) {
lutf tbl_swap = acle<float>::tbl2();
return svtbl(in, tbl_swap);
}
static inline vecf Permute3(vecf in) {
lutf tbl_swap = acle<float>::tbl_swap();
return svtbl(in, tbl_swap);
}
// double
static inline vecd Permute0(vecd in) {
return svext(in, in, (uint64_t)(8u / 2u));
}
static inline vecd Permute1(vecd in) {
lutd tbl_swap = acle<double>::tbl1();
return svtbl(in, tbl_swap);
}
static inline vecd Permute2(vecd in) {
lutd tbl_swap = acle<double>::tbl_swap();
return svtbl(in, tbl_swap);
}
static inline vecd Permute3(vecd in) {
return in;
}
};
struct Rotate{
static inline vecf rotate(vecf in, int n){
switch(n){
case 0: return tRotate<0>(in); break;
case 1: return tRotate<1>(in); break;
case 2: return tRotate<2>(in); break;
case 3: return tRotate<3>(in); break;
case 4: return tRotate<4>(in); break;
case 5: return tRotate<5>(in); break;
case 6: return tRotate<6>(in); break;
case 7: return tRotate<7>(in); break;
case 8: return tRotate<8>(in); break;
case 9: return tRotate<9>(in); break;
case 10: return tRotate<10>(in); break;
case 11: return tRotate<11>(in); break;
case 12: return tRotate<12>(in); break;
case 13: return tRotate<13>(in); break;
case 14: return tRotate<14>(in); break;
case 15: return tRotate<15>(in); break;
default: assert(0);
}
}
static inline vecd rotate(vecd in, int n){
switch(n){
case 0: return tRotate<0>(in); break;
case 1: return tRotate<1>(in); break;
case 2: return tRotate<2>(in); break;
case 3: return tRotate<3>(in); break;
case 4: return tRotate<4>(in); break;
case 5: return tRotate<5>(in); break;
case 6: return tRotate<6>(in); break;
case 7: return tRotate<7>(in); break;
default: assert(0);
}
}
template <int n> static inline vecf tRotate(vecf in){
return svext(in, in, (uint64_t)n);
}
template <int n> static inline vecd tRotate(vecd in){
return svext(in, in, (uint64_t)n);
}
};
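// Note (sketch): svext requires a compile-time immediate, hence the switch
// dispatch above. Rotate::rotate(in, n) left-rotates the vector by n
// elements, e.g. for floats rotate({0,1,...,15}, 3) -> {3,4,...,15,0,1,2}.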
// tree-based reduction
#define svred(pg, v)\
svaddv(pg, v);
// left-to-right reduction
// #define svred(pg, v)\
// svadda(pg, 0, v)
template <typename Out_type, typename In_type>
struct Reduce{
//Need templated class to overload output type
//General form must generate error if compiled
inline Out_type operator()(In_type in){
printf("Error, using wrong Reduce function\n");
//exit(1);
return 0;
}
};
//Complex float Reduce
template <>
inline Grid::ComplexF Reduce<Grid::ComplexF, vecf>::operator()(vecf in){
pred pg_even = acle<float>::pg_even();
pred pg_odd = acle<float>::pg_odd();
float a = svred(pg_even, in);
float b = svred(pg_odd, in);
return Grid::ComplexF(a, b);
}
//Real float Reduce
template <>
inline Grid::RealF Reduce<Grid::RealF, vecf>::operator()(vecf in){
pred pg1 = acle<float>::pg1();
return svred(pg1, in);
}
//Complex double Reduce
template <>
inline Grid::ComplexD Reduce<Grid::ComplexD, vecd>::operator()(vecd in){
pred pg_even = acle<double>::pg_even();
pred pg_odd = acle<double>::pg_odd();
double a = svred(pg_even, in);
double b = svred(pg_odd, in);
return Grid::ComplexD(a, b);
}
//Real double Reduce
template <>
inline Grid::RealD Reduce<Grid::RealD, vecd>::operator()(vecd in){
pred pg1 = acle<double>::pg1();
return svred(pg1, in);
}
//Integer Reduce
template <>
inline Integer Reduce<Integer, veci>::operator()(veci in){
pred pg1 = acle<Integer>::pg1();
return svred(pg1, in);
}
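// Example (sketch): for an interleaved complex vector the even lanes sum
// into the real part and the odd lanes into the imaginary part, e.g.
//   Grid::ComplexF z = Reduce<Grid::ComplexF, vecf>()(v);  // (sum re, sum im)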
#undef svred
NAMESPACE_END(Optimization);
//////////////////////////////////////////////////////////////////////////////////////
// Here assign types
typedef vech SIMD_Htype; // Reduced precision type
typedef vecf SIMD_Ftype; // Single precision type
typedef vecd SIMD_Dtype; // Double precision type
typedef veci SIMD_Itype; // Integer type
// prefetch utilities
inline void v_prefetch0(int size, const char *ptr){};
inline void prefetch_HINT_T0(const char *ptr){};
// Function name aliases
typedef Optimization::Vsplat VsplatSIMD;
typedef Optimization::Vstore VstoreSIMD;
typedef Optimization::Vset VsetSIMD;
typedef Optimization::Vstream VstreamSIMD;
template <typename S, typename T> using ReduceSIMD = Optimization::Reduce<S,T>;
// Arithmetic operations
typedef Optimization::Sum SumSIMD;
typedef Optimization::Sub SubSIMD;
typedef Optimization::Div DivSIMD;
typedef Optimization::Mult MultSIMD;
typedef Optimization::MultComplex MultComplexSIMD;
typedef Optimization::MultAddComplex MultAddComplexSIMD;
typedef Optimization::MultRealPart MultRealPartSIMD;
typedef Optimization::MaddRealPart MaddRealPartSIMD;
typedef Optimization::Conj ConjSIMD;
typedef Optimization::TimesMinusI TimesMinusISIMD;
typedef Optimization::TimesI TimesISIMD;
NAMESPACE_END(Grid);


@ -110,9 +110,63 @@ accelerator_inline Grid_half sfw_float_to_half(float ff) {
#ifdef GPU_VEC
#include "Grid_gpu_vec.h"
#endif
/*
#ifdef GEN
#include "Grid_generic.h"
#endif
*/
#ifdef GEN
#if defined(A64FX) || defined(A64FXFIXEDSIZE) // breakout A64FX SVE ACLE here
#include <arm_sve.h>
#if defined(A64FX) // VLA
#pragma message("building A64FX / SVE ACLE VLA")
#if defined(ARMCLANGCOMPAT)
#pragma message("applying data types patch")
#endif
#include "Grid_a64fx-2.h"
#endif
#if defined(A64FXFIXEDSIZE) // fixed size data types
#pragma message("building for A64FX / SVE ACLE fixed size")
#include "Grid_a64fx-fixedsize.h"
#endif
#else
//#pragma message("building GEN") // generic
#include "Grid_generic.h"
#endif
#endif
#ifdef A64FX
#include <arm_sve.h>
#ifdef __ARM_FEATURE_SVE_BITS
//#pragma message("building A64FX SVE VLS")
#include "Grid_a64fx-fixedsize.h"
#else
#pragma message("building A64FX SVE VLA")
#if defined(ARMCLANGCOMPAT)
#pragma message("applying data types patch")
#endif
#include "Grid_a64fx-2.h"
#endif
#endif
/*
#ifdef A64FXVLA
#pragma message("building A64FX VLA")
#if defined(ARMCLANGCOMPAT)
#pragma message("applying data types patch")
#endif
#include <arm_sve.h>
#include "Grid_a64fx-2.h"
#endif
#ifdef A64FXVLS
#pragma message("building A64FX VLS")
#include <arm_sve.h>
#include "Grid_a64fx-fixedsize.h"
#endif
*/
#ifdef SSE4
#include "Grid_sse4.h"
#endif
@ -163,6 +217,12 @@ template <typename T> struct is_complex : public std::false_type {};
template <> struct is_complex<ComplexD> : public std::true_type {};
template <> struct is_complex<ComplexF> : public std::true_type {};
template <typename T> struct is_ComplexD : public std::false_type {};
template <> struct is_ComplexD<ComplexD> : public std::true_type {};
template <typename T> struct is_ComplexF : public std::false_type {};
template <> struct is_ComplexF<ComplexF> : public std::true_type {};
template<typename T, typename V=void> struct is_real : public std::false_type {};
template<typename T> struct is_real<T, typename std::enable_if<std::is_floating_point<T>::value,
void>::type> : public std::true_type {};
@ -223,6 +283,69 @@ public:
return sizeof(Vector_type) / sizeof(Scalar_type);
}
#ifdef ARMCLANGCOMPAT
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<!is_complex<S>::value, S>::type, Vector_type> &&rhs) {
//v = rhs.v;
svst1(svptrue_b8(), (Scalar_type*)this, svld1(svptrue_b8(), (Scalar_type*)&(rhs.v)));
return *this;
};
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<!is_complex<S>::value, S>::type, Vector_type> &rhs) {
//v = rhs.v;
svst1(svptrue_b8(), (Scalar_type*)this, svld1(svptrue_b8(), (Scalar_type*)&(rhs.v)));
return *this;
};
/*
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_complex<S>::value, S>::type, Vector_type> &&rhs) {
//v = rhs.v;
svst1(svptrue_b8(), (int8_t*)this, svld1(svptrue_b8(), (int8_t*)&(rhs.v)));
return *this;
};
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_complex<S>::value, S>::type, Vector_type> &rhs) {
//v = rhs.v;
svst1(svptrue_b8(), (int8_t*)this, svld1(svptrue_b8(), (int8_t*)&(rhs.v)));
return *this;
};
*/
// ComplexF
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_ComplexF<S>::value, S>::type, Vector_type> &&rhs) {
//v = rhs.v;
svst1(svptrue_b32(), (float*)this, svld1(svptrue_b32(), (float*)&(rhs.v)));
return *this;
};
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_ComplexF<S>::value, S>::type, Vector_type> &rhs) {
//v = rhs.v;
svst1(svptrue_b32(), (float*)this, svld1(svptrue_b32(), (float*)&(rhs.v)));
return *this;
};
// ComplexD
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_ComplexD<S>::value, S>::type, Vector_type> &&rhs) {
//v = rhs.v;
svst1(svptrue_b64(), (double*)this, svld1(svptrue_b64(), (double*)&(rhs.v)));
return *this;
};
template <class S = Scalar_type>
accelerator_inline Grid_simd &operator=(const Grid_simd<typename std::enable_if<is_ComplexD<S>::value, S>::type, Vector_type> &rhs) {
//v = rhs.v;
svst1(svptrue_b64(), (double*)this, svld1(svptrue_b64(), (double*)&(rhs.v)));
return *this;
};
#else
accelerator_inline Grid_simd &operator=(const Grid_simd &&rhs) {
v = rhs.v;
return *this;
@ -232,10 +355,23 @@ public:
return *this;
}; // faster than not declaring it and leaving to the compiler
#endif
accelerator Grid_simd() = default;
#ifdef ARMCLANGCOMPAT
template <class S = Scalar_type>
accelerator_inline Grid_simd(const Grid_simd<typename std::enable_if<!is_complex<S>::value, S>::type, Vector_type> &rhs) { this->operator=(rhs); }
template <class S = Scalar_type>
accelerator_inline Grid_simd(const Grid_simd<typename std::enable_if<!is_complex<S>::value, S>::type, Vector_type> &&rhs) { this->operator=(rhs); }
template <class S = Scalar_type>
accelerator_inline Grid_simd(const Grid_simd<typename std::enable_if<is_complex<S>::value, S>::type, Vector_type> &rhs) { this->operator=(rhs); }
template <class S = Scalar_type>
accelerator_inline Grid_simd(const Grid_simd<typename std::enable_if<is_complex<S>::value, S>::type, Vector_type> &&rhs) { this->operator=(rhs); }
#else
accelerator_inline Grid_simd(const Grid_simd &rhs) : v(rhs.v){}; // compiles in movaps
accelerator_inline Grid_simd(const Grid_simd &&rhs) : v(rhs.v){};
#endif
accelerator_inline Grid_simd(const Real a) { vsplat(*this, Scalar_type(a)); };
// Enable if complex type
template <typename S = Scalar_type> accelerator_inline
@ -258,11 +394,20 @@ public:
///////////////////////////////////////////////
// FIXME -- alias this to an accelerator_inline MAC struct.
#if defined(A64FX) || defined(A64FXFIXEDSIZE)
friend accelerator_inline void mac(Grid_simd *__restrict__ y,
const Grid_simd *__restrict__ a,
const Grid_simd *__restrict__ x) {
*y = fxmac((*a), (*x), (*y));
};
#else
friend accelerator_inline void mac(Grid_simd *__restrict__ y,
const Grid_simd *__restrict__ a,
const Grid_simd *__restrict__ x) {
*y = (*a) * (*x) + (*y);
};
#endif
friend accelerator_inline void mult(Grid_simd *__restrict__ y,
const Grid_simd *__restrict__ l,
@ -741,6 +886,27 @@ accelerator_inline Grid_simd<S, V> operator*(Grid_simd<S, V> a, Grid_simd<S, V>
return ret;
};
// ---------------- A64FX MAC -------------------
// Distinguish between complex types and others
#if defined(A64FX) || defined(A64FXFIXEDSIZE)
template <class S, class V, IfComplex<S> = 0>
accelerator_inline Grid_simd<S, V> fxmac(Grid_simd<S, V> a, Grid_simd<S, V> b, Grid_simd<S, V> c) {
Grid_simd<S, V> ret;
ret.v = trinary<V>(a.v, b.v, c.v, MultAddComplexSIMD());
return ret;
};
// Real/Integer types
template <class S, class V, IfNotComplex<S> = 0>
accelerator_inline Grid_simd<S, V> fxmac(Grid_simd<S, V> a, Grid_simd<S, V> b, Grid_simd<S, V> c) {
Grid_simd<S, V> ret;
ret.v = trinary<V>(a.v, b.v, c.v, MultSIMD());
return ret;
};
#endif
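// Usage (sketch, illustrative): on A64FX targets mac(&y, &a, &x) now lowers
// to fxmac, i.e. a fused complex multiply-add per SIMD word:
//   vComplexD y, a, x; // ... initialise ...
//   mac(&y, &a, &x);   // y += a*x via two FCMLA instructions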
// ----------------------------------------------
// Distinguish between complex types and others
template <class S, class V, IfComplex<S> = 0>
accelerator_inline Grid_simd<S, V> operator/(Grid_simd<S, V> a, Grid_simd<S, V> b) {
@ -919,6 +1085,14 @@ accelerator_inline void precisionChange(vRealD *out,vRealF *in,int nvec)
for(int m=0;m*2<nvec;m++){
int n=m*2;
Optimization::PrecisionChange::StoD(in[m].v,out[n].v,out[n+1].v);
// Bug in gcc 10.0.1 and gcc 10.1 using fixed-size SVE ACLE data types CAS-159553-Y1K4C6
// function call results in compile-time error:
// In function void Grid::precisionChange(Grid::vRealD*, Grid::vRealF*, int):
// .../Grid_vector_types.h:961:56: error:
// cannot bind non-const lvalue reference of type vecd& {aka svfloat64_t&}
// to an rvalue of type vecd {aka svfloat64_t}
// 961 | Optimization::PrecisionChange::StoD(in[m].v,out[n].v,out[n+1].v);
// | ~~~~~~~^
}
}
accelerator_inline void precisionChange(vRealD *out,vRealH *in,int nvec)


@ -93,6 +93,11 @@ accelerator_inline ComplexF pow(const ComplexF& r,RealF y){ return(std::pow(r,y)
using std::abs;
using std::pow;
using std::sqrt;
using std::log;
using std::exp;
using std::sin;
using std::cos;
accelerator_inline RealF conjugate(const RealF & r){ return r; }
accelerator_inline RealD conjugate(const RealD & r){ return r; }

Grid/simd/gridverter.py (new executable file, 2377 lines; diff suppressed because it is too large)


@ -272,7 +272,7 @@ public:
static auto traceIndex(const iVector<vtype,N> arg) -> iScalar<RemoveCRV(arg._internal[0])>
{
iScalar<RemoveCRV(arg._internal[0])> ret;
zeroit(ret);
for(int i=0;i<N;i++){
ret._internal = ret._internal+ arg._internal[i];
}


@ -190,6 +190,36 @@ NAMESPACE_BEGIN(Grid);
typedef ComplexD DoublePrecision;
typedef ComplexD DoublePrecision2;
};
#ifdef GRID_CUDA
template<> struct GridTypeMapper<std::complex<float> > : public GridTypeMapper_Base {
typedef std::complex<float> scalar_type;
typedef std::complex<double> scalar_typeD;
typedef scalar_type vector_type;
typedef scalar_typeD vector_typeD;
typedef scalar_type tensor_reduced;
typedef scalar_type scalar_object;
typedef scalar_typeD scalar_objectD;
typedef scalar_type Complexified;
typedef RealF Realified;
typedef scalar_typeD DoublePrecision;
typedef scalar_typeD DoublePrecision2;
};
template<> struct GridTypeMapper<std::complex<double> > : public GridTypeMapper_Base {
typedef std::complex<double> scalar_type;
typedef std::complex<double> scalar_typeD;
typedef scalar_type vector_type;
typedef scalar_typeD vector_typeD;
typedef scalar_type tensor_reduced;
typedef scalar_type scalar_object;
typedef scalar_typeD scalar_objectD;
typedef scalar_type Complexified;
typedef RealD Realified;
typedef scalar_typeD DoublePrecision;
typedef scalar_typeD DoublePrecision2;
};
#endif
template<> struct GridTypeMapper<ComplexD2> : public GridTypeMapper_Base {
typedef ComplexD2 scalar_type;
typedef ComplexD2 scalar_typeD;


@ -16,40 +16,53 @@ void acceleratorInit(void)
char * localRankStr = NULL;
int rank = 0, world_rank=0;
#define ENV_LOCAL_RANK_OMPI "OMPI_COMM_WORLD_LOCAL_RANK"
#define ENV_RANK_OMPI "OMPI_COMM_WORLD_RANK"
#define ENV_LOCAL_RANK_SLURM "SLURM_LOCALID"
#define ENV_RANK_SLURM "SLURM_PROCID"
#define ENV_LOCAL_RANK_MVAPICH "MV2_COMM_WORLD_LOCAL_RANK"
#define ENV_RANK_MVAPICH "MV2_COMM_WORLD_RANK"
// We extract the local rank initialization using an environment variable
if ((localRankStr = getenv(ENV_LOCAL_RANK_OMPI)) != NULL) {
printf("OPENMPI detected\n");
rank = atoi(localRankStr);
} else if ((localRankStr = getenv(ENV_LOCAL_RANK_MVAPICH)) != NULL) {
printf("MVAPICH detected\n");
rank = atoi(localRankStr);
} else if ((localRankStr = getenv(ENV_LOCAL_RANK_SLURM)) != NULL) {
printf("SLURM detected\n");
rank = atoi(localRankStr);
} else {
printf("MPI version is unknown - bad things may happen\n");
}
if ((localRankStr = getenv(ENV_RANK_OMPI )) != NULL) { world_rank = atoi(localRankStr);}
if ((localRankStr = getenv(ENV_RANK_MVAPICH)) != NULL) { world_rank = atoi(localRankStr);}
if ((localRankStr = getenv(ENV_RANK_SLURM )) != NULL) { world_rank = atoi(localRankStr);}
size_t totalDeviceMem=0;
for (int i = 0; i < nDevices; i++) {
#define GPU_PROP_FMT(canMapHostMemory,FMT) printf("AcceleratorCudaInit[%d]: " #canMapHostMemory ": " FMT" \n",rank,prop.canMapHostMemory);
#define GPU_PROP(canMapHostMemory) GPU_PROP_FMT(canMapHostMemory,"%d");
cudaGetDeviceProperties(&gpu_props[i], i);
cudaDeviceProp prop;
prop = gpu_props[i];
totalDeviceMem = prop.totalGlobalMem;
if ( world_rank == 0) {
#ifndef GRID_IBM_SUMMIT
if ( i==rank ) {
printf("AcceleratorCudaInit[%d]: ========================\n",rank);
printf("AcceleratorCudaInit[%d]: Device Number : %d\n", rank,i);
printf("AcceleratorCudaInit[%d]: ========================\n",rank);
printf("AcceleratorCudaInit[%d]: Device identifier: %s\n",rank, prop.name);
GPU_PROP_FMT(totalGlobalMem,"%lld");
GPU_PROP(managedMemory);
GPU_PROP(isMultiGpuBoard);
GPU_PROP(warpSize);
GPU_PROP(pciBusID);
GPU_PROP(pciDeviceID);
}
#endif
// GPU_PROP(unifiedAddressing);
// GPU_PROP(l2CacheSize);
// GPU_PROP(singleToDoublePrecisionPerfRatio);
@ -61,9 +74,9 @@ void acceleratorInit(void)
#ifdef GRID_IBM_SUMMIT
// IBM Jsrun makes cuda Device numbering screwy and not match rank
if ( world_rank == 0 ) printf("AcceleratorCudaInit: IBM Summit or similar - use default device\n");
#else
printf("AcceleratorCudaInit: rank %d setting device to node rank %d\n",world_rank,rank);
cudaSetDevice(rank);
#endif
if ( world_rank == 0 ) printf("AcceleratorCudaInit: ================================================\n");


@ -70,6 +70,7 @@ NAMESPACE_BEGIN(Grid);
//
// Memory management:
//
// int acceleratorIsCommunicable(void *pointer);
// void *acceleratorAllocShared(size_t bytes);
// void acceleratorFreeShared(void *ptr);
//
@ -90,6 +91,7 @@ void acceleratorInit(void);
//////////////////////////////////////////////
#ifdef GRID_CUDA
#include <cuda.h>
#ifdef __CUDA_ARCH__
#define GRID_SIMT
@ -149,6 +151,9 @@ inline void *acceleratorAllocShared(size_t bytes)
ptr = (void *) NULL;
printf(" cudaMallocManaged failed for %d %s \n",bytes,cudaGetErrorString(err));
}
// size_t free,total;
// cudaMemGetInfo(&free,&total);
// std::cout << "Malloc managed "<<bytes<<" "<<free<<"/"<<total<<std::endl;
return ptr;
};
inline void *acceleratorAllocDevice(size_t bytes)
@ -159,12 +164,25 @@ inline void *acceleratorAllocDevice(size_t bytes)
ptr = (void *) NULL;
printf(" cudaMalloc failed for %d %s \n",bytes,cudaGetErrorString(err));
}
// size_t free,total;
// cudaMemGetInfo(&free,&total);
// std::cout << "Malloc device "<<bytes<<" "<<free<<"/"<<total<<std::endl;
return ptr;
};
inline void acceleratorFreeShared(void *ptr){ cudaFree(ptr);};
inline void acceleratorFreeDevice(void *ptr){ cudaFree(ptr);};
inline void acceleratorCopyToDevice(void *from,void *to,size_t bytes) { cudaMemcpy(to,from,bytes, cudaMemcpyHostToDevice);}
inline void acceleratorCopyFromDevice(void *from,void *to,size_t bytes){ cudaMemcpy(to,from,bytes, cudaMemcpyDeviceToHost);}
inline int acceleratorIsCommunicable(void *ptr)
{
int uvm;
auto
cuerr = cuPointerGetAttribute( &uvm, CU_POINTER_ATTRIBUTE_IS_MANAGED, (CUdeviceptr) ptr);
assert(cuerr == cudaSuccess );
if(uvm) return 0;
else return 1;
}
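// Usage (sketch, assuming an MPI send path): query the buffer before
// handing it to the network, and stage UVM-managed memory through an
// explicitly communicable buffer if needed:
//   if (!acceleratorIsCommunicable(buf)) { /* copy to a comms buffer first */ }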
#endif
//////////////////////////////////////////////
@ -219,6 +237,15 @@ inline void acceleratorFreeShared(void *ptr){free(ptr,*theGridAccelerator);};
inline void acceleratorFreeDevice(void *ptr){free(ptr,*theGridAccelerator);};
inline void acceleratorCopyToDevice(void *from,void *to,size_t bytes) { theGridAccelerator->memcpy(to,from,bytes); theGridAccelerator->wait();}
inline void acceleratorCopyFromDevice(void *from,void *to,size_t bytes){ theGridAccelerator->memcpy(to,from,bytes); theGridAccelerator->wait();}
inline int acceleratorIsCommunicable(void *ptr)
{
#if 0
auto uvm = cl::sycl::usm::get_pointer_type(ptr, theGridAccelerator->get_context());
if ( uvm == cl::sycl::usm::alloc::shared ) return 1;
else return 0;
#endif
return 1;
}
#endif
@ -298,6 +325,7 @@ inline void *acceleratorAllocShared(size_t bytes)
return malloc(bytes);
#endif
};
inline int acceleratorIsCommunicable(void *ptr){ return 1; }
inline void *acceleratorAllocDevice(size_t bytes)
{
@ -352,6 +380,7 @@ accelerator_inline int acceleratorSIMTlane(int Nsimd) { return 0; } // CUDA spec
inline void acceleratorCopyToDevice(void *from,void *to,size_t bytes) { memcpy(to,from,bytes);}
inline void acceleratorCopyFromDevice(void *from,void *to,size_t bytes){ memcpy(to,from,bytes);}
inline int acceleratorIsCommunicable(void *ptr){ return 1; }
#ifdef HAVE_MM_MALLOC_H
inline void *acceleratorAllocShared(size_t bytes){return _mm_malloc(bytes,GRID_ALLOC_ALIGN);};
inline void *acceleratorAllocDevice(size_t bytes){return _mm_malloc(bytes,GRID_ALLOC_ALIGN);};


@ -99,10 +99,10 @@ inline std::ostream & operator<<(std::ostream &os, const AcceleratorVector<T,_nd
{
os << "[";
for(int s=0;s<v.size();s++) {
os << v[s];
if( s < (v.size()-1) ){
os << " ";
}
}
os << "]";
return os;


@ -318,6 +318,13 @@ void Grid_init(int *argc,char ***argv)
Grid_debug_handler_init();
}
#if defined(A64FX)
if( GridCmdOptionExists(*argv,*argv+*argc,"--comms-overlap") ){
std::cout << "Option --comms-overlap currently not supported on QPACE4. Exiting." << std::endl;
exit(EXIT_FAILURE);
}
#endif
//////////////////////////////////////////////////////////
// Memory manager
//////////////////////////////////////////////////////////
@ -370,9 +377,7 @@ void Grid_init(int *argc,char ***argv)
std::cout << GridLogMessage << "Mapped stencil comms buffers as MAP_HUGETLB "<<std::endl;
}
MemoryManager::InitMessage();
if( GridCmdOptionExists(*argv,*argv+*argc,"--debug-mem") ){
MemoryProfiler::debug = true;
@ -467,7 +472,7 @@ void Grid_init(int *argc,char ***argv)
if( GridCmdOptionExists(*argv,*argv+*argc,"--lebesgue") ){
LebesgueOrder::UseLebesgueOrder=1;
}
CartesianCommunicator::nCommThreads = 1;
if( GridCmdOptionExists(*argv,*argv+*argc,"--comms-threads") ){
arg= GridCmdOptionPayload(*argv,*argv+*argc,"--comms-threads");
GridCmdOptionInt(arg,CartesianCommunicator::nCommThreads);

SVE_README.txt (new file, 89 lines)

@ -0,0 +1,89 @@
* gcc 10.1 prebuild, QPACE4 interactive login w/ MPI
scl enable gcc-toolset-10 bash
module load mpi/openmpi-aarch64
../configure --enable-simd=A64FX --enable-comms=mpi3 --enable-shm=shmget CXX=mpicxx CC=mpicc
================================== deprecated ================================================
* gcc 10.1 prebuild, QPACE4 interactive login
scl enable gcc-toolset-10 bash
../configure --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=g++ CC=gcc CXXFLAGS="-std=c++11 -march=armv8-a+sve -msve-vector-bits=512 -fno-gcse -DA64FXFIXEDSIZE -DA64FXASM -DDSLASHINTRIN"
* gcc 10.1 prebuild w/ MPI, QPACE4 interactive login
scl enable gcc-toolset-10 bash
module load mpi/openmpi-aarch64
../configure --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=mpi-auto --enable-shm=shmget --enable-openmp CXX=mpicxx CC=mpicc CXXFLAGS="-std=c++11 -march=armv8-a+sve -msve-vector-bits=512 -fno-gcse -DA64FXFIXEDSIZE -DA64FXASM -DDSLASHINTRIN"
------------------------------------------------------------------------------
* armclang 20.2 (qp4)
../configure --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=armclang++ CC=armclang CXXFLAGS="-std=c++11 -mcpu=a64fx -DA64FX -DARMCLANGCOMPAT -DA64FXASM -DDSLASHINTRIN"
------------------------------------------------------------------------------
* gcc 10.0.1 VLA (merlin)
../configure --with-lime=/home/men04359/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=g++-10.0.1 CC=gcc-10.0.1 CXXFLAGS="-std=c++11 -march=armv8-a+sve -msve-vector-bits=512 -fno-gcse -DA64FX -DA64FXASM -DDSLASHINTRIN" LDFLAGS=-static GRID_LDFLAGS=-static MPI_CXXLDFLAGS=-static
* gcc 10.0.1 fixed-size ACLE (merlin)
../configure --with-lime=/home/men04359/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=g++-10.0.1 CC=gcc-10.0.1 CXXFLAGS="-std=c++11 -march=armv8-a+sve -msve-vector-bits=512 -fno-gcse -DA64FXFIXEDSIZE -DA64FXASM -DDSLASHINTRIN"
* gcc 10.0.1 fixed-size ACLE (fjt) w/ MPI
export OMPI_CC=gcc-10.0.1
export OMPI_CXX=g++-10.0.1
export MPICH_CC=gcc-10.0.1
export MPICH_CXX=g++-10.0.1
$ ../configure --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=mpi3 --enable-openmp CXX=mpiFCC CC=mpifcc CXXFLAGS="-std=c++11 -march=armv8-a+sve -msve-vector-bits=512 -fno-gcse -DA64FXFIXEDSIZE -DA64FXASM -DDSLASHINTRIN -DTOFU -I/opt/FJSVxtclanga/tcsds-1.2.25/include/mpi/fujitsu -lrt" LDFLAGS="-L/opt/FJSVxtclanga/tcsds-1.2.25/lib64 -lrt"
--------------------------------------------------------
* armclang 20.0 VLA (merlin)
../configure --with-lime=/home/men04359/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=armclang++ CC=armclang CXXFLAGS="-std=c++11 -fno-unroll-loops -mllvm -vectorizer-min-trip-count=2 -march=armv8-a+sve -DARMCLANGCOMPAT -DA64FX -DA64FXASM -DDSLASHINTRIN" LDFLAGS=-static GRID_LDFLAGS=-static MPI_CXXLDFLAGS=-static
TODO check ARMCLANGCOMPAT
* armclang 20.1 VLA (merlin)
../configure --with-lime=/home/men04359/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=armclang++ CC=armclang CXXFLAGS="-std=c++11 -mcpu=a64fx -DARMCLANGCOMPAT -DA64FX -DA64FXASM -DDSLASHINTRIN" LDFLAGS=-static GRID_LDFLAGS=-static MPI_CXXLDFLAGS=-static
TODO check ARMCLANGCOMPAT
* armclang 20.1 VLA (fjt cluster)
../configure --with-lime=$HOME/local --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp CXX=armclang++ CC=armclang CXXFLAGS="-std=c++11 -mcpu=a64fx -DARMCLANGCOMPAT -DA64FX -DA64FXASM -DDSLASHINTRIN -DTOFU"
TODO check ARMCLANGCOMPAT
* armclang 20.1 VLA w/MPI (fjt cluster)
../configure --with-lime=$HOME/local --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=mpi3 --enable-openmp CXX=mpiFCC CC=mpifcc CXXFLAGS="-std=c++11 -mcpu=a64fx -DA64FX -DA64FXASM -DDSLASHINTRIN -DTOFU -I/opt/FJSVxtclanga/tcsds-1.2.25/include/mpi/fujitsu -lrt" LDFLAGS="-L/opt/FJSVxtclanga/tcsds-1.2.25/lib64"
No ARMCLANGCOMPAT -> still correct ?
--------------------------------------------------------
* Fujitsu fcc
../configure --with-lime=$HOME/grid-a64fx/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=none --enable-openmp --with-mpfr=/home/users/gre/gre-1/grid-a64fx/mpfr-build/install CXX=FCC CC=fcc CXXFLAGS="-Nclang -Kfast -DA64FX -DA64FXASM -DDSLASHINTRIN"
* Fujitsu fcc w/ MPI
../configure --with-lime=$HOME/grid-a64fx/lime/c-lime --without-hdf5 --enable-gen-simd-width=64 --enable-simd=GEN --enable-precision=double --enable-comms=mpi --enable-openmp --with-mpfr=/home/users/gre/gre-1/grid-a64fx/mpfr-build/install CXX=mpiFCC CC=mpifcc CXXFLAGS="-Nclang -Kfast -DA64FX -DA64FXASM -DDSLASHINTRIN -DTOFU"


@ -74,90 +74,6 @@ int main (int argc, char ** argv)
std::vector<double> t_time(Nloop);
time_statistics timestat;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking concurrent halo exchange in "<<nmu<<" dimensions"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
header();
for(int lat=8;lat<=maxlat;lat+=4){
for(int Ls=8;Ls<=8;Ls*=2){
Coordinate latt_size ({lat*mpi_layout[0],
lat*mpi_layout[1],
lat*mpi_layout[2],
lat*mpi_layout[3]});
GridCartesian Grid(latt_size,simd_layout,mpi_layout);
RealD Nrank = Grid._Nprocessors;
RealD Nnode = Grid.NodeCount();
RealD ppn = Nrank/Nnode;
std::vector<Vector<HalfSpinColourVectorD> > xbuf(8);
std::vector<Vector<HalfSpinColourVectorD> > rbuf(8);
int ncomm;
int bytes=lat*lat*lat*Ls*sizeof(HalfSpinColourVectorD);
for(int mu=0;mu<8;mu++){
xbuf[mu].resize(lat*lat*lat*Ls);
rbuf[mu].resize(lat*lat*lat*Ls);
// std::cout << " buffers " << std::hex << (uint64_t)&xbuf[mu][0] <<" " << (uint64_t)&rbuf[mu][0] <<std::endl;
}
for(int i=0;i<Nloop;i++){
double start=usecond();
std::vector<CommsRequest_t> requests;
ncomm=0;
for(int mu=0;mu<4;mu++){
if (mpi_layout[mu]>1 ) {
ncomm++;
int comm_proc=1;
int xmit_to_rank;
int recv_from_rank;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
Grid.SendToRecvFromBegin(requests,
(void *)&xbuf[mu][0],
xmit_to_rank,
(void *)&rbuf[mu][0],
recv_from_rank,
bytes);
comm_proc = mpi_layout[mu]-1;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
Grid.SendToRecvFromBegin(requests,
(void *)&xbuf[mu+4][0],
xmit_to_rank,
(void *)&rbuf[mu+4][0],
recv_from_rank,
bytes);
}
}
Grid.SendToRecvFromComplete(requests);
Grid.Barrier();
double stop=usecond();
t_time[i] = stop-start; // microseconds
}
timestat.statistics(t_time);
double dbytes = bytes*ppn;
double xbytes = dbytes*2.0*ncomm;
double rbytes = xbytes;
double bidibytes = xbytes+rbytes;
std::cout<<GridLogMessage << std::setw(4) << lat<<"\t"<<Ls<<"\t"
<<std::setw(11) << bytes<< std::fixed << std::setprecision(1) << std::setw(7)
<<std::right<< xbytes/timestat.mean<<" "<< xbytes*timestat.err/(timestat.mean*timestat.mean)<< " "
<<xbytes/timestat.max <<" "<< xbytes/timestat.min
<< "\t\t"<<std::setw(7)<< bidibytes/timestat.mean<< " " << bidibytes*timestat.err/(timestat.mean*timestat.mean) << " "
<< bidibytes/timestat.max << " " << bidibytes/timestat.min << std::endl;
}
}
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
@ -206,26 +122,22 @@ int main (int argc, char ** argv)
{
std::vector<CommsRequest_t> requests;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
Grid.SendToRecvFrom((void *)&xbuf[mu][0],
xmit_to_rank,
(void *)&rbuf[mu][0],
recv_from_rank,
bytes);
}
comm_proc = mpi_layout[mu]-1;
{
std::vector<CommsRequest_t> requests;
Grid.ShiftedRanks(mu,comm_proc,xmit_to_rank,recv_from_rank);
Grid.SendToRecvFrom((void *)&xbuf[mu+4][0],
xmit_to_rank,
(void *)&rbuf[mu+4][0],
recv_from_rank,
bytes);
}
}
}


@ -201,20 +201,34 @@ int main (int argc, char ** argv)
double volume=Ls; for(int mu=0;mu<Nd;mu++) volume=volume*latt4[mu];
double flops=single_site_flops*volume*ncall;
auto nsimd = vComplex::Nsimd();
auto simdwidth = sizeof(vComplex);
// RF: Nd Wilson * Ls, Nd gauge * Ls, Nc colors
double data_rf = volume * ((2*Nd+1)*Nd*Nc + 2*Nd*Nc*Nc) * simdwidth / nsimd * ncall / (1024.*1024.*1024.);
// mem: Nd Wilson * Ls, Nd gauge, Nc colors
double data_mem = (volume * (2*Nd+1)*Nd*Nc + (volume/Ls) *2*Nd*Nc*Nc) * simdwidth / nsimd * ncall / (1024.*1024.*1024.);
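// Worked example (sketch): for Nd=4, Nc=3 the fermion term counts
// (2*4+1)*4*3 = 108 complex numbers per site and the gauge term
// 2*4*3*3 = 72, which fixes the RF and mem traffic estimates above.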
std::cout<<GridLogMessage << "Called Dw "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
// std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
// std::cout<<GridLogMessage << "norm ref "<< norm2(ref)<<std::endl;
std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
std::cout<<GridLogMessage << "mflop/s per rank = "<< flops/(t1-t0)/NP<<std::endl;
std::cout<<GridLogMessage << "mflop/s per node = "<< flops/(t1-t0)/NN<<std::endl;
std::cout<<GridLogMessage << "RF GiB/s (base 2) = "<< 1000000. * data_rf/((t1-t0))<<std::endl;
std::cout<<GridLogMessage << "mem GiB/s (base 2) = "<< 1000000. * data_mem/((t1-t0))<<std::endl;
err = ref-result;
std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
//exit(0);
if(( norm2(err)>1.0e-4) ) {
/*
std::cout << "RESULT\n " << result<<std::endl; std::cout << "RESULT\n " << result<<std::endl;
std::cout << "REF \n " << ref <<std::endl; std::cout << "REF \n " << ref <<std::endl;
std::cout << "ERR \n " << err <<std::endl; std::cout << "ERR \n " << err <<std::endl;
*/
std::cout<<GridLogMessage << "WRONG RESULT" << std::endl;
FGrid->Barrier();
exit(-1);
}
@ -286,9 +300,11 @@ int main (int argc, char ** argv)
err = ref-result;
std::cout<<GridLogMessage << "norm dag diff "<< norm2(err)<<std::endl;
if((norm2(err)>1.0e-4)){
/*
std::cout<< "DAG RESULT\n " <<ref << std::endl; std::cout<< "DAG RESULT\n " <<ref << std::endl;
std::cout<< "DAG sRESULT\n " <<result << std::endl; std::cout<< "DAG sRESULT\n " <<result << std::endl;
std::cout<< "DAG ERR \n " << err <<std::endl; std::cout<< "DAG ERR \n " << err <<std::endl;
*/
}
LatticeFermion src_e (FrbGrid);
LatticeFermion src_o (FrbGrid);
@ -358,9 +374,11 @@ int main (int argc, char ** argv)
err = r_eo-result;
std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
if((norm2(err)>1.0e-4)){
/*
std::cout<< "Deo RESULT\n " <<r_eo << std::endl; std::cout<< "Deo RESULT\n " <<r_eo << std::endl;
std::cout<< "Deo REF\n " <<result << std::endl; std::cout<< "Deo REF\n " <<result << std::endl;
std::cout<< "Deo ERR \n " << err <<std::endl; std::cout<< "Deo ERR \n " << err <<std::endl;
*/
}
pickCheckerboard(Even,src_e,err);
@ -373,4 +391,3 @@ int main (int argc, char ** argv)
Grid_finalize();
exit(0);
}


@ -191,7 +191,7 @@ int main (int argc, char ** argv)
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking SU3xSU3 CovShiftForward(z,x,y)"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GB/s (incl Cshift)\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=LMIN;lat<=LMAX;lat+=LADD){
@ -216,15 +216,16 @@ int main (int argc, char ** argv)
double bytes=3*vol*Nc*Nc*sizeof(Complex);
double ncbytes=5*vol*Nc*Nc*sizeof(Complex);
double flops=Nc*Nc*(6+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t"<<ncbytes/time<<"\t\t" << flops/time<<std::endl;
}
}
#if 1
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << "= Benchmarking SU3xSU3 z= x * Cshift(y)"<<std::endl;
std::cout<<GridLogMessage << "===================================================================================================="<<std::endl;
std::cout<<GridLogMessage << " L "<<"\t\t"<<"bytes"<<"\t\t\t"<<"GB/s\t\t GB/s (incl Cshift)\t\t GFlop/s"<<std::endl;
std::cout<<GridLogMessage << "----------------------------------------------------------"<<std::endl;
for(int lat=LMIN;lat<=LMAX;lat+=LADD){
@ -258,10 +259,11 @@ int main (int argc, char ** argv)
tmult = tmult /Nloop; tmult = tmult /Nloop;
double bytes=3*vol*Nc*Nc*sizeof(Complex); double bytes=3*vol*Nc*Nc*sizeof(Complex);
double ncbytes=5*vol*Nc*Nc*sizeof(Complex);
double flops=Nc*Nc*(6+8+8)*vol; double flops=Nc*Nc*(6+8+8)*vol;
std::cout<<GridLogMessage<<std::setprecision(3) << "total us "<<time<<" shift "<<tshift <<" mult "<<tmult<<std::endl; std::cout<<GridLogMessage<<std::setprecision(3) << "total us "<<time<<" shift "<<tshift <<" mult "<<tmult<<std::endl;
time = time * 1000; // convert to NS for GB/s time = time * 1000; // convert to NS for GB/s
std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" << flops/time<<std::endl; std::cout<<GridLogMessage<<std::setprecision(3) << lat<<"\t\t"<<bytes<<" \t\t"<<bytes/time<<"\t\t" <<ncbytes/time<<"\t\t" << flops/time<<std::endl;
} }
} }
#endif #endif
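
Editorial note on the new ncbytes column (my reading of the patch, not text from it): z = x * Cshift(y) computed in a single fused pass would move three colour matrices per site (read x, read y, write z), which is the existing bytes estimate. With Cshift done as a separate pass, the shift first reads y and writes a temporary, and the multiply then reads x and the temporary and writes z, for five matrix moves per site. A minimal sketch of that accounting, assuming vol, Nc, and Complex as in the benchmark above:

    // Hedged sketch of the byte-traffic model behind the two columns.
    double matrix  = Nc*Nc*sizeof(Complex);  // one colour matrix per site
    double bytes   = 3*vol*matrix;           // fused:  read x, read y, write z
    double ncbytes = 5*vol*matrix;           // split:  read y, write tmp (Cshift)
                                             //       + read x, read tmp, write z (mult)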


@@ -152,10 +152,20 @@ int main (int argc, char ** argv)
   std::cout<<GridLogMessage << "Calling Dw"<<std::endl;
   int ncall=1000;
+  //int ncall=1;
+  // Counters
+  Dw.ZeroCounters();
+  Grid.Barrier();
   double t0=usecond();
   for(int i=0;i<ncall;i++){
     Dw.Dhop(src,result,0);
   }
+  // Counters
+  Grid.Barrier();
   double t1=usecond();
   double flops=single_site_flops*volume*ncall;
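
The barrier bracket added around the timed loop is the usual multi-rank pattern: the first Grid.Barrier() lines all ranks up before t0, and the second makes t1 wait for the slowest rank, so no rank reports an optimistic time. Since usecond() returns microseconds, flops/(t1-t0) lands directly in Mflop/s. A condensed sketch of the pattern, using the names from the benchmark above:

    Grid.Barrier();                                   // align all ranks before timing
    double t0 = usecond();
    for(int i=0;i<ncall;i++) Dw.Dhop(src,result,0);   // the kernel under test
    Grid.Barrier();                                   // include the slowest rank
    double t1 = usecond();
    double mflops = single_site_flops*volume*ncall/(t1-t0);  // flop per usec == Mflop/s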
@@ -173,15 +183,28 @@ int main (int argc, char ** argv)
   }
+  auto nsimd = vComplex::Nsimd();
+  auto simdwidth = sizeof(vComplex);
+  std::cout<<GridLogMessage << "Nsimd "<< nsimd << std::endl;
+  std::cout<<GridLogMessage << "Simd width "<< simdwidth << std::endl;
+  // RF: Nd Wilson, Nd gauge, Nc colors
+  double data = volume * ((2*Nd+1)*Nd*Nc + 2*Nd*Nc*Nc) * simdwidth / nsimd * ncall / (1024.*1024.*1024.);
   std::cout<<GridLogMessage << "Called Dw"<<std::endl;
   std::cout<<GridLogMessage << "flops per site " << single_site_flops << std::endl;
   std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
   std::cout<<GridLogMessage << "norm ref "<< norm2(ref)<<std::endl;
   std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
+  std::cout<<GridLogMessage << "RF GiB/s (base 2) = "<< 1000000. * data/(t1-t0)<<std::endl;
   err = ref-result;
   std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
+  Dw.Report();
+  // guard
+  double err0 = norm2(err);
   // for(int ss=0;ss<10;ss++ ){
   for(int ss=0;ss<0;ss++ ){
@@ -230,5 +253,10 @@ int main (int argc, char ** argv)
   err = ref-result;
   std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
+  // guard
+  double err1 = norm2(err);
+  assert(fabs(err0) < 1.0e-3);
+  assert(fabs(err1) < 1.0e-3);
   Grid_finalize();
 }
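
How the RF estimate works out (a hedged reading of the in-code comment "Nd Wilson, Nd gauge, Nc colors"): each Dhop site moves 2*Nd+1 spinors of Nd*Nc complex numbers each (2*Nd neighbour reads plus one result write, with the code reusing Nd for the spin count) and reads 2*Nd gauge links of Nc*Nc complex; simdwidth/nsimd is the size of one complex. The err0/err1 asserts then turn the benchmark into a pass/fail check against the reference result rather than a print-only run. Worked numbers for Nd=4, Nc=3:

    // Hedged worked example of the RF per-site count; precision follows vComplex.
    const int Nd = 4, Nc = 3;
    int complex_per_site = (2*Nd+1)*Nd*Nc + 2*Nd*Nc*Nc;  // 9*12 + 8*9 = 180 complex
    // data accumulates GiB over ncall calls; with t1-t0 in microseconds,
    // 1000000.*data/(t1-t0) rescales the rate to GiB per second.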


@@ -89,6 +89,7 @@ int main (int argc, char ** argv)
   std::cout << GridLogMessage;
   std::cout << latt_size;
+  std::cout << "\t\t";
   GridCartesian Grid(latt_size,simd_layout,mpi_layout);
   GridRedBlackCartesian RBGrid(&Grid);
@@ -132,7 +133,10 @@ void bench_wilson (
   for(int i=0; i<ncall; i++) { Dw.Dhop(src,result,dag); }
   double t1 = usecond();
   double flops = single_site_flops * volume * ncall;
+  double data_tp = (volume * 180 * 64 * ncall) / 1000.; // / (1024.*1024.*1024.);
+  //std::cout << flops/(t1-t0) << " (" << data_tp/(t1-t0) << " MB/s) \t";
   std::cout << flops/(t1-t0) << "\t\t";
 }
 void bench_wilson_eo (
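
The 180 in data_tp is plausibly the same per-site count as the RF model in the previous file: nine spinor moves of twelve complex numbers plus eight link reads of nine complex numbers. The throughput print that would use it is left commented out, so only the flop rate is reported; this breakdown is an assumption, not something the patch states:

    //   9 spinor moves * (4 spins * 3 colours) = 108 complex
    // + 8 link reads   * (3 * 3)               =  72 complex
    //                                          = 180 complex per site per Dhop (assumed)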


@@ -154,6 +154,7 @@ AC_ARG_ENABLE([accelerator],
 case ${ac_ACCELERATOR} in
   cuda)
     echo CUDA acceleration
+    LIBS="${LIBS} -lcuda"
     AC_DEFINE([GRID_CUDA],[1],[Use CUDA offload]);;
   sycl)
     echo SYCL acceleration
@@ -323,7 +324,6 @@ case ${CXXTEST} in
     # CXXLD="nvcc -v -link"
     CXX="${CXXBASE} -x cu "
     CXXLD="${CXXBASE} -link"
-    # CXXFLAGS="$CXXFLAGS -Xcompiler -fno-strict-aliasing -Xcompiler -Wno-unusable-partial-specialization --expt-extended-lambda --expt-relaxed-constexpr"
     CXXFLAGS="$CXXFLAGS -Xcompiler -fno-strict-aliasing --expt-extended-lambda --expt-relaxed-constexpr"
     if test $ac_openmp = yes; then
       CXXFLAGS="$CXXFLAGS -Xcompiler -fopenmp"
@@ -392,6 +392,15 @@ case ${ax_cv_cxx_compiler_vendor} in
       [generic SIMD vector width (in bytes)])
     SIMD_GEN_WIDTH_MSG=" (width= $ac_gen_simd_width)"
     SIMD_FLAGS='';;
+  A64FX)
+    case ${ax_cv_cxx_compiler_vendor} in
+      gnu)
+        AC_DEFINE([A64FX],[1],[A64FX / 512-bit SVE VLS])
+        SIMD_FLAGS='-march=armv8.2-a+sve -msve-vector-bits=512 -fno-gcse -DDSLASHINTRIN';;
+      clang)
+        AC_DEFINE([A64FX],[1],[A64FX / 512-bit SVE VLA])
+        SIMD_FLAGS='-mcpu=a64fx -DARMCLANGCOMPAT -DDSLASHINTRIN';;
+    esac;;
   NEONv8)
     AC_DEFINE([NEONV8],[1],[ARMv8 NEON])
     SIMD_FLAGS='-march=armv8-a';;
@@ -474,8 +483,7 @@ case ${ac_SHM} in
     LDFLAGS_CPY=$LDFLAGS
     CXXFLAGS="$AM_CXXFLAGS $CXXFLAGS"
     LDFLAGS="$AM_LDFLAGS $LDFLAGS"
-    AC_SEARCH_LIBS([shm_unlink], [rt], [],
-                   [AC_MSG_ERROR("no library found for shm_unlink")])
+    AC_SEARCH_LIBS([shm_unlink], [rt], [],[AC_MSG_ERROR("no library found for shm_unlink")])
     CXXFLAGS=$CXXFLAGS_CPY
     LDFLAGS=$LDFLAGS_CPY
     ;;
@@ -692,4 +700,3 @@ AC_OUTPUT
 echo ""
 cat grid.configure.summary
 echo ""
-
