Mirror of https://github.com/paboyle/Grid.git, synced 2024-11-15 02:05:37 +00:00
Commit Graph

39 Commits

Author SHA1 Message Date
pretidav
c79606a5dc Test production code for Wilson clover. Still missing on-the-fly QObs measurement. 2017-11-03 22:46:32 +01:00
pretidav
7b42ac9982 Added Polyakov loop observable to the HMC 2017-11-02 21:58:16 +01:00
Guido Cossu
f941c4ee18 Clover term force ok 2017-10-29 11:43:33 +00:00
Guido Cossu
cbda4f66e0 Debug of the field strength 2017-10-24 10:20:13 +01:00
Guido Cossu
fde71c3c52 Merge branch 'develop' into feature/clover 2017-08-04 12:19:57 +01:00
paboyle
c85024683e Merge branch 'feature/parallelio' into develop 2017-06-19 01:39:48 +01:00
paboyle
ae39ec85a3 ComplexField defined 2017-06-18 00:12:48 +01:00
Guido Cossu
0de314870d Faster derivative for WilsonGauge 2017-05-26 14:31:49 +01:00
Guido Cossu
f4e8bf2858 Fixing the topological charge. Wilson Flow tested, ok 2017-05-26 12:45:59 +01:00
Guido Cossu
453cf2a1c6 Moving the topological charge outside the HMC related routines 2017-05-02 14:40:12 +01:00
Guido Cossu
8c540333d5 Merge branch 'develop' into feature/hmc_generalise 2017-04-05 14:41:04 +01:00
Guido Cossu
b8ae787b5e Correcting a simple typo 2017-03-30 11:33:15 +01:00
Guido Cossu
1ed69816b9 First steps for the force term 2017-03-30 11:14:27 +01:00
Guido Cossu
5e549ebd8b Adding force terms 2017-03-27 16:43:15 +09:00
Guido Cossu
120fb59978 Adding tests for WilsonFlow classes 2017-03-21 16:11:35 +09:00
Guido Cossu
3d0fe15374 Added topological charge measurement 2017-03-17 16:14:57 +09:00
Guido Cossu
a783282b8b Merge branch 'develop' into feature/hmc_generalise 2016-11-10 18:13:07 +00:00
Azusa Yamaguchi
bca861e112 Note: FFT should be GridFFT (not changed yet).
Gauge fixing with FFT is added (tests/core)
2016-10-25 14:21:48 +01:00
Guido Cossu
e6acffdfc2 Fixing the plaquette computation 2016-10-21 16:06:34 +01:00
Guido Cossu
392130a537 Working on the 5d 2016-10-21 14:22:25 +01:00
Guido Cossu
74f1ed3bc5 Adding some documentation for HMC 2016-10-19 10:51:13 +01:00
Guido Cossu
3e80947c2b Cleaned up HMC output. Tested smeared HMCs for single precision (OK) 2016-07-05 12:03:54 +01:00
Guido Cossu
9cb90f714e Merge remote-tracking branch 'origin/develop' into temporary-smearing 2016-07-04 17:28:40 +01:00
neo
339be37dba Debugging smeared HMC 2016-04-13 17:00:14 +09:00
paboyle
090e7aa930 Merge remote-tracking branch 'origin/chulwoo-dec12-2015'
Merge Chulwoo's Lanczos-related improvements.
Merge Nd!=4 fixes for the pure gauge HMC from Evan.
2016-03-08 09:55:14 +00:00
a7251f28c7 Stout smearing compiles (untested) 2016-02-24 03:16:50 +09:00
neo
c1b1b89d17 More on smearing routines, writing APEsmear (dev) 2016-02-19 17:15:27 +09:00
paboyle
aae8bf31a7 Global edit adding copyright and license info to every source file. 2016-01-02 14:51:32 +00:00
paboyle
5a80930dd2 Charge conjugation boundary conditions for gauge fields implemented as a policy
class, changing the nature of the covariant Cshifts used in
plaquettes, rectangles and staples.

As a result, the same code is used for the plaq and rect actions independent of the BC type.

Should probably isolate the BC in a separate class that Gimpl takes as a template parameter,
and do the same with smearing policies.

This would then allow composition of BCs with smearing, etc.
2016-01-02 13:37:25 +00:00
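As a rough illustration of the policy-class idea in the commit message above, here is a minimal sketch with hypothetical names (PeriodicBC, ChargeConjugateBC, GaugeImplSketch are stand-ins, not Grid's actual classes): the boundary condition enters only through the covariant shift, so the plaquette-style contraction is written once for all BC types.

```cpp
// Hypothetical sketch of the BC-as-policy pattern; not Grid's real code.
#include <complex>
#include <iostream>

using Link = std::complex<double>;  // a single complex number stands in for a gauge link

struct PeriodicBC {
    // Covariant shift under periodic boundary conditions.
    static Link CovShift(const Link &U, const Link &field) { return U * field; }
};

struct ChargeConjugateBC {
    // Crossing the boundary conjugates the link; applied unconditionally here
    // for brevity, whereas a real BC would only do so when the shift wraps.
    static Link CovShift(const Link &U, const Link &field) {
        return std::conj(U) * field;
    }
};

// "Gimpl"-style implementation class taking the BC as a template parameter.
template <class BCPolicy>
struct GaugeImplSketch {
    // Plaquette-like contraction written once, independent of the BC type.
    static Link Plaquette(const Link &U1, const Link &U2) {
        return BCPolicy::CovShift(U1, BCPolicy::CovShift(U2, Link(1.0, 0.0)));
    }
};

int main() {
    Link U1(0.6, 0.8), U2(0.0, 1.0);
    std::cout << GaugeImplSketch<PeriodicBC>::Plaquette(U1, U2) << "\n";
    std::cout << GaugeImplSketch<ChargeConjugateBC>::Plaquette(U1, U2) << "\n";
}
```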
Azusa Yamaguchi
98de1cbb6a Optimised version of the rectangle-term staples;
~3.4x faster than the naive version.
2015-12-29 19:22:59 +00:00
Azusa Yamaguchi
78c4e862ef Plaq, Rectangle, Iwasaki, Symanzik and DBW2 working, and HMC regression-tested against http://arxiv.org/pdf/hep-lat/0610075.pdf 2015-12-28 16:38:31 +00:00
Peter Boyle
aa52fdadcc Global edit on the HMC sector -- making GaugeField a template parameter and
preparing to pass the integrator, smearing and BCs as policy classes to the HMC.

Propose to unify the "integrator" and the integration algorithm in a base/derived
way to override step. Want to read through ForceGradient to ensure
that the abstraction covers the force-gradient case.
2015-08-30 12:18:34 +01:00
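A minimal sketch of the base/derived integrator proposal above, assuming hypothetical stand-in names (IntegratorBase, LeapFrog, ForceGradientLike are not the eventual Grid classes): each scheme overrides a virtual step(), and the HMC driver, templated on the gauge field type, only sees the base class.

```cpp
// Hypothetical illustration of the base/derived integrator idea; not Grid's real code.
#include <iostream>

template <class GaugeField>
struct IntegratorBase {
    virtual void step(GaugeField &U, double dt) = 0;  // one MD step, overridden per scheme
    virtual ~IntegratorBase() = default;
};

template <class GaugeField>
struct LeapFrog : IntegratorBase<GaugeField> {
    void step(GaugeField &U, double dt) override {
        U += dt;  // placeholder for the actual momentum/field updates
        std::cout << "leapfrog step, dt=" << dt << "\n";
    }
};

template <class GaugeField>
struct ForceGradientLike : IntegratorBase<GaugeField> {
    void step(GaugeField &U, double dt) override {
        U += 0.5 * dt;  // placeholder for the force-gradient style update
        U += 0.5 * dt;
        std::cout << "force-gradient-like step, dt=" << dt << "\n";
    }
};

template <class GaugeField>
void hmc_trajectory(IntegratorBase<GaugeField> &I, GaugeField &U,
                    int nsteps, double dt) {
    for (int s = 0; s < nsteps; ++s) I.step(U, dt);  // driver sees only the base class
}

int main() {
    double U = 0.0;  // a double stands in for the gauge field
    LeapFrog<double> lf;
    hmc_trajectory<double>(lf, U, 3, 0.1);
}
```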
Peter Boyle
dc814f30da Binary IO file for generic Grid array parallel I/O.
The number of IO MPI tasks can be varied by selecting which
dimensions use parallel I/O and which dimensions use a serial send to a boss
I/O task.

Thus we can neck down from, say, 1024 nodes = 4x4x8x8 to {1,8,32,64,128,256,1024} nodes
doing the I/O.

This interpolates nicely between all nodes writing their own data, a single boss per time-plane
in processor space [the old UKQCD Fortran code did this], and a single node doing all I/O.

Not sure I have the transfer sizes big enough, and I am not overly convinced fstream
is guaranteed not to give buffer inconsistencies unless I set the streambuf size to zero.

Practically it has worked on 8 tasks, 2x1x2x2, writing/cloning NERSC configurations
in my MacOS + OpenMPI + Clang environment.

It is VERY easy to switch to pwrite at a later date, and also easy to send x-strips around from
each node in order to gather bigger chunks at the syscall level.

That would push us up to circa 8 x 18*4*8 == 4KB write chunks, and by taking, say, x/y as non-parallel
we get 16MB contiguous chunks written in multiple 4KB transactions
per IO node for configuration I/O on 64^3 lattices.

I suspect this is fine for system performance.
2015-08-26 13:40:29 +01:00
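For illustration, a small sketch of the necking-down logic described in this commit message (io_node_count and the per-dimension mask are hypothetical helpers, not the actual BinaryIO interface): the number of ranks touching the file is the product of the processor-grid extents over the dimensions chosen as parallel; the remaining dimensions funnel their data to a boss rank.

```cpp
// Illustrative sketch of choosing parallel vs. serial-to-boss I/O dimensions.
#include <iostream>
#include <vector>

int io_node_count(const std::vector<int> &proc_grid,
                  const std::vector<bool> &parallel_dim) {
    int n = 1;
    for (size_t d = 0; d < proc_grid.size(); ++d)
        if (parallel_dim[d]) n *= proc_grid[d];  // serial dims collapse onto a boss rank
    return n;
}

int main() {
    std::vector<int> grid = {4, 4, 8, 8};  // 1024 MPI tasks, as in the example above
    // All dims serial -> one writer; all dims parallel -> every rank writes.
    std::cout << io_node_count(grid, {false, false, false, false}) << "\n";  // 1
    std::cout << io_node_count(grid, {false, false, false, true})  << "\n";  // 8
    std::cout << io_node_count(grid, {false, true,  false, true})  << "\n";  // 32
    std::cout << io_node_count(grid, {true,  true,  true,  true})  << "\n";  // 1024
}
```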
neo
9adaeb061a More NEON functionalities 2015-07-21 11:52:15 +09:00
Peter Boyle
98c817df1b Big commit fixing non-compiling code in defective C++11 compilers (gcc, icpc). Started getting
too near the bleeding edge, I guess.
2015-06-30 15:03:11 +01:00
Azusa Yamaguchi
4e7300b68d uninitialised bug fix 2015-06-16 14:07:05 +01:00
Azusa Yamaguchi
68b82ddd99 const safety 2015-06-14 00:59:50 +01:00
Azusa Yamaguchi
351c2905f5 Compile fix 2015-06-05 10:29:42 +01:00
Azusa Yamaguchi
94ea84d83f Adding some Wilson loop support 2015-06-05 10:02:36 +01:00