Merge branch 'feature/feynman-rules' into feature/qed-fvol

commit 94d8321d01
https://github.com/paboyle/Grid.git

README.md | 112
@@ -16,11 +16,27 @@

 **Data parallel C++ mathematical object library.**

-Please send all pull requests to the `develop` branch.

 License: GPL v2.

-Last update 2016/08/03.
+Last update Nov 2016.
+
+_Please send all pull requests to the `develop` branch._
+
+### Bug report
+
+_To help us track and solve issues with Grid more efficiently, please report problems using the GitHub issue system rather than sending emails to Grid developers._
+
+When you file an issue, please go through the following checklist:
+
+1. Check that the code is pointing to the `HEAD` of `develop` or any commit in `master` which is tagged with a version number.
+2. Give a description of the target platform (CPU, network, compiler).
+3. Give the exact `configure` command used.
+4. Attach `config.log`.
+5. Attach `config.summary`.
+6. Attach the output of `make V=1`.
+7. Describe the issue and any previous attempt to solve it. If relevant, show how to reproduce the issue using a minimal working example.
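A minimal sketch of how the files requested in points 4 to 6 might be collected in one go (the log and archive names are illustrative, not part of Grid):

``` bash
# from the build directory: keep a verbose build log next to configure's outputs
make V=1 2>&1 | tee make.log
# bundle everything the checklist asks to attach
tar czf grid-issue.tar.gz config.log config.summary make.log
```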
 ### Description

 This library provides data parallel C++ container classes with internal memory layout

@@ -42,7 +58,7 @@ optimally use MPI, OpenMP and SIMD parallelism under the hood. This is a significant simplification
 for most programmers.

 The layout transformations are parametrised by the SIMD vector length. This adapts according to the architecture.
-Presently SSE4 (128 bit), AVX, AVX2 (256 bit) and IMCI and AVX512 (512 bit) targets are supported (ARM NEON and BG/Q QPX on the way).
+Presently SSE4 (128 bit), AVX, AVX2, QPX (256 bit), IMCI, and AVX512 (512 bit) targets are supported (ARM NEON on the way).

 These are presented as `vRealF`, `vRealD`, `vComplexF`, and `vComplexD` internal vector data types. These may be useful in themselves for other programmers.
 The corresponding scalar types are named `RealF`, `RealD`, `ComplexF` and `ComplexD`.

@@ -50,7 +66,7 @@ The corresponding scalar types are named `RealF`, `RealD`, `ComplexF` and `ComplexD`.

 MPI, OpenMP, and SIMD parallelism are present in the library.
 Please see https://arxiv.org/abs/1512.03487 for more detail.

-### Installation
+### Quick start
 First, start by cloning the repository:

 ``` bash

@@ -71,12 +87,10 @@ mkdir build; cd build
 ../configure --enable-precision=double --enable-simd=AVX --enable-comms=mpi-auto --prefix=<path>
 ```

-where `--enable-precision=` sets the default precision (`single` or `double`),
-`--enable-simd=` sets the SIMD type (see possible values below),
-`--enable-comms=` sets the protocol used for communications (`none`, `mpi`, `mpi-auto` or `shmem`),
-and `<path>` should be replaced by the prefix path where you want to install Grid.
-The `mpi-auto` communication option sets `configure` to determine automatically how to link to MPI.
-Other options are available, use `configure
+where `--enable-precision=` sets the default precision, `--enable-simd=` sets
+the SIMD type, `--enable-comms=` sets the communication protocol, and `<path>`
+should be replaced by the prefix path where you want to install Grid. Other
+options are detailed in the next section; you can also use `configure
 --help` to display them. Like with any other program using GNU autotools, the
 `CXX`, `CXXFLAGS`, `LDFLAGS`, ... environment variables can be modified to
 customise the build.
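For instance, a hypothetical build that overrides the compiler and flags through the environment variables mentioned above (the compiler choice and flag values are examples only):

``` bash
../configure --enable-precision=double --enable-simd=AVX --enable-comms=none \
             --prefix=<path> \
             CXX=clang++ CXXFLAGS="-O3 -g" LDFLAGS="-L/opt/local/lib"
```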
@@ -93,24 +107,86 @@ To minimise the build time, only the tests at the root of the `tests` directory are built by default.
 make -C tests/<subdir> tests
 ```

+### Build configuration options
+
+- `--prefix=<path>`: installation prefix for Grid.
+- `--with-gmp=<path>`: look for GMP in the UNIX prefix `<path>`.
+- `--with-mpfr=<path>`: look for MPFR in the UNIX prefix `<path>`.
+- `--with-fftw=<path>`: look for FFTW in the UNIX prefix `<path>`.
+- `--enable-lapack[=<path>]`: enable LAPACK support in the Lanczos eigensolver. A UNIX prefix containing the library can be specified (optional).
+- `--enable-mkl[=<path>]`: use Intel MKL for FFT (and LAPACK if enabled) routines. A UNIX prefix containing the library can be specified (optional).
+- `--enable-numa`: enable NUMA first-touch optimisation.
+- `--enable-simd=<code>`: set up Grid for the SIMD target `<code>` (default: `GEN`). A list of possible SIMD targets is detailed in a section below.
+- `--enable-precision={single|double}`: set the default precision (default: `double`).
+- `--enable-comms=<comm>`: use `<comm>` for message passing (default: `none`). A list of possible communication interfaces is detailed in a section below.
+- `--enable-rng={ranlux48|mt19937}`: choose the RNG (default: `ranlux48`).
+- `--disable-timers`: disable system-dependent high-resolution timers.
+- `--enable-chroma`: enable Chroma regression tests.
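As an illustration, a hypothetical `configure` invocation combining several of the options above (all prefixes are placeholders):

``` bash
../configure --enable-simd=AVX2 \
             --enable-precision=single \
             --enable-comms=mpi \
             --enable-rng=mt19937 \
             --with-gmp=/opt/gmp --with-mpfr=/opt/mpfr --with-fftw=/opt/fftw \
             --prefix=$HOME/grid
```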
+### Possible communication interfaces
+
+The following options can be used with the `--enable-comms=` option to target different communication interfaces:
+
+| `<comm>`      | Description                                   |
+| ------------- | --------------------------------------------- |
+| `none`        | no communications                             |
+| `mpi[-auto]`  | MPI communications                            |
+| `mpi3[-auto]` | MPI communications using MPI 3 shared memory  |
+| `shmem`       | Cray SHMEM communications                     |
+
+For `mpi` and `mpi3` the optional `-auto` suffix instructs the `configure` script to determine all the necessary compilation and linking flags. This is done by extracting the information from the MPI wrapper specified in the environment variable `MPICXX` (if not specified, `configure` will scan through a list of default names).
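For example, to have `configure` extract the flags from a specific MPI wrapper (the wrapper name is an example; any working `mpicxx`-style wrapper can be used):

``` bash
MPICXX=mpiicpc ../configure --enable-comms=mpi-auto --enable-simd=AVX --enable-precision=double
```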
+### Possible SIMD types
+
+The following options can be used with the `--enable-simd=` option to target different SIMD instruction sets:
+
-| String      | Description                            |
+| `<code>`    | Description                            |
+| ----------- | -------------------------------------- |
+| `GEN`       | generic portable vector code           |
+| `SSE4`      | SSE 4.2 (128 bit)                      |
+| `AVX`       | AVX (256 bit)                          |
-| `AVXFMA4`   | AVX (256 bit) + FMA                    |
+| `AVXFMA`    | AVX (256 bit) + FMA                    |
+| `AVXFMA4`   | AVX (256 bit) + FMA4                   |
+| `AVX2`      | AVX 2 (256 bit)                        |
+| `AVX512`    | AVX 512 bit                            |
+| `AVX512MIC` | AVX 512 bit for Intel MIC architecture |
+| `IMCI`      | Intel IMCI instructions (512 bit)      |
+| `QPX`       | QPX (256 bit)                          |
+
+Alternatively, some CPU codenames can be directly used:
+
-| String      | Description |
+| `<code>`    | Description |
+| ----------- | -------------------------------------- |
-| `KNC`       | [Intel Knights Corner](http://ark.intel.com/products/codename/57721/Knights-Corner) |
-| `KNL`       | [Intel Knights Landing](http://ark.intel.com/products/codename/48999/Knights-Landing) |
+| `KNC`       | [Intel Xeon Phi codename Knights Corner](http://ark.intel.com/products/codename/57721/Knights-Corner) |
+| `KNL`       | [Intel Xeon Phi codename Knights Landing](http://ark.intel.com/products/codename/48999/Knights-Landing) |
+| `BGQ`       | Blue Gene/Q |
+
+#### Notes:
+- We currently support AVX512 only for the Intel compiler. Support for GCC and clang will appear in future versions.
+- For BG/Q only [bgclang](http://trac.alcf.anl.gov/projects/llvm-bgq) is supported. We do not presently plan to support more compilers for this platform.
+- BG/Q performance is currently rather poor. This is being investigated for future versions.
+
+### Build setup for Intel Knights Landing platform
+
+The following configuration is recommended for the Intel Knights Landing platform:
+
+``` bash
+../configure --enable-precision=double \
+             --enable-simd=KNL \
+             --enable-comms=mpi3-auto \
+             --with-gmp=<path> \
+             --with-mpfr=<path> \
+             --enable-mkl \
+             CXX=icpc MPICXX=mpiicpc
+```
+
+where `<path>` is the UNIX prefix where GMP and MPFR are installed. If you are working on a Cray machine that does not use the `mpiicpc` wrapper, please use:
+
+``` bash
+../configure --enable-precision=double \
+             --enable-simd=KNL \
+             --enable-comms=mpi3 \
+             --with-gmp=<path> \
+             --with-mpfr=<path> \
+             --enable-mkl \
+             CXX=CC CC=cc
+```
VERSION | 4

@@ -1,4 +1,6 @@
-Version : 0.5.0
+Version : 0.6.0

 - AVX512, AVX2, AVX, SSE good
 - Clang 3.5 and above, ICPC v16 and above, GCC 4.9 and above
+- MPI and MPI3
+- HiRep, Smearing, Generic gauge group
@@ -153,9 +153,10 @@ int main (int argc, char ** argv)
       std::cout<<GridLogMessage << "norm result "<< norm2(result)<<std::endl;
       std::cout<<GridLogMessage << "norm ref "<< norm2(ref)<<std::endl;
       std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
-      std::cout<<GridLogMessage << "mflop/s per node = "<< flops/(t1-t0)/NP<<std::endl;
+      std::cout<<GridLogMessage << "mflop/s per rank = "<< flops/(t1-t0)/NP<<std::endl;
       err = ref-result;
       std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
+      assert (norm2(err)< 1.0e-5 );
       Dw.Report();
     }

@@ -192,7 +193,7 @@ int main (int argc, char ** argv)
     std::cout<<GridLogMessage << "Called Dw s_inner "<<ncall<<" times in "<<t1-t0<<" us"<<std::endl;
     std::cout<<GridLogMessage << "mflop/s = "<< flops/(t1-t0)<<std::endl;
-    std::cout<<GridLogMessage << "mflop/s per node = "<< flops/(t1-t0)/NP<<std::endl;
+    std::cout<<GridLogMessage << "mflop/s per rank = "<< flops/(t1-t0)/NP<<std::endl;
     sDw.Report();

     if(0){

@@ -208,8 +209,7 @@ int main (int argc, char ** argv)
     std::cout<<GridLogMessage<< "res norms "<< norm2(result)<<" " <<norm2(sresult)<<std::endl;

-    RealF sum=0;
+    RealD sum=0;
     for(int x=0;x<latt4[0];x++){
       for(int y=0;y<latt4[1];y++){
         for(int z=0;z<latt4[2];z++){

@@ -227,12 +227,12 @@ int main (int argc, char ** argv)
       }
     }}}}}
     std::cout<<GridLogMessage<<" difference between normal and simd is "<<sum<<std::endl;
+    assert (sum< 1.0e-5 );

     if (1) {

       LatticeFermion sr_eo(sFGrid);
       LatticeFermion serr(sFGrid);

       LatticeFermion ssrc_e (sFrbGrid);
       LatticeFermion ssrc_o (sFrbGrid);

@@ -244,8 +244,6 @@ int main (int argc, char ** argv)
       setCheckerboard(sr_eo,ssrc_o);
       setCheckerboard(sr_eo,ssrc_e);
-      serr = sr_eo-ssrc;
-      std::cout<<GridLogMessage << "EO src norm diff "<< norm2(serr)<<std::endl;

       sr_e = zero;
       sr_o = zero;

@@ -263,7 +261,7 @@ int main (int argc, char ** argv)
       double flops=(1344.0*volume*ncall)/2;

       std::cout<<GridLogMessage << "sDeo mflop/s = "<< flops/(t1-t0)<<std::endl;
-      std::cout<<GridLogMessage << "sDeo mflop/s per node "<< flops/(t1-t0)/NP<<std::endl;
+      std::cout<<GridLogMessage << "sDeo mflop/s per rank "<< flops/(t1-t0)/NP<<std::endl;
       sDw.Report();

       sDw.DhopEO(ssrc_o,sr_e,DaggerNo);

@@ -273,9 +271,18 @@ int main (int argc, char ** argv)
       pickCheckerboard(Even,ssrc_e,sresult);
       pickCheckerboard(Odd ,ssrc_o,sresult);
       ssrc_e = ssrc_e - sr_e;
+      RealD error = norm2(ssrc_e);
       std::cout<<GridLogMessage << "sE norm diff "<< norm2(ssrc_e)<< " vec nrm"<<norm2(sr_e) <<std::endl;
       ssrc_o = ssrc_o - sr_o;
+      error+= norm2(ssrc_o);
       std::cout<<GridLogMessage << "sO norm diff "<< norm2(ssrc_o)<< " vec nrm"<<norm2(sr_o) <<std::endl;
+      if(error>1.0e-5) {
+        setCheckerboard(ssrc,ssrc_o);
+        setCheckerboard(ssrc,ssrc_e);
+        std::cout<< ssrc << std::endl;
+      }
     }

@@ -307,7 +314,7 @@ int main (int argc, char ** argv)
     std::cout<<GridLogMessage << "norm ref "<< norm2(ref)<<std::endl;
     err = ref-result;
     std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
+    assert(norm2(err)<1.0e-5);
     LatticeFermion src_e (FrbGrid);
     LatticeFermion src_o (FrbGrid);
     LatticeFermion r_e   (FrbGrid);

@@ -334,7 +341,7 @@ int main (int argc, char ** argv)
     double flops=(1344.0*volume*ncall)/2;

     std::cout<<GridLogMessage << "Deo mflop/s = "<< flops/(t1-t0)<<std::endl;
-    std::cout<<GridLogMessage << "Deo mflop/s per node "<< flops/(t1-t0)/NP<<std::endl;
+    std::cout<<GridLogMessage << "Deo mflop/s per rank "<< flops/(t1-t0)/NP<<std::endl;
     Dw.Report();
   }
   Dw.DhopEO(src_o,r_e,DaggerNo);

@@ -350,11 +357,14 @@ int main (int argc, char ** argv)
   err = r_eo-result;
   std::cout<<GridLogMessage << "norm diff "<< norm2(err)<<std::endl;
+  assert(norm2(err)<1.0e-5);

   pickCheckerboard(Even,src_e,err);
   pickCheckerboard(Odd,src_o,err);
   std::cout<<GridLogMessage << "norm diff even "<< norm2(src_e)<<std::endl;
   std::cout<<GridLogMessage << "norm diff odd "<< norm2(src_o)<<std::endl;
+  assert(norm2(src_e)<1.0e-5);
+  assert(norm2(src_o)<1.0e-5);
 }
configure.ac | 236

@@ -1,5 +1,5 @@
 AC_PREREQ([2.63])
-AC_INIT([Grid], [0.5.1-dev], [https://github.com/paboyle/Grid], [Grid])
+AC_INIT([Grid], [0.6.0], [https://github.com/paboyle/Grid], [Grid])
 AC_CANONICAL_BUILD
 AC_CANONICAL_HOST
 AC_CANONICAL_TARGET

@@ -9,18 +9,29 @@ AC_CONFIG_SRCDIR([lib/Grid.h])
 AC_CONFIG_HEADERS([lib/Config.h])
 m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])

+############### Checks for programs
 AC_LANG(C++)
 CXXFLAGS="-O3 $CXXFLAGS"
 AC_PROG_CXX
 AC_PROG_RANLIB

-############ openmp ###############
+############### Get compiler informations
+AC_LANG([C++])
+AX_CXX_COMPILE_STDCXX_11([noext],[mandatory])
+AX_COMPILER_VENDOR
+AC_DEFINE_UNQUOTED([CXX_COMP_VENDOR],["$ax_cv_cxx_compiler_vendor"],
+  [vendor of C++ compiler that will compile the code])
+AX_GXX_VERSION
+AC_DEFINE_UNQUOTED([GXX_VERSION],["$GXX_VERSION"],
+  [version of g++ that will compile the code])
+
+############### Checks for typedefs, structures, and compiler characteristics
+AC_TYPE_SIZE_T
+AC_TYPE_UINT32_T
+AC_TYPE_UINT64_T
+
+############### OpenMP
 AC_OPENMP
 ac_openmp=no
 if test "${OPENMP_CXXFLAGS}X" != "X"; then
   ac_openmp=yes
   AM_CXXFLAGS="$OPENMP_CXXFLAGS $AM_CXXFLAGS"

@@ -37,12 +48,7 @@ AC_CHECK_HEADERS(execinfo.h)
 AC_CHECK_DECLS([ntohll],[], [], [[#include <arpa/inet.h>]])
 AC_CHECK_DECLS([be64toh],[], [], [[#include <arpa/inet.h>]])

-############### Checks for typedefs, structures, and compiler characteristics
-AC_TYPE_SIZE_T
-AC_TYPE_UINT32_T
-AC_TYPE_UINT64_T
-
-############### GMP and MPFR #################
+############### GMP and MPFR
 AC_ARG_WITH([gmp],
   [AS_HELP_STRING([--with-gmp=prefix],
   [try this for a non-standard install prefix of the GMP library])],

@@ -54,7 +60,14 @@ AC_ARG_WITH([mpfr],
   [AM_CXXFLAGS="-I$with_mpfr/include $AM_CXXFLAGS"]
   [AM_LDFLAGS="-L$with_mpfr/lib $AM_LDFLAGS"])

-################## lapack ####################
+############### FFTW3
+AC_ARG_WITH([fftw],
+  [AS_HELP_STRING([--with-fftw=prefix],
+  [try this for a non-standard install prefix of the FFTW3 library])],
+  [AM_CXXFLAGS="-I$with_fftw/include $AM_CXXFLAGS"]
+  [AM_LDFLAGS="-L$with_fftw/lib $AM_LDFLAGS"])
+
+############### lapack
 AC_ARG_ENABLE([lapack],
   [AC_HELP_STRING([--enable-lapack=yes|no|prefix], [enable LAPACK])],
   [ac_LAPACK=${enable_lapack}], [ac_LAPACK=no])

@@ -67,10 +80,26 @@ case ${ac_LAPACK} in
   *)
     AM_CXXFLAGS="-I$ac_LAPACK/include $AM_CXXFLAGS"
     AM_LDFLAGS="-L$ac_LAPACK/lib $AM_LDFLAGS"
-    AC_DEFINE([USE_LAPACK],[1],[use LAPACK])
+    AC_DEFINE([USE_LAPACK],[1],[use LAPACK]);;
 esac

-################## first-touch ####################
+############### MKL
+AC_ARG_ENABLE([mkl],
+  [AC_HELP_STRING([--enable-mkl=yes|no|prefix], [enable Intel MKL for LAPACK & FFTW])],
+  [ac_MKL=${enable_mkl}], [ac_MKL=no])
+
+case ${ac_MKL} in
+  no)
+    ;;
+  yes)
+    AC_DEFINE([USE_MKL], [1], [Define to 1 if you use the Intel MKL]);;
+  *)
+    AM_CXXFLAGS="-I$ac_MKL/include $AM_CXXFLAGS"
+    AM_LDFLAGS="-L$ac_MKL/lib $AM_LDFLAGS"
+    AC_DEFINE([USE_MKL], [1], [Define to 1 if you use the Intel MKL]);;
+esac
+
+############### first-touch
 AC_ARG_ENABLE([numa],
   [AC_HELP_STRING([--enable-numa=yes|no|prefix], [enable first touch numa opt])],
   [ac_NUMA=${enable_NUMA}],[ac_NUMA=no])

@@ -84,56 +113,44 @@ case ${ac_NUMA} in
     AC_DEFINE([GRID_NUMA],[1],[First touch numa locality]);;
 esac

-################## FFTW3 ####################
-AC_ARG_WITH([fftw],
-  [AS_HELP_STRING([--with-fftw=prefix],
-  [try this for a non-standard install prefix of the FFTW3 library])],
-  [AM_CXXFLAGS="-I$with_fftw/include $AM_CXXFLAGS"]
-  [AM_LDFLAGS="-L$with_fftw/lib $AM_LDFLAGS"])
-
-################ Get compiler informations
-AC_LANG([C++])
-AX_CXX_COMPILE_STDCXX_11([noext],[mandatory])
-AX_COMPILER_VENDOR
-AC_DEFINE_UNQUOTED([CXX_COMP_VENDOR],["$ax_cv_cxx_compiler_vendor"],
-  [vendor of C++ compiler that will compile the code])
-AX_GXX_VERSION
-AC_DEFINE_UNQUOTED([GXX_VERSION],["$GXX_VERSION"],
-  [version of g++ that will compile the code])
-
 ############### Checks for library functions
 CXXFLAGS_CPY=$CXXFLAGS
 LDFLAGS_CPY=$LDFLAGS
 CXXFLAGS="$AM_CXXFLAGS $CXXFLAGS"
 LDFLAGS="$AM_LDFLAGS $LDFLAGS"

 AC_CHECK_FUNCS([gettimeofday])
-AC_CHECK_LIB([gmp],[__gmpf_init],
-  [AC_CHECK_LIB([mpfr],[mpfr_init],
-    [AC_DEFINE([HAVE_LIBMPFR], [1], [Define to 1 if you have the `MPFR' library (-lmpfr).])]
-    [have_mpfr=true]
-    [LIBS="$LIBS -lmpfr"],
-    [AC_MSG_ERROR([MPFR library not found])])]
-  [AC_DEFINE([HAVE_LIBGMP], [1], [Define to 1 if you have the `GMP' library (-lgmp).])]
-  [have_gmp=true]
-  [LIBS="$LIBS -lgmp"],
-  [AC_MSG_WARN([**** GMP library not found, Grid can still compile but RHMC will not work ****])])
+
+if test "${ac_MKL}x" != "nox"; then
+  AC_SEARCH_LIBS([mkl_set_interface_layer], [mkl_rt], [],
+    [AC_MSG_ERROR("MKL enabled but library not found")])
+fi
+
+AC_SEARCH_LIBS([__gmpf_init], [gmp],
+  [AC_SEARCH_LIBS([mpfr_init], [mpfr],
+    [AC_DEFINE([HAVE_LIBMPFR], [1],
+      [Define to 1 if you have the `MPFR' library])]
+    [have_mpfr=true], [AC_MSG_ERROR([MPFR library not found])])]
+  [AC_DEFINE([HAVE_LIBGMP], [1], [Define to 1 if you have the `GMP' library])]
+  [have_gmp=true])

 if test "${ac_LAPACK}x" != "nox"; then
-  AC_CHECK_LIB([lapack],[LAPACKE_sbdsdc],[],
+  AC_SEARCH_LIBS([LAPACKE_sbdsdc], [lapack], [],
     [AC_MSG_ERROR("LAPACK enabled but library not found")])
 fi
-AC_CHECK_LIB([fftw3],[fftw_execute],
-  [AC_DEFINE([HAVE_FFTW],[1],[Define to 1 if you have the `FFTW' library (-lfftw3).])]
-  [have_fftw=true]
-  [LIBS="$LIBS -lfftw3 -lfftw3f"],
-  [AC_MSG_WARN([**** FFTW library not found, Grid can still compile but FFT-based routines will not work ****])])
+
+AC_SEARCH_LIBS([fftw_execute], [fftw3],
+  [AC_SEARCH_LIBS([fftwf_execute], [fftw3f], [],
+    [AC_MSG_ERROR("single precision FFTW library not found")])]
+  [AC_DEFINE([HAVE_FFTW], [1], [Define to 1 if you have the `FFTW' library])]
+  [have_fftw=true])

 CXXFLAGS=$CXXFLAGS_CPY
 LDFLAGS=$LDFLAGS_CPY

 ############### SIMD instruction selection
-AC_ARG_ENABLE([simd],[AC_HELP_STRING([--enable-simd=SSE4|AVX|AVXFMA4|AVXFMA|AVX2|AVX512|AVX512MIC|IMCI|KNL|KNC],\
-  [Select instructions to be SSE4.0, AVX 1.0, AVX 2.0+FMA, AVX 512, IMCI])],\
-  [ac_SIMD=${enable_simd}],[ac_SIMD=GEN])
+AC_ARG_ENABLE([simd],[AC_HELP_STRING([--enable-simd=<code>],
+  [select SIMD target (cf. README.md)])], [ac_SIMD=${enable_simd}], [ac_SIMD=GEN])

 case ${ax_cv_cxx_compiler_vendor} in
   clang|gnu)

@@ -153,12 +170,15 @@ case ${ax_cv_cxx_compiler_vendor} in
     AVX2)
       AC_DEFINE([AVX2],[1],[AVX2 intrinsics])
       SIMD_FLAGS='-mavx2 -mfma';;
-    AVX512|AVX512MIC|KNL)
+    AVX512)
       AC_DEFINE([AVX512],[1],[AVX512 intrinsics])
       SIMD_FLAGS='-mavx512f -mavx512pf -mavx512er -mavx512cd';;
-    IMCI|KNC)
+    KNC)
       AC_DEFINE([IMCI],[1],[IMCI intrinsics for Knights Corner])
       SIMD_FLAGS='';;
+    KNL)
+      AC_DEFINE([AVX512],[1],[AVX512 intrinsics])
+      SIMD_FLAGS='-march=knl';;
     GEN)
       AC_DEFINE([GENERIC_VEC],[1],[generic vector code])
       SIMD_FLAGS='';;

@@ -176,9 +196,6 @@ case ${ax_cv_cxx_compiler_vendor} in
     AVX)
       AC_DEFINE([AVX1],[1],[AVX intrinsics])
       SIMD_FLAGS='-mavx -xavx';;
-    AVXFMA4)
-      AC_DEFINE([AVXFMA4],[1],[AVX intrinsics with FMA4])
-      SIMD_FLAGS='-mavx -mfma';;
     AVXFMA)
       AC_DEFINE([AVXFMA],[1],[AVX intrinsics with FMA4])
       SIMD_FLAGS='-mavx -mfma';;

@@ -188,12 +205,12 @@ case ${ax_cv_cxx_compiler_vendor} in
     AVX512)
       AC_DEFINE([AVX512],[1],[AVX512 intrinsics])
       SIMD_FLAGS='-xcore-avx512';;
-    AVX512MIC|KNL)
-      AC_DEFINE([AVX512],[1],[AVX512 intrinsics for Knights Landing])
-      SIMD_FLAGS='-xmic-avx512';;
-    IMCI|KNC)
+    KNC)
       AC_DEFINE([IMCI],[1],[IMCI Intrinsics for Knights Corner])
       SIMD_FLAGS='';;
+    KNL)
+      AC_DEFINE([AVX512],[1],[AVX512 intrinsics for Knights Landing])
+      SIMD_FLAGS='-xmic-avx512';;
     GEN)
       AC_DEFINE([GENERIC_VEC],[1],[generic vector code])
       SIMD_FLAGS='';;

@@ -208,14 +225,18 @@ AM_CXXFLAGS="$SIMD_FLAGS $AM_CXXFLAGS"
 AM_CFLAGS="$SIMD_FLAGS $AM_CFLAGS"

 case ${ac_SIMD} in
-  AVX512|AVX512MIC|KNL)
+  AVX512|KNL)
     AC_DEFINE([TEST_ZMM],[1],[compile ZMM test]);;
   *)
     ;;
 esac

-############### precision selection
-AC_ARG_ENABLE([precision],[AC_HELP_STRING([--enable-precision=single|double],[Select default word size of Real])],[ac_PRECISION=${enable_precision}],[ac_PRECISION=double])
+############### Precision selection
+AC_ARG_ENABLE([precision],
+  [AC_HELP_STRING([--enable-precision=single|double],
+  [Select default word size of Real])],
+  [ac_PRECISION=${enable_precision}],[ac_PRECISION=double])

 case ${ac_PRECISION} in
   single)
     AC_DEFINE([GRID_DEFAULT_PRECISION_SINGLE],[1],[GRID_DEFAULT_PRECISION is SINGLE] )

@@ -226,25 +247,17 @@ case ${ac_PRECISION} in
 esac

 ############### communication type selection
-AC_ARG_ENABLE([comms],[AC_HELP_STRING([--enable-comms=none|mpi|mpi-auto|shmem],[Select communications])],[ac_COMMS=${enable_comms}],[ac_COMMS=none])
+AC_ARG_ENABLE([comms],[AC_HELP_STRING([--enable-comms=none|mpi|mpi-auto|mpi3|mpi3-auto|shmem],
+  [Select communications])],[ac_COMMS=${enable_comms}],[ac_COMMS=none])

 case ${ac_COMMS} in
   none)
     AC_DEFINE([GRID_COMMS_NONE],[1],[GRID_COMMS_NONE] )
     ;;
-  mpi-auto)
-    AC_DEFINE([GRID_COMMS_MPI],[1],[GRID_COMMS_MPI] )
-    LX_FIND_MPI
-    if test "x$have_CXX_mpi" = 'xno'; then AC_MSG_ERROR(["MPI not found"]); fi
-    AM_CXXFLAGS="$MPI_CXXFLAGS $AM_CXXFLAGS"
-    AM_CFLAGS="$MPI_CFLAGS $AM_CFLAGS"
-    AM_LDFLAGS="`echo $MPI_CXXLDFLAGS | sed -E 's/-l@<:@^ @:>@+//g'` $AM_LDFLAGS"
-    LIBS="`echo $MPI_CXXLDFLAGS | sed -E 's/-L@<:@^ @:>@+//g'` $LIBS"
-    ;;
-  mpi)
+  mpi|mpi-auto)
     AC_DEFINE([GRID_COMMS_MPI],[1],[GRID_COMMS_MPI] )
     ;;
-  mpi3)
+  mpi3|mpi3-auto)
     AC_DEFINE([GRID_COMMS_MPI3],[1],[GRID_COMMS_MPI3] )
     ;;
   shmem)

@@ -254,9 +267,23 @@ case ${ac_COMMS} in
     AC_MSG_ERROR([${ac_COMMS} unsupported --enable-comms option]);
     ;;
 esac
+case ${ac_COMMS} in
+  *-auto)
+    LX_FIND_MPI
+    if test "x$have_CXX_mpi" = 'xno'; then AC_MSG_ERROR(["MPI not found"]); fi
+    AM_CXXFLAGS="$MPI_CXXFLAGS $AM_CXXFLAGS"
+    AM_CFLAGS="$MPI_CFLAGS $AM_CFLAGS"
+    AM_LDFLAGS="`echo $MPI_CXXLDFLAGS | sed -E 's/-l@<:@^ @:>@+//g'` $AM_LDFLAGS"
+    LIBS="`echo $MPI_CXXLDFLAGS | sed -E 's/-L@<:@^ @:>@+//g'` $LIBS";;
+  *)
+    ;;
+esac

 AM_CONDITIONAL(BUILD_COMMS_SHMEM,[ test "X${ac_COMMS}X" == "XshmemX" ])
-AM_CONDITIONAL(BUILD_COMMS_MPI,[ test "X${ac_COMMS}X" == "XmpiX" || test "X${ac_COMMS}X" == "Xmpi-autoX" ])
-AM_CONDITIONAL(BUILD_COMMS_MPI3,[ test "X${ac_COMMS}X" == "Xmpi3X"] )
+AM_CONDITIONAL(BUILD_COMMS_MPI,
+  [ test "X${ac_COMMS}X" == "XmpiX" || test "X${ac_COMMS}X" == "Xmpi-autoX" ])
+AM_CONDITIONAL(BUILD_COMMS_MPI3,
+  [ test "X${ac_COMMS}X" == "Xmpi3X" || test "X${ac_COMMS}X" == "Xmpi3-autoX" ])
 AM_CONDITIONAL(BUILD_COMMS_NONE,[ test "X${ac_COMMS}X" == "XnoneX" ])

 ############### RNG selection

@@ -276,10 +303,11 @@ case ${ac_RNG} in
     ;;
 esac

-############### timer option
+############### Timer option
 AC_ARG_ENABLE([timers],[AC_HELP_STRING([--enable-timers],\
   [Enable system dependent high res timers])],\
   [ac_TIMERS=${enable_timers}],[ac_TIMERS=yes])

 case ${ac_TIMERS} in
   yes)
     AC_DEFINE([TIMERS_ON],[1],[TIMERS_ON] )

@@ -293,7 +321,9 @@ case ${ac_TIMERS} in
 esac

 ############### Chroma regression test
-AC_ARG_ENABLE([chroma],[AC_HELP_STRING([--enable-chroma],[Expect chroma compiled under c++11 ])],ac_CHROMA=yes,ac_CHROMA=no)
+AC_ARG_ENABLE([chroma],[AC_HELP_STRING([--enable-chroma],
+  [Expect chroma compiled under c++11 ])],ac_CHROMA=yes,ac_CHROMA=no)

 case ${ac_CHROMA} in
   yes|no)
     ;;

@@ -301,6 +331,7 @@ case ${ac_CHROMA} in
     AC_MSG_ERROR([${ac_CHROMA} unsupported --enable-chroma option]);
     ;;
 esac
+
 AM_CONDITIONAL(BUILD_CHROMA_REGRESSION,[ test "X${ac_CHROMA}X" == "XyesX" ])

 ############### Doxygen

@@ -334,35 +365,36 @@ AC_CONFIG_FILES(programs/Makefile)
 AC_CONFIG_FILES(programs/qed-fvol/Makefile)
 AC_OUTPUT

-echo "
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Summary of configuration for $PACKAGE v$VERSION
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 ----- PLATFORM ----------------------------------------
-- architecture (build)        : $build_cpu
-- os (build)                  : $build_os
-- architecture (target)       : $target_cpu
-- os (target)                 : $target_os
-- compiler vendor             : ${ax_cv_cxx_compiler_vendor}
-- compiler version            : ${ax_cv_gxx_version}
+architecture (build)          : $build_cpu
+os (build)                    : $build_os
+architecture (target)         : $target_cpu
+os (target)                   : $target_os
+compiler vendor               : ${ax_cv_cxx_compiler_vendor}
+compiler version              : ${ax_cv_gxx_version}
 ----- BUILD OPTIONS -----------------------------------
-- SIMD                        : ${ac_SIMD}
-- Threading                   : ${ac_openmp}
-- Communications type         : ${ac_COMMS}
-- Default precision           : ${ac_PRECISION}
-- RNG choice                  : ${ac_RNG}
-- GMP                         : `if test "x$have_gmp" = xtrue; then echo yes; else echo no; fi`
-- LAPACK                      : ${ac_LAPACK}
-- FFTW                        : `if test "x$have_fftw" = xtrue; then echo yes; else echo no; fi`
-- build DOXYGEN documentation : `if test "x$enable_doc" = xyes; then echo yes; else echo no; fi`
-- graphs and diagrams         : `if test "x$enable_dot" = xyes; then echo yes; else echo no; fi`
+SIMD                          : ${ac_SIMD}
+Threading                     : ${ac_openmp}
+Communications type           : ${ac_COMMS}
+Default precision             : ${ac_PRECISION}
+RNG choice                    : ${ac_RNG}
+GMP                           : `if test "x$have_gmp" = xtrue; then echo yes; else echo no; fi`
+LAPACK                        : ${ac_LAPACK}
+FFTW                          : `if test "x$have_fftw" = xtrue; then echo yes; else echo no; fi`
+build DOXYGEN documentation   : `if test "x$enable_doc" = xyes; then echo yes; else echo no; fi`
+graphs and diagrams           : `if test "x$enable_dot" = xyes; then echo yes; else echo no; fi`
 ----- BUILD FLAGS -------------------------------------
-- CXXFLAGS:
+CXXFLAGS:
 `echo ${AM_CXXFLAGS} ${CXXFLAGS} | tr ' ' '\n' | sed 's/^-/ -/g'`
-- LDFLAGS:
+LDFLAGS:
 `echo ${AM_LDFLAGS} ${LDFLAGS} | tr ' ' '\n' | sed 's/^-/ -/g'`
-- LIBS:
+LIBS:
 `echo ${LIBS} | tr ' ' '\n' | sed 's/^-/ -/g'`
--------------------------------------------------------
-"
+-------------------------------------------------------" > config.summary
+echo ""
+cat config.summary
+echo ""
@@ -145,7 +145,7 @@ public:
       if ( bcast != ptr ) {
         std::printf("inconsistent alloc pe %d %lx %lx \n",shmem_my_pe(),bcast,ptr);std::fflush(stdout);
-        BACKTRACEFILE();
+        //     BACKTRACEFILE();
         exit(0);
       }
       assert( bcast == (void *) ptr);

@@ -155,15 +155,6 @@ public:
   void deallocate(pointer __p, size_type) {
     shmem_free((void *)__p);
   }
-#elif defined(GRID_COMMS_MPI3)
-  pointer allocate(size_type __n, const void* _p= 0)
-  {
-#error "implement MPI3 windowed allocate"
-  }
-  void deallocate(pointer __p, size_type) {
-#error "implement MPI3 windowed allocate"
-  }
 #else
   pointer allocate(size_type __n, const void* _p= 0)
   {
lib/FFT.h | 81

@@ -30,7 +30,7 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
 #define _GRID_FFT_H_

 #ifdef HAVE_FFTW
-#include <fftw3.h>
+#include <Grid/fftw/fftw3.h>
 #endif

@@ -164,7 +164,9 @@ namespace Grid {
     template<class vobj>
     void FFT_dim(Lattice<vobj> &result,const Lattice<vobj> &source,int dim, int sign){
+#ifndef HAVE_FFTW
+      assert(0);
+#else
       conformable(result._grid,vgrid);
       conformable(source._grid,vgrid);

@@ -183,23 +185,12 @@ namespace Grid {
       typedef typename vobj::scalar_object sobj;
       typedef typename sobj::scalar_type scalar;

-      /*
-      std::cout << "FFT : vobj "<<demangle(typeid(vobj).name()) <<std::endl;
-      std::cout << "FFT : sobj "<<demangle(typeid(sobj).name()) <<std::endl;
-      std::cout << "FFT : scalar "<<demangle(typeid(scalar).name()) <<std::endl;
-      */
+      Lattice<sobj> pgbuf(&pencil_g);

-      Lattice<vobj> ssource(vgrid); ssource =source;
-      Lattice<sobj> pgsource(&pencil_g);
-      Lattice<sobj> pgresult(&pencil_g); pgresult=zero;
-
-#ifndef HAVE_FFTW
-      assert(0);
-#else
       typedef typename FFTW<scalar>::FFTW_scalar FFTW_scalar;
       typedef typename FFTW<scalar>::FFTW_plan   FFTW_plan;

       {
         int Ncomp = sizeof(sobj)/sizeof(scalar);
         int Nlow  = 1;
         for(int d=0;d<dim;d++){

@@ -221,8 +212,8 @@ namespace Grid {
       FFTW_plan p;
       {
-        FFTW_scalar *in = (FFTW_scalar *)&pgsource._odata[0];
-        FFTW_scalar *out= (FFTW_scalar *)&pgresult._odata[0];
+        FFTW_scalar *in = (FFTW_scalar *)&pgbuf._odata[0];
+        FFTW_scalar *out= (FFTW_scalar *)&pgbuf._odata[0];
         p = FFTW<scalar>::fftw_plan_many_dft(rank,n,howmany,
                                              in,inembed,
                                              istride,idist,

@@ -231,78 +222,60 @@ namespace Grid {
                                              sign,FFTW_ESTIMATE);
       }

-      double add,mul,fma;
-      FFTW<scalar>::fftw_flops(p,&add,&mul,&fma);
-      flops_call = add+mul+2.0*fma;
-
-      GridStopWatch timer;
-
       // Barrel shift and collect global pencil
+      std::vector<int> lcoor(Nd), gcoor(Nd);
+      result = source;
       for(int p=0;p<processors[dim];p++) {
         for(int idx=0;idx<sgrid->lSites();idx++) {
-          std::vector<int> lcoor(Nd);
           sgrid->LocalIndexToLocalCoor(idx,lcoor);
           sobj s;
-          peekLocalSite(s,ssource,lcoor);
+          peekLocalSite(s,result,lcoor);
           lcoor[dim]+=p*L;
-          pokeLocalSite(s,pgsource,lcoor);
+          pokeLocalSite(s,pgbuf,lcoor);
         }
-        ssource = Cshift(ssource,dim,L);
+        result = Cshift(result,dim,L);
       }

       // Loop over orthog coords
       int NN=pencil_g.lSites();
-      GridStopWatch Timer;
-      Timer.Start();
-PARALLEL_FOR_LOOP
+      GridStopWatch timer;
+      timer.Start();
+      //PARALLEL_FOR_LOOP
       for(int idx=0;idx<NN;idx++) {
         std::vector<int> lcoor(Nd);
         pencil_g.LocalIndexToLocalCoor(idx,lcoor);

         if ( lcoor[dim] == 0 ) {  // restricts loop to plane at lcoor[dim]==0
-          FFTW_scalar *in = (FFTW_scalar *)&pgsource._odata[idx];
-          FFTW_scalar *out= (FFTW_scalar *)&pgresult._odata[idx];
+          FFTW_scalar *in = (FFTW_scalar *)&pgbuf._odata[idx];
+          FFTW_scalar *out= (FFTW_scalar *)&pgbuf._odata[idx];
           FFTW<scalar>::fftw_execute_dft(p,in,out);
         }
       }
+      timer.Stop();

-      Timer.Stop();
-      usec += Timer.useconds();
+      // performance counting
+      double add,mul,fma;
+      FFTW<scalar>::fftw_flops(p,&add,&mul,&fma);
+      flops_call = add+mul+2.0*fma;
+      usec += timer.useconds();
       flops+= flops_call*NN;

       // writing out result
       int pc = processor_coor[dim];
       for(int idx=0;idx<sgrid->lSites();idx++) {
         std::vector<int> lcoor(Nd);
         sgrid->LocalIndexToLocalCoor(idx,lcoor);
-        std::vector<int> gcoor = lcoor;
+        // extract the result
+        gcoor = lcoor;
         sobj s;
         gcoor[dim] = lcoor[dim]+L*pc;
-        peekLocalSite(s,pgresult,gcoor);
+        peekLocalSite(s,pgbuf,gcoor);
         s = s * div;
         pokeLocalSite(s,result,lcoor);
       }

+      // destroying plan
       FFTW<scalar>::fftw_destroy_plan(p);
     }
 #endif
   }
 };
 }
 #endif
lib/Init.cc | 16

@@ -195,14 +195,17 @@ std::string GridCmdVectorIntToString(const std::vector<int> & vec){
 /////////////////////////////////////////////////////////
 //
 /////////////////////////////////////////////////////////
+static int Grid_is_initialised = 0;
+
 void Grid_init(int *argc,char ***argv)
 {
-  GridLogger::StopWatch.Start();
-
   CartesianCommunicator::Init(argc,argv);

   // Parse command line args.
+
+  GridLogger::StopWatch.Start();
+
   std::string arg;
   std::vector<std::string> logstreams;
   std::string defaultLog("Error,Warning,Message,Performance");

@@ -240,11 +243,14 @@ void Grid_init(int *argc,char ***argv)
   if( GridCmdOptionExists(*argv,*argv+*argc,"--lebesgue") ){
     LebesgueOrder::UseLebesgueOrder=1;
   }
   if( GridCmdOptionExists(*argv,*argv+*argc,"--cacheblocking") ){
     arg= GridCmdOptionPayload(*argv,*argv+*argc,"--cacheblocking");
     GridCmdOptionIntVector(arg,LebesgueOrder::Block);
   }
+  if( GridCmdOptionExists(*argv,*argv+*argc,"--timestamp") ){
+    GridLogTimestamp(1);
+  }

   GridParseLayout(*argv,*argc,
                   Grid_default_latt,
                   Grid_default_mpi);
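The options parsed above are ordinary command-line flags accepted by any Grid executable. A sketch of how they might be passed, assuming the `Benchmark_dwf` binary built by this repository and the dot-separated integer-vector syntax expected by `GridCmdOptionIntVector`:

``` bash
./Benchmark_dwf --lebesgue --cacheblocking 4.4.4.4 --timestamp
```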
@@ -298,12 +304,14 @@ void Grid_init(int *argc,char ***argv)
   std::cout << "GNU General Public License for more details."<<std::endl;
   std::cout << COL_BACKGROUND <<std::endl;
   std::cout << std::endl;
+
+  Grid_is_initialised = 1;
 }

 void Grid_finalize(void)
 {
-#ifdef GRID_COMMS_MPI
+#if defined (GRID_COMMS_MPI) || defined (GRID_COMMS_MPI3)
   MPI_Finalize();
   Grid_unquiesce_nodes();
 #endif
@@ -33,6 +33,7 @@ namespace Grid {

   void Grid_init(int *argc,char ***argv);
   void Grid_finalize(void);
+
   // internal, controlled with --handle
   void Grid_sa_signal_handler(int sig,siginfo_t *si,void * ptr);
   void Grid_debug_handler_init(void);

@@ -44,6 +45,7 @@ namespace Grid {
   const std::vector<int> &GridDefaultMpi(void);
   const int &GridThreads(void) ;
   void GridSetThreads(int t) ;
+  void GridLogTimestamp(int);

   // Common parsing chores
   std::string GridCmdOptionPayload(char ** begin, char ** end, const std::string & option);
@@ -49,8 +49,13 @@ namespace Grid {
 }

 GridStopWatch Logger::StopWatch;
+int Logger::timestamp;
 std::ostream Logger::devnull(0);

+void GridLogTimestamp(int on){
+  Logger::Timestamp(on);
+}
+
 Colours GridLogColours(0);
 GridLogger GridLogError(1, "Error", GridLogColours, "RED");
 GridLogger GridLogWarning(1, "Warning", GridLogColours, "YELLOW");

@@ -88,7 +93,7 @@ void GridLogConfigure(std::vector<std::string> &logstreams) {
 ////////////////////////////////////////////////////////////
 void Grid_quiesce_nodes(void) {
   int me = 0;
-#ifdef GRID_COMMS_MPI
+#if defined(GRID_COMMS_MPI) || defined(GRID_COMMS_MPI3)
   MPI_Comm_rank(MPI_COMM_WORLD, &me);
 #endif
 #ifdef GRID_COMMS_SHMEM
lib/Log.h | 23

@@ -39,8 +39,9 @@

 namespace Grid {

 //////////////////////////////////////////////////////////////////////////////////////////////////
 // Dress the output; use std::chrono for time stamping via the StopWatch class
+int Rank(void); // used for early stage debug before library init
 //////////////////////////////////////////////////////////////////////////////////////////////////

 class Colours{

@@ -55,7 +56,6 @@ public:
   void Active(bool activate){
     is_active=activate;
-
     if (is_active){
       colour["BLACK"] ="\033[30m";
       colour["RED"]   ="\033[31m";

@@ -77,10 +77,7 @@ public:
       colour["WHITE"] ="";
       colour["NORMAL"]="";
     }
-
-
-};
+};
 };

@@ -88,6 +85,7 @@ class Logger {
 protected:
   Colours &Painter;
   int active;
+  static int timestamp;
   std::string name, topName;
   std::string COLOUR;

@@ -99,8 +97,7 @@ public:
   std::string evidence() {return Painter.colour["YELLOW"];}
   std::string colour()   {return Painter.colour[COLOUR];}

-  Logger(std::string topNm, int on, std::string nm, Colours& col_class, std::string col)
-    : active(on),
+  Logger(std::string topNm, int on, std::string nm, Colours& col_class, std::string col) : active(on),
   name(nm),
   topName(topNm),
   Painter(col_class),

@@ -108,16 +105,20 @@ public:
   void Active(int on) {active = on;};
   int  isActive(void) {return active;};
+  static void Timestamp(int on) {timestamp = on;};

   friend std::ostream& operator<< (std::ostream& stream, Logger& log){
     if ( log.active ) {
+      stream << log.background()<< log.topName << log.background()<< " : ";
+      stream << log.colour() <<std::setw(14) << std::left << log.name << log.background() << " : ";
+      if ( log.timestamp ) {
         StopWatch.Stop();
         GridTime now = StopWatch.Elapsed();
         StopWatch.Start();
-      stream << log.background()<< log.topName << log.background()<< " : ";
-      stream << log.colour() <<std::setw(14) << std::left << log.name << log.background() << " : ";
-      stream << log.evidence()<< now << log.background() << " : " << log.colour();
+        stream << log.evidence()<< now << log.background() << " : " ;
+      }
+      stream << log.colour();
       return stream;
     } else {
       return devnull;

@@ -150,7 +151,7 @@ extern void * Grid_backtrace_buffer[_NBACKTRACE];

 #define BACKTRACEFILE() {\
   char string[20]; \
-  std::sprintf(string,"backtrace.%d",Rank()); \
+  std::sprintf(string,"backtrace.%d",CartesianCommunicator::RankWorld()); \
   std::FILE * fp = std::fopen(string,"w"); \
   BACKTRACEFP(fp)\
   std::fclose(fp); \
@@ -1,18 +1,22 @@
 extra_sources=
 if BUILD_COMMS_MPI
   extra_sources+=communicator/Communicator_mpi.cc
+  extra_sources+=communicator/Communicator_base.cc
 endif

 if BUILD_COMMS_MPI3
   extra_sources+=communicator/Communicator_mpi3.cc
+  extra_sources+=communicator/Communicator_base.cc
 endif

 if BUILD_COMMS_SHMEM
   extra_sources+=communicator/Communicator_shmem.cc
+  extra_sources+=communicator/Communicator_base.cc
 endif

 if BUILD_COMMS_NONE
   extra_sources+=communicator/Communicator_none.cc
+  extra_sources+=communicator/Communicator_base.cc
 endif

 #
267
lib/Stencil.h
267
lib/Stencil.h
@ -70,20 +70,20 @@
|
||||
|
||||
namespace Grid {
|
||||
|
||||
template<class vobj,class cobj,class compressor> void
|
||||
Gather_plane_simple_table_compute (const Lattice<vobj> &rhs,commVector<cobj> &buffer,int dimension,int plane,int cbmask,compressor &compress, int off,std::vector<std::pair<int,int> >& table)
|
||||
inline void Gather_plane_simple_table_compute (GridBase *grid,int dimension,int plane,int cbmask,
|
||||
int off,std::vector<std::pair<int,int> > & table)
|
||||
{
|
||||
table.resize(0);
|
||||
int rd = rhs._grid->_rdimensions[dimension];
|
||||
int rd = grid->_rdimensions[dimension];
|
||||
|
||||
if ( !rhs._grid->CheckerBoarded(dimension) ) {
|
||||
if ( !grid->CheckerBoarded(dimension) ) {
|
||||
cbmask = 0x3;
|
||||
}
|
||||
int so= plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
|
||||
int e1=rhs._grid->_slice_nblock[dimension];
|
||||
int e2=rhs._grid->_slice_block[dimension];
|
||||
int so= plane*grid->_ostride[dimension]; // base offset for start of plane
|
||||
int e1=grid->_slice_nblock[dimension];
|
||||
int e2=grid->_slice_block[dimension];
|
||||
|
||||
int stride=rhs._grid->_slice_stride[dimension];
|
||||
int stride=grid->_slice_stride[dimension];
|
||||
if ( cbmask == 0x3 ) {
|
||||
table.resize(e1*e2);
|
||||
for(int n=0;n<e1;n++){
|
||||
@ -99,7 +99,7 @@ Gather_plane_simple_table_compute (const Lattice<vobj> &rhs,commVector<cobj> &bu
|
||||
for(int n=0;n<e1;n++){
|
||||
for(int b=0;b<e2;b++){
|
||||
int o = n*stride;
|
||||
int ocb=1<<rhs._grid->CheckerBoardFromOindexTable(o+b);
|
||||
int ocb=1<<grid->CheckerBoardFromOindexTable(o+b);
|
||||
if ( ocb &cbmask ) {
|
||||
table[bo]=std::pair<int,int>(bo,o+b); bo++;
|
||||
}
|
||||
@ -109,8 +109,7 @@ Gather_plane_simple_table_compute (const Lattice<vobj> &rhs,commVector<cobj> &bu
|
||||
}
|
||||
|
||||
template<class vobj,class cobj,class compressor> void
|
||||
Gather_plane_simple_table (std::vector<std::pair<int,int> >& table,const Lattice<vobj> &rhs,commVector<cobj> &buffer,
|
||||
compressor &compress, int off,int so)
|
||||
Gather_plane_simple_table (std::vector<std::pair<int,int> >& table,const Lattice<vobj> &rhs,cobj *buffer,compressor &compress, int off,int so)
|
||||
{
|
||||
PARALLEL_FOR_LOOP
|
||||
for(int i=0;i<table.size();i++){
|
||||
@ -118,19 +117,6 @@ PARALLEL_FOR_LOOP
|
||||
}
|
||||
}
|
||||
|
||||
template<class vobj,class cobj,class compressor> void
|
||||
Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,int dimension,int plane,int cbmask,compressor &compress, int off,
|
||||
double &t_table ,double & t_data )
|
||||
{
|
||||
std::vector<std::pair<int,int> > table;
|
||||
Gather_plane_simple_table_compute (rhs, buffer,dimension,plane,cbmask,compress,off,table);
|
||||
int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
|
||||
Gather_plane_simple_table (table,rhs,buffer,compress,off,so);
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
struct StencilEntry {
|
||||
uint64_t _offset;
|
||||
uint64_t _byte_offset;
|
||||
@ -143,6 +129,7 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
class CartesianStencil { // Stencil runs along coordinate axes only; NO diagonal fill in.
|
||||
public:
|
||||
|
||||
typedef CartesianCommunicator::CommsRequest_t CommsRequest_t;
|
||||
typedef uint32_t StencilInteger;
|
||||
typedef typename cobj::vector_type vector_type;
|
||||
typedef typename cobj::scalar_type scalar_type;
|
||||
@ -158,7 +145,6 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
Integer to_rank;
|
||||
Integer from_rank;
|
||||
Integer bytes;
|
||||
volatile Integer done;
|
||||
};
|
||||
|
||||
std::vector<Packet> Packets;
|
||||
@ -166,81 +152,53 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
int face_table_computed;
|
||||
std::vector<std::vector<std::pair<int,int> > > face_table ;
|
||||
|
||||
#define SEND_IMMEDIATE
|
||||
#define SERIAL_SENDS
|
||||
|
||||
void AddPacket(void *xmit,void * rcv, Integer to,Integer from,Integer bytes){
|
||||
#ifdef SEND_IMMEDIATE
|
||||
commtime-=usecond();
|
||||
_grid->SendToRecvFrom(xmit,to,rcv,from,bytes);
|
||||
commtime+=usecond();
|
||||
#endif
|
||||
Packet p;
|
||||
p.send_buf = xmit;
|
||||
p.recv_buf = rcv;
|
||||
p.to_rank = to;
|
||||
p.from_rank= from;
|
||||
p.bytes = bytes;
|
||||
p.done = 0;
|
||||
comms_bytes+=2.0*bytes;
|
||||
Packets.push_back(p);
|
||||
|
||||
}
|
||||
|
||||
#ifdef SERIAL_SENDS
|
||||
void Communicate(void ) {
|
||||
void CommunicateBegin(std::vector<std::vector<CommsRequest_t> > &reqs)
|
||||
{
|
||||
reqs.resize(Packets.size());
|
||||
commtime-=usecond();
|
||||
for(int i=0;i<Packets.size();i++){
|
||||
#ifndef SEND_IMMEDIATE
|
||||
_grid->SendToRecvFrom(
|
||||
_grid->StencilSendToRecvFromBegin(reqs[i],
|
||||
Packets[i].send_buf,
|
||||
Packets[i].to_rank,
|
||||
Packets[i].recv_buf,
|
||||
Packets[i].from_rank,
|
||||
Packets[i].bytes);
|
||||
#endif
|
||||
Packets[i].done = 1;
|
||||
/*
|
||||
}else{
|
||||
_grid->SendToRecvFromBegin(reqs[i],
|
||||
Packets[i].send_buf,
|
||||
Packets[i].to_rank,
|
||||
Packets[i].recv_buf,
|
||||
Packets[i].from_rank,
|
||||
Packets[i].bytes);
|
||||
}
|
||||
*/
|
||||
}
|
||||
commtime+=usecond();
|
||||
}
|
||||
#else
|
||||
void Communicate(void ) {
|
||||
typedef CartesianCommunicator::CommsRequest_t CommsRequest_t;
|
||||
std::vector<std::vector<CommsRequest_t> > reqs(Packets.size());
|
||||
void CommunicateComplete(std::vector<std::vector<CommsRequest_t> > &reqs)
|
||||
{
|
||||
commtime-=usecond();
|
||||
const int concurrency=2;
|
||||
for(int i=0;i<Packets.size();i+=concurrency){
|
||||
for(int ii=0;ii<concurrency;ii++){
|
||||
int j = i+ii;
|
||||
if ( j<Packets.size() ) {
|
||||
#ifndef SEND_IMMEDIATE
|
||||
_grid->SendToRecvFromBegin(reqs[j],
|
||||
Packets[j].send_buf,
|
||||
Packets[j].to_rank,
|
||||
Packets[j].recv_buf,
|
||||
Packets[j].from_rank,
|
||||
Packets[j].bytes);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
for(int ii=0;ii<concurrency;ii++){
|
||||
int j = i+ii;
|
||||
if ( j<Packets.size() ) {
|
||||
#ifndef SEND_IMMEDIATE
|
||||
_grid->SendToRecvFromComplete(reqs[i]);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
for(int ii=0;ii<concurrency;ii++){
|
||||
int j = i+ii;
|
||||
if ( j<Packets.size() ) {
|
||||
Packets[j].done = 1;
|
||||
}
|
||||
}
|
||||
|
||||
for(int i=0;i<Packets.size();i++){
|
||||
// if( ShmDirectCopy )
|
||||
_grid->StencilSendToRecvFromComplete(reqs[i]);
|
||||
// else
|
||||
// _grid->SendToRecvFromComplete(reqs[i]);
|
||||
}
|
||||
commtime+=usecond();
|
||||
}
|
||||
#endif
|
||||
|
||||
///////////////////////////////////////////
|
||||
// Simd merge queue for asynch comms
|
||||
@ -260,36 +218,19 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
m.rpointers= rpointers;
|
||||
m.buffer_size = buffer_size;
|
||||
m.packet_id = packet_id;
|
||||
#ifdef SEND_IMMEDIATE
|
||||
mergetime-=usecond();
|
||||
PARALLEL_FOR_LOOP
|
||||
for(int o=0;o<m.buffer_size;o++){
|
||||
merge1(m.mpointer[o],m.rpointers,o);
|
||||
}
|
||||
mergetime+=usecond();
|
||||
#else
|
||||
Mergers.push_back(m);
|
||||
#endif
|
||||
|
||||
}
|
||||
|
||||
void CommsMerge(void ) {
|
||||
//PARALLEL_NESTED_LOOP2
|
||||
|
||||
for(int i=0;i<Mergers.size();i++){
|
||||
|
||||
spintime-=usecond();
|
||||
int packet_id = Mergers[i].packet_id;
|
||||
while(! Packets[packet_id].done ); // spin for completion
|
||||
spintime+=usecond();
|
||||
|
||||
#ifndef SEND_IMMEDIATE
|
||||
mergetime-=usecond();
|
||||
PARALLEL_FOR_LOOP
|
||||
for(int o=0;o<Mergers[i].buffer_size;o++){
|
||||
merge1(Mergers[i].mpointer[o],Mergers[i].rpointers,o);
|
||||
}
|
||||
mergetime+=usecond();
|
||||
#endif
|
||||
|
||||
}
|
||||
}
|
||||
@ -312,24 +253,19 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
// Flat vector, change layout for cache friendly.
|
||||
Vector<StencilEntry> _entries;
|
||||
|
||||
inline StencilEntry * GetEntry(int &ptype,int point,int osite) { ptype = _permute_type[point]; return & _entries[point+_npoints*osite]; }
|
||||
|
||||
void PrecomputeByteOffsets(void){
|
||||
for(int i=0;i<_entries.size();i++){
|
||||
if( _entries[i]._is_local ) {
|
||||
_entries[i]._byte_offset = _entries[i]._offset*sizeof(vobj);
|
||||
} else {
|
||||
// PrecomputeByteOffsets [5] 16384/32768 140735768678528 140735781261056 2581581952
|
||||
_entries[i]._byte_offset = _entries[i]._offset*sizeof(cobj);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
inline uint64_t Touch(int ent) {
|
||||
// _mm_prefetch((char *)&_entries[ent],_MM_HINT_T0);
|
||||
}
|
||||
inline StencilEntry * GetEntry(int &ptype,int point,int osite) { ptype = _permute_type[point]; return & _entries[point+_npoints*osite]; }
|
||||
inline uint64_t GetInfo(int &ptype,int &local,int &perm,int point,int ent,uint64_t base) {
|
||||
uint64_t cbase = (uint64_t)&comm_buf[0];
|
||||
uint64_t cbase = (uint64_t)&u_recv_buf_p[0];
|
||||
local = _entries[ent]._is_local;
|
||||
perm = _entries[ent]._permute;
|
||||
if (perm) ptype = _permute_type[point];
|
||||
@ -340,20 +276,33 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
}
|
||||
}
|
||||
inline uint64_t GetPFInfo(int ent,uint64_t base) {
|
||||
uint64_t cbase = (uint64_t)&comm_buf[0];
|
||||
uint64_t cbase = (uint64_t)&u_recv_buf_p[0];
|
||||
int local = _entries[ent]._is_local;
|
||||
if (local) return base + _entries[ent]._byte_offset;
|
||||
else return cbase + _entries[ent]._byte_offset;
|
||||
}
|
||||
|
||||
// Comms buffers
|
||||
std::vector<commVector<scalar_object> > u_simd_send_buf;
|
||||
std::vector<commVector<scalar_object> > u_simd_recv_buf;
|
||||
commVector<cobj> u_send_buf;
|
||||
commVector<cobj> comm_buf;
|
||||
///////////////////////////////////////////////////////////
|
||||
// Unified Comms buffers for all directions
|
||||
///////////////////////////////////////////////////////////
|
||||
// Vectors that live on the symmetric heap in case of SHMEM
|
||||
// std::vector<commVector<scalar_object> > u_simd_send_buf_hide;
|
||||
// std::vector<commVector<scalar_object> > u_simd_recv_buf_hide;
|
||||
// commVector<cobj> u_send_buf_hide;
|
||||
// commVector<cobj> u_recv_buf_hide;
|
||||
|
||||
// These are used; either SHM objects or refs to the above symmetric heap vectors
|
||||
// depending on comms target
|
||||
cobj* u_recv_buf_p;
|
||||
cobj* u_send_buf_p;
|
||||
std::vector<scalar_object *> u_simd_send_buf;
|
||||
std::vector<scalar_object *> u_simd_recv_buf;
|
||||
|
||||
int u_comm_offset;
|
||||
int _unified_buffer_size;
|
||||
|
||||
cobj *CommBuf(void) { return u_recv_buf_p; }
|
||||
|
||||
/////////////////////////////////////////
|
||||
// Timing info; ugly; possibly temporary
|
||||
/////////////////////////////////////////
|
||||
@ -435,7 +384,6 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
int i = ii; // reverse direction to get SIMD comms done first
|
||||
int point = i;
|
||||
|
||||
|
||||
int dimension = directions[i];
|
||||
int displacement = distances[i];
|
||||
int shift = displacement;
|
||||
@ -482,18 +430,25 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
}
|
||||
}
|
||||
}
|
||||
u_send_buf.resize(_unified_buffer_size);
|
||||
comm_buf.resize(_unified_buffer_size);
|
||||
|
||||
PrecomputeByteOffsets();
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////
|
||||
// Try to allocate for receiving in a shared memory region, fall back to buffer
|
||||
/////////////////////////////////////////////////////////////////////////////////
|
||||
const int Nsimd = grid->Nsimd();
|
||||
|
||||
_grid->ShmBufferFreeAll();
|
||||
|
||||
u_simd_send_buf.resize(Nsimd);
|
||||
u_simd_recv_buf.resize(Nsimd);
|
||||
|
||||
u_send_buf_p=(cobj *)_grid->ShmBufferMalloc(_unified_buffer_size*sizeof(cobj));
|
||||
u_recv_buf_p=(cobj *)_grid->ShmBufferMalloc(_unified_buffer_size*sizeof(cobj));
|
||||
for(int l=0;l<Nsimd;l++){
|
||||
u_simd_send_buf[l].resize(_unified_buffer_size);
|
||||
u_simd_recv_buf[l].resize(_unified_buffer_size);
|
||||
u_simd_recv_buf[l] = (scalar_object *)_grid->ShmBufferMalloc(_unified_buffer_size*sizeof(scalar_object));
|
||||
u_simd_send_buf[l] = (scalar_object *)_grid->ShmBufferMalloc(_unified_buffer_size*sizeof(scalar_object));
|
||||
}
|
||||
|
||||
PrecomputeByteOffsets();
|
||||
}
|
||||
|
||||
void Local (int point, int dimension,int shiftpm,int cbmask)
|
||||
@ -717,38 +672,22 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
template<class compressor>
|
||||
void HaloExchange(const Lattice<vobj> &source,compressor &compress)
|
||||
template<class compressor> void HaloExchange(const Lattice<vobj> &source,compressor &compress)
|
||||
{
|
||||
std::vector<std::vector<CommsRequest_t> > reqs;
|
||||
calls++;
|
||||
Mergers.resize(0);
|
||||
Packets.resize(0);
|
||||
_grid->StencilBarrier();
|
||||
HaloGather(source,compress);
|
||||
this->Communicate();
|
||||
this->CommunicateBegin(reqs);
|
||||
_grid->StencilBarrier();
|
||||
this->CommunicateComplete(reqs);
|
||||
_grid->StencilBarrier();
|
||||
CommsMerge(); // spins
|
||||
}
|
||||
#if 0
|
||||
// Overlapping comms and compute typically slows down compute and is useless
|
||||
// unless memory bandwidth greatly exceeds network
|
||||
template<class compressor>
|
||||
std::thread HaloExchangeBegin(const Lattice<vobj> &source,compressor &compress) {
|
||||
Mergers.resize(0);
|
||||
Packets.resize(0);
|
||||
HaloGather(source,compress);
|
||||
return std::thread([&] { this->Communicate(); });
|
||||
}
|
||||
void HaloExchangeComplete(std::thread &thr)
|
||||
{
|
||||
CommsMerge(); // spins
|
||||
jointime-=usecond();
|
||||
thr.join();
|
||||
jointime+=usecond();
|
||||
}
|
||||
#endif
|
||||
template<class compressor>
|
||||
void HaloGatherDir(const Lattice<vobj> &source,compressor &compress,int point,int & face_idx)
|
||||
|
||||
template<class compressor> void HaloGatherDir(const Lattice<vobj> &source,compressor &compress,int point,int & face_idx)
|
||||
{
|
||||
int dimension = _directions[point];
|
||||
int displacement = _distances[point];
|
||||
@ -806,7 +745,6 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
|
||||
assert(source._grid==_grid);
|
||||
halogtime-=usecond();
|
||||
|
||||
assert (comm_buf.size() == _unified_buffer_size );
|
||||
u_comm_offset=0;
|
||||
|
||||
// Gather all comms buffers
|
||||
@@ -863,37 +801,48 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
      if ( !face_table_computed ) {
        t_table-=usecond();
        face_table.resize(face_idx+1);
        Gather_plane_simple_table_compute (rhs,u_send_buf,dimension,sx,cbmask,compress,u_comm_offset,face_table[face_idx]);
        Gather_plane_simple_table_compute ((GridBase *)_grid,dimension,sx,cbmask,u_comm_offset,
                                           face_table[face_idx]);
        t_table+=usecond();
      }
      t_data-=usecond();
      Gather_plane_simple_table (face_table[face_idx],rhs,u_send_buf,compress,u_comm_offset,so);
      face_idx++;
      t_data+=usecond();
      gathertime+=usecond();

      // Gather_plane_simple_stencil (rhs,u_send_buf,dimension,sx,cbmask,compress,u_comm_offset,t_table,t_data);

      int rank           = _grid->_processor;
      int recv_from_rank;
      int xmit_to_rank;
      _grid->ShiftedRanks(dimension,comm_proc,xmit_to_rank,recv_from_rank);

      assert (xmit_to_rank   != _grid->ThisRank());
      assert (recv_from_rank != _grid->ThisRank());

      // FIXME Implement asynchronous send & also avoid buffer copy
      AddPacket((void *)&u_send_buf[u_comm_offset],
                (void *) &comm_buf[u_comm_offset],
      /////////////////////////////////////////////////////////
      // try the direct copy if possible
      /////////////////////////////////////////////////////////

      cobj *send_buf = (cobj *)_grid->ShmBufferTranslate(xmit_to_rank,u_recv_buf_p);
      if ( send_buf==NULL ) {
        send_buf = u_send_buf_p;
      }
      // std::cout << " send_bufs "<<std::hex<< send_buf <<" ubp "<<u_send_buf_p <<std::dec<<std::endl;
      t_data-=usecond();
      assert(u_send_buf_p!=NULL);
      assert(send_buf!=NULL);
      Gather_plane_simple_table (face_table[face_idx],rhs,send_buf,compress,u_comm_offset,so); face_idx++;
      t_data+=usecond();

      AddPacket((void *)&send_buf[u_comm_offset],
                (void *)&u_recv_buf_p[u_comm_offset],
                xmit_to_rank,
                recv_from_rank,
                bytes);

      gathertime+=usecond();
      u_comm_offset+=words;
    }
  }
}
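The direct-copy branch above depends on translating a local pointer into a peer's view of the same shared region. A hedged sketch of that arithmetic, assuming every rank's shared segment has an identical layout (illustrative names, not Grid's API):

```cpp
#include <cstdint>

// Rebase a pointer into my shared segment onto the peer's segment: the byte
// offset within the region is the same on both sides by construction.
void *translate(void *local_p, void *my_base, void *peer_base) {
  std::uintptr_t offset = (std::uintptr_t)local_p - (std::uintptr_t)my_base;
  return (void *)((std::uintptr_t)peer_base + offset);
}
```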
  template<class compressor>
  void GatherSimd(const Lattice<vobj> &rhs,int dimension,int shift,int cbmask,compressor &compress,int & face_idx)
  {
@@ -974,10 +923,6 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
      auto rp = &u_simd_recv_buf[i       ][u_comm_offset];
      auto sp = &u_simd_send_buf[nbr_lane][u_comm_offset];

      void *vrp = (void *)rp;
      void *vsp = (void *)sp;

      if(nbr_proc){

        int recv_from_rank;
@@ -985,9 +930,17 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i

        _grid->ShiftedRanks(dimension,nbr_proc,xmit_to_rank,recv_from_rank);

        AddPacket( vsp,vrp,xmit_to_rank,recv_from_rank,bytes);
        scalar_object *shm = (scalar_object *) _grid->ShmBufferTranslate(recv_from_rank,sp);
        // if ((ShmDirectCopy==0)||(shm==NULL)) {
        if (shm==NULL) {
          shm = rp;
        }

        rpointers[i] = rp;
        // if Direct, StencilSendToRecvFrom will suppress copy to a peer on node
        // assuming above pointer flip
        AddPacket((void *)sp,(void *)rp,xmit_to_rank,recv_from_rank,bytes);

        rpointers[i] = shm;

      } else {

@@ -996,7 +949,7 @@ Gather_plane_simple_stencil (const Lattice<vobj> &rhs,commVector<cobj> &buffer,i
      }
    }

    AddMerge(&comm_buf[u_comm_offset],rpointers,buffer_size,Packets.size()-1);
    AddMerge(&u_recv_buf_p[u_comm_offset],rpointers,buffer_size,Packets.size()-1);

    u_comm_offset +=buffer_size;
  }
@@ -127,6 +127,22 @@ class GridThread {
    ThreadBarrier();
  };

  static void bcopy(const void *src, void *dst, size_t len) {
#ifdef GRID_OMP
#pragma omp parallel
    {
      const char *c_src  = (char *) src;
      char       *c_dest = (char *) dst;
      int me,mywork,myoff;
      GridThread::GetWorkBarrier(len,me,mywork,myoff);
      bcopy(&c_src[myoff],&c_dest[myoff],mywork);
    }
#else
    bcopy(src,dst,len);
#endif
  }

};

}
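The `bcopy` above splits one copy across the thread team via Grid's work-partitioning helper. A hedged standalone analogue using only standard OpenMP (no Grid internals):

```cpp
#include <cstring>
#include <omp.h>

// Split a memcpy across OpenMP threads; each thread copies its own
// contiguous slice, mirroring GetWorkBarrier's me/mywork/myoff split.
static void parallel_copy(const void *src, void *dst, size_t len) {
#pragma omp parallel
  {
    int nthr = omp_get_num_threads();
    int me   = omp_get_thread_num();
    size_t chunk = (len + nthr - 1) / nthr;        // ceiling division
    size_t off   = (size_t)me * chunk;
    if (off < len) {
      size_t mywork = (off + chunk <= len) ? chunk : len - off;
      std::memcpy((char *)dst + off, (const char *)src + off, mywork);
    }
  }
}
```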
@@ -31,7 +31,11 @@ Author: paboyle <paboyle@ph.ed.ac.uk>

#include <string.h> //memset
#ifdef USE_LAPACK
#include <lapacke.h>
void LAPACK_dstegr(char *jobz, char *range, int *n, double *d, double *e,
                   double *vl, double *vu, int *il, int *iu, double *abstol,
                   int *m, double *w, double *z, int *ldz, int *isuppz,
                   double *work, int *lwork, int *iwork, int *liwork,
                   int *info);
#endif
#include "DenseMatrix.h"
#include "EigenSort.h"
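For orientation, the routine declared above is LAPACK's symmetric-tridiagonal eigensolver. A hedged usage sketch via the LAPACKE convenience wrapper (check the exact signature against your `lapacke.h`; the matrix values here are purely illustrative):

```cpp
#include <cstdio>
#include <lapacke.h>

// Solve the eigenproblem of a small symmetric tridiagonal matrix with dstegr.
int main() {
  const lapack_int n = 4;
  double d[4] = {2, 2, 2, 2};        // diagonal
  double e[4] = {-1, -1, -1, 0};     // off-diagonal (length n; last entry is workspace)
  double w[4], z[16];                // eigenvalues, eigenvectors
  lapack_int m, isuppz[8];
  lapack_int info = LAPACKE_dstegr(LAPACK_COL_MAJOR, 'V', 'A', n, d, e,
                                   0, 0, 0, 0, 0.0, &m, w, z, n, isuppz);
  if (info == 0)
    for (lapack_int i = 0; i < m; i++) printf("lambda[%d] = %f\n", (int)i, w[i]);
  return (int)info;
}
```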
@@ -77,7 +77,7 @@ public:
    // GridCartesian / GridRedBlackCartesian
    ////////////////////////////////////////////////////////////////
    virtual int CheckerBoarded(int dim)=0;
    virtual int CheckerBoard(std::vector<int> site)=0;
    virtual int CheckerBoard(std::vector<int> &site)=0;
    virtual int CheckerBoardDestination(int source_cb,int shift,int dim)=0;
    virtual int CheckerBoardShift(int source_cb,int dim,int shift,int osite)=0;
    virtual int CheckerBoardShiftForCB(int source_cb,int dim,int shift,int cb)=0;
@@ -49,7 +49,7 @@ public:
    virtual int CheckerBoarded(int dim){
      return 0;
    }
    virtual int CheckerBoard(std::vector<int> site){
    virtual int CheckerBoard(std::vector<int> &site){
      return 0;
    }
    virtual int CheckerBoardDestination(int cb,int shift,int dim){
@@ -49,7 +49,7 @@ public:
      if( dim==_checker_dim) return 1;
      else return 0;
    }
    virtual int CheckerBoard(std::vector<int> site){
    virtual int CheckerBoard(std::vector<int> &site){
      int linear=0;
      assert(site.size()==_ndimension);
      for(int d=0;d<_ndimension;d++){
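These hunks only change the `site` argument to pass-by-reference; the red-black rule itself is the familiar coordinate-parity one. A minimal sketch of the idea (simplified: Grid's version sums only over checkerboarded dimensions):

```cpp
#include <vector>

// Parity of a lattice site: sum of coordinates mod 2.
// 0 = even ("red") sublattice, 1 = odd ("black") sublattice.
int checkerBoard(const std::vector<int> &site) {
  int linear = 0;
  for (int d = 0; d < (int)site.size(); d++) linear += site[d];
  return linear & 0x1;
}
```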
131 lib/communicator/Communicator_base.cc (new file)
@@ -0,0 +1,131 @@
/*************************************************************************************

    Grid physics library, www.github.com/paboyle/Grid

    Source file: ./lib/communicator/Communicator_base.cc

    Copyright (C) 2015

Author: Peter Boyle <paboyle@ph.ed.ac.uk>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

    See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/*  END LEGAL */
#include "Grid.h"
namespace Grid {

///////////////////////////////////////////////////////////////
// Info that is set up once and independent of cartesian layout
///////////////////////////////////////////////////////////////
int CartesianCommunicator::ShmRank;
int CartesianCommunicator::ShmSize;
int CartesianCommunicator::GroupRank;
int CartesianCommunicator::GroupSize;
int CartesianCommunicator::WorldRank;
int CartesianCommunicator::WorldSize;
int CartesianCommunicator::Slave;
void * CartesianCommunicator::ShmCommBuf;

/////////////////////////////////
// Alloc, free shmem region
/////////////////////////////////
void *CartesianCommunicator::ShmBufferMalloc(size_t bytes){
  // bytes = (bytes+sizeof(vRealD))&(~(sizeof(vRealD)-1));// align up bytes
  void *ptr = (void *)heap_top;
  heap_top  += bytes;
  heap_bytes+= bytes;
  assert(heap_bytes < MAX_MPI_SHM_BYTES);
  return ptr;
}
void CartesianCommunicator::ShmBufferFreeAll(void) {
  heap_top  =(size_t)ShmBufferSelf();
  heap_bytes=0;
}

/////////////////////////////////
// Grid information queries
/////////////////////////////////
int                      CartesianCommunicator::IsBoss(void)            { return _processor==0; };
int                      CartesianCommunicator::BossRank(void)          { return 0; };
int                      CartesianCommunicator::ThisRank(void)          { return _processor; };
const std::vector<int> & CartesianCommunicator::ThisProcessorCoor(void) { return _processor_coor; };
const std::vector<int> & CartesianCommunicator::ProcessorGrid(void)     { return _processors; };
int                      CartesianCommunicator::ProcessorCount(void)    { return _Nprocessors; };

////////////////////////////////////////////////////////////////////////////////
// very VERY rarely (Log, serial RNG) we need world without a grid
////////////////////////////////////////////////////////////////////////////////
int CartesianCommunicator::RankWorld(void){ return WorldRank; };
int CartesianCommunicator::Ranks   (void) { return WorldSize; };
int CartesianCommunicator::Nodes   (void) { return GroupSize; };
int CartesianCommunicator::Cores   (void) { return ShmSize; };
int CartesianCommunicator::NodeRank(void) { return GroupRank; };
int CartesianCommunicator::CoreRank(void) { return ShmRank; };

void CartesianCommunicator::GlobalSum(ComplexF &c)
{
  GlobalSumVector((float *)&c,2);
}
void CartesianCommunicator::GlobalSumVector(ComplexF *c,int N)
{
  GlobalSumVector((float *)c,2*N);
}
void CartesianCommunicator::GlobalSum(ComplexD &c)
{
  GlobalSumVector((double *)&c,2);
}
void CartesianCommunicator::GlobalSumVector(ComplexD *c,int N)
{
  GlobalSumVector((double *)c,2*N);
}

#ifndef GRID_COMMS_MPI3

void CartesianCommunicator::StencilSendToRecvFromBegin(std::vector<CommsRequest_t> &list,
                                                       void *xmit,
                                                       int xmit_to_rank,
                                                       void *recv,
                                                       int recv_from_rank,
                                                       int bytes)
{
  SendToRecvFromBegin(list,xmit,xmit_to_rank,recv,recv_from_rank,bytes);
}
void CartesianCommunicator::StencilSendToRecvFromComplete(std::vector<CommsRequest_t> &waitall)
{
  SendToRecvFromComplete(waitall);
}
void CartesianCommunicator::StencilBarrier(void){};

commVector<uint8_t> CartesianCommunicator::ShmBufStorageVector;

void *CartesianCommunicator::ShmBufferSelf(void) { return ShmCommBuf; }

void *CartesianCommunicator::ShmBuffer(int rank) {
  return NULL;
}
void *CartesianCommunicator::ShmBufferTranslate(int rank,void * local_p) {
  return NULL;
}
void CartesianCommunicator::ShmInitGeneric(void){
  ShmBufStorageVector.resize(MAX_MPI_SHM_BYTES);
  ShmCommBuf=(void *)&ShmBufStorageVector[0];
}

#endif

}
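The complex reductions above rest on one layout fact: a complex number is two packed reals, so a complex sum of N elements is a real sum of 2N words. A hedged illustration of why the cast is safe (the reduction primitive here is an assumed stand-in):

```cpp
#include <complex>

static_assert(sizeof(std::complex<float>) == 2 * sizeof(float),
              "complex<float> must be two packed floats for this cast");

void GlobalSumVectorReal(float *f, int N);   // assumed reduction primitive

// Complex global sum expressed as a real sum over twice as many words.
void GlobalSumComplex(std::complex<float> *c, int N) {
  GlobalSumVectorReal(reinterpret_cast<float *>(c), 2 * N);
}
```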
@@ -40,27 +40,42 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
#ifdef GRID_COMMS_SHMEM
#include <mpp/shmem.h>
#endif

namespace Grid {

class CartesianCommunicator {
  public:

  // Communicator should know nothing of the physics grid, only processor grid.
  // 65536 ranks per node adequate for now
  // 128MB shared memory for comms enough for 48^4 local vol comms
  // Give external control (command line override?) of this

  static const int      MAXLOG2RANKSPERNODE = 16;
  static const uint64_t MAX_MPI_SHM_BYTES   = 128*1024*1024;

  // Communicator should know nothing of the physics grid, only processor grid.
  int              _Nprocessors;     // How many in all
  std::vector<int> _processors;      // Which dimensions get relayed out over processors lanes.
  int              _processor;       // linear processor rank
  std::vector<int> _processor_coor;  // linear processor coordinate
  unsigned long    _ndimension;

#ifdef GRID_COMMS_MPI
  MPI_Comm communicator;
  typedef MPI_Request CommsRequest_t;
#elif GRID_COMMS_MPI3
#if defined (GRID_COMMS_MPI) || defined (GRID_COMMS_MPI3)
  MPI_Comm communicator;
  static MPI_Comm communicator_world;
  typedef MPI_Request CommsRequest_t;
#else
  typedef int CommsRequest_t;
#endif

  const int MAXLOG2RANKSPERNODE = 16; // 65536 ranks per node adequate for now

  ////////////////////////////////////////////////////////////////////
  // Helper functionality for SHM Windows common to all other impls
  ////////////////////////////////////////////////////////////////////
  // Longer term; drop this in favour of a master / slave model with
  // cartesian communicator on a subset of ranks, slave ranks controlled
  // by group leader with data xfer via shared memory
  ////////////////////////////////////////////////////////////////////
#ifdef GRID_COMMS_MPI3
  std::vector<int> WorldDims;
  std::vector<int> GroupDims;
  std::vector<int> ShmDims;
@@ -69,68 +84,87 @@ class CartesianCommunicator {
  std::vector<int> ShmCoor;
  std::vector<int> WorldCoor;

  int GroupRank;
  int ShmRank;
  int WorldRank;

  int GroupSize;
  int ShmSize;
  int WorldSize;
  static std::vector<int> GroupRanks;
  static std::vector<int> MyGroup;
  static int              ShmSetup;
  static MPI_Win          ShmWindow;
  static MPI_Comm         ShmComm;

  std::vector<int> LexicographicToWorldRank;
#else
  typedef int CommsRequest_t;
#endif

  static std::vector<void *> ShmCommBufs;
#else
  static void ShmInitGeneric(void);
  static commVector<uint8_t> ShmBufStorageVector;
#endif
  static void * ShmCommBuf;
  size_t heap_top;
  size_t heap_bytes;
  void *ShmBufferSelf(void);
  void *ShmBuffer(int rank);
  void *ShmBufferTranslate(int rank,void * local_p);
  void *ShmBufferMalloc(size_t bytes);
  void  ShmBufferFreeAll(void);

  ////////////////////////////////////////////////
  // Must call in Grid startup
  ////////////////////////////////////////////////
  static void Init(int *argc, char ***argv);

  // Constructor
  ////////////////////////////////////////////////
  // Constructor of any given grid
  ////////////////////////////////////////////////
  CartesianCommunicator(const std::vector<int> &pdimensions_in);

  // Wraps MPI_Cart routines
  ////////////////////////////////////////////////////////////////////////////////////////
  // Wraps MPI_Cart routines, or implements equivalent on other impls
  ////////////////////////////////////////////////////////////////////////////////////////
  void ShiftedRanks(int dim,int shift,int & source, int & dest);
  int  RankFromProcessorCoor(std::vector<int> &coor);
  void ProcessorCoorFromRank(int rank,std::vector<int> &coor);

  /////////////////////////////////
  // Grid information queries
  // Grid information and queries
  /////////////////////////////////
  int IsBoss(void)   { return _processor==0; };
  int BossRank(void) { return 0; };
  int ThisRank(void) { return _processor; };
  const std::vector<int> & ThisProcessorCoor(void) { return _processor_coor; };
  const std::vector<int> & ProcessorGrid(void)     { return _processors; };
  int ProcessorCount(void) { return _Nprocessors; };
  static int ShmRank;
  static int ShmSize;
  static int GroupSize;
  static int GroupRank;
  static int WorldRank;
  static int WorldSize;
  static int Slave;

  int IsBoss(void);
  int BossRank(void);
  int ThisRank(void);
  const std::vector<int> & ThisProcessorCoor(void);
  const std::vector<int> & ProcessorGrid(void);
  int ProcessorCount(void);
  static int Ranks   (void);
  static int Nodes   (void);
  static int Cores   (void);
  static int NodeRank(void);
  static int CoreRank(void);

  ////////////////////////////////////////////////////////////////////////////////
  // very VERY rarely (Log, serial RNG) we need world without a grid
  ////////////////////////////////////////////////////////////////////////////////
  static int  RankWorld(void);
  static void BroadcastWorld(int root,void* data, int bytes);

  ////////////////////////////////////////////////////////////
  // Reduction
  ////////////////////////////////////////////////////////////
  void GlobalSum(RealF &);
  void GlobalSumVector(RealF *,int N);

  void GlobalSum(RealD &);
  void GlobalSumVector(RealD *,int N);

  void GlobalSum(uint32_t &);
  void GlobalSum(uint64_t &);

  void GlobalSum(ComplexF &c)
  {
    GlobalSumVector((float *)&c,2);
  }
  void GlobalSumVector(ComplexF *c,int N)
  {
    GlobalSumVector((float *)c,2*N);
  }

  void GlobalSum(ComplexD &c)
  {
    GlobalSumVector((double *)&c,2);
  }
  void GlobalSumVector(ComplexD *c,int N)
  {
    GlobalSumVector((double *)c,2*N);
  }
  void GlobalSum(ComplexF &c);
  void GlobalSumVector(ComplexF *c,int N);
  void GlobalSum(ComplexD &c);
  void GlobalSumVector(ComplexD *c,int N);

  template<class obj> void GlobalSum(obj &o){
    typedef typename obj::scalar_type scalar_type;
@@ -138,6 +172,7 @@ class CartesianCommunicator {
    scalar_type * ptr = (scalar_type *)& o;
    GlobalSumVector(ptr,words);
  }

  ////////////////////////////////////////////////////////////
  // Face exchange, buffer swap in translational invariant way
  ////////////////////////////////////////////////////////////
@@ -159,8 +194,19 @@ class CartesianCommunicator {
                           void *recv,
                           int recv_from_rank,
                           int bytes);

  void SendToRecvFromComplete(std::vector<CommsRequest_t> &waitall);

  void StencilSendToRecvFromBegin(std::vector<CommsRequest_t> &list,
                                  void *xmit,
                                  int xmit_to_rank,
                                  void *recv,
                                  int recv_from_rank,
                                  int bytes);

  void StencilSendToRecvFromComplete(std::vector<CommsRequest_t> &waitall);
  void StencilBarrier(void);

  ////////////////////////////////////////////////////////////
  // Barrier
  ////////////////////////////////////////////////////////////
@@ -170,13 +216,12 @@ class CartesianCommunicator {
  // Broadcast a buffer and composite larger
  ////////////////////////////////////////////////////////////
  void Broadcast(int root,void* data, int bytes);

  template<class obj> void Broadcast(int root,obj &data)
  {
    Broadcast(root,(void *)&data,sizeof(data));
  };

  static void BroadcastWorld(int root,void* data, int bytes);

};
}
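The begin/complete split declared above is the standard nonblocking MPI idiom: post the send and receive, collect the requests, wait later. A hedged standalone analogue in plain MPI:

```cpp
#include <mpi.h>
#include <vector>

// SendToRecvFromBegin/Complete analogue: post a nonblocking send/recv pair,
// gather the requests, then wait on all of them in the completion phase.
void exchange(MPI_Comm comm, void *xmit, int dest, void *recv, int from, int bytes) {
  std::vector<MPI_Request> reqs(2);
  MPI_Isend(xmit, bytes, MPI_CHAR, dest, 0, comm, &reqs[0]);
  MPI_Irecv(recv, bytes, MPI_CHAR, from, 0, comm, &reqs[1]);
  std::vector<MPI_Status> status(reqs.size());
  MPI_Waitall((int)reqs.size(), reqs.data(), status.data());  // "Complete" phase
}
```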
@@ -30,6 +30,12 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>

namespace Grid {

///////////////////////////////////////////////////////////////////////////////////////////////////
// Info that is set up once and independent of cartesian layout
///////////////////////////////////////////////////////////////////////////////////////////////////
MPI_Comm CartesianCommunicator::communicator_world;

// Should error check all MPI calls.
void CartesianCommunicator::Init(int *argc, char ***argv) {
  int flag;
@@ -37,12 +43,15 @@ void CartesianCommunicator::Init(int *argc, char ***argv) {
  if ( !flag ) {
    MPI_Init(argc,argv);
  }
}

int Rank(void) {
  int pe;
  MPI_Comm_rank(MPI_COMM_WORLD,&pe);
  return pe;
  MPI_Comm_dup (MPI_COMM_WORLD,&communicator_world);
  MPI_Comm_rank(communicator_world,&WorldRank);
  MPI_Comm_size(communicator_world,&WorldSize);
  ShmRank=0;
  ShmSize=1;
  GroupRank=WorldRank;
  GroupSize=WorldSize;
  Slave    =0;
  ShmInitGeneric();
}

CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
@@ -54,7 +63,7 @@ CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
  _processors = processors;
  _processor_coor.resize(_ndimension);

  MPI_Cart_create(MPI_COMM_WORLD, _ndimension,&_processors[0],&periodic[0],1,&communicator);
  MPI_Cart_create(communicator_world, _ndimension,&_processors[0],&periodic[0],1,&communicator);
  MPI_Comm_rank(communicator,&_processor);
  MPI_Cart_coords(communicator,_processor,_ndimension,&_processor_coor[0]);

@@ -67,7 +76,6 @@ CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)

  assert(Size==_Nprocessors);
}

void CartesianCommunicator::GlobalSum(uint32_t &u){
  int ierr=MPI_Allreduce(MPI_IN_PLACE,&u,1,MPI_UINT32_T,MPI_SUM,communicator);
  assert(ierr==0);
@@ -168,7 +176,6 @@ void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &
  int nreq=list.size();
  std::vector<MPI_Status> status(nreq);
  int ierr = MPI_Waitall(nreq,&list[0],&status[0]);

  assert(ierr==0);
}

@@ -187,14 +194,17 @@ void CartesianCommunicator::Broadcast(int root,void* data, int bytes)
                      communicator);
  assert(ierr==0);
}

///////////////////////////////////////////////////////
// Should only be used prior to Grid Init finished.
// Check for this?
///////////////////////////////////////////////////////
void CartesianCommunicator::BroadcastWorld(int root,void* data, int bytes)
{
  int ierr= MPI_Bcast(data,
                      bytes,
                      MPI_BYTE,
                      root,
                      MPI_COMM_WORLD);
                      communicator_world);
  assert(ierr==0);
}
@@ -30,25 +30,199 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>

namespace Grid {

// Global used by Init and nowhere else. How to hide?
int Rank(void) {
  int pe;
  MPI_Comm_rank(MPI_COMM_WORLD,&pe);
  return pe;

///////////////////////////////////////////////////////////////////////////////////////////////////
// Info that is set up once and independent of cartesian layout
///////////////////////////////////////////////////////////////////////////////////////////////////
int CartesianCommunicator::ShmSetup = 0;

MPI_Comm CartesianCommunicator::communicator_world;
MPI_Comm CartesianCommunicator::ShmComm;
MPI_Win  CartesianCommunicator::ShmWindow;

std::vector<int>    CartesianCommunicator::GroupRanks;
std::vector<int>    CartesianCommunicator::MyGroup;
std::vector<void *> CartesianCommunicator::ShmCommBufs;

void *CartesianCommunicator::ShmBufferSelf(void)
{
  return ShmCommBufs[ShmRank];
}
// Should error check all MPI calls.
void *CartesianCommunicator::ShmBuffer(int rank)
{
  int gpeer = GroupRanks[rank];
  if (gpeer == MPI_UNDEFINED){
    return NULL;
  } else {
    return ShmCommBufs[gpeer];
  }
}
void *CartesianCommunicator::ShmBufferTranslate(int rank,void * local_p)
{
  int gpeer = GroupRanks[rank];
  if (gpeer == MPI_UNDEFINED){
    return NULL;
  } else {
    uint64_t offset = (uint64_t)local_p - (uint64_t)ShmCommBufs[ShmRank];
    uint64_t remote = (uint64_t)ShmCommBufs[gpeer]+offset;
    return (void *) remote;
  }
}

void CartesianCommunicator::Init(int *argc, char ***argv) {
  int flag;
  MPI_Initialized(&flag); // needed to coexist with other libs apparently
  if ( !flag ) {
    MPI_Init(argc,argv);
  }

  MPI_Comm_dup (MPI_COMM_WORLD,&communicator_world);
  MPI_Comm_rank(communicator_world,&WorldRank);
  MPI_Comm_size(communicator_world,&WorldSize);

  /////////////////////////////////////////////////////////////////////
  // Split into groups that can share memory
  /////////////////////////////////////////////////////////////////////
  MPI_Comm_split_type(communicator_world, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,&ShmComm);
  MPI_Comm_rank(ShmComm ,&ShmRank);
  MPI_Comm_size(ShmComm ,&ShmSize);
  GroupSize = WorldSize/ShmSize;

  /////////////////////////////////////////////////////////////////////
  // find world ranks in our SHM group (i.e. which ranks are on our node)
  /////////////////////////////////////////////////////////////////////
  MPI_Group WorldGroup, ShmGroup;
  MPI_Comm_group (communicator_world, &WorldGroup);
  MPI_Comm_group (ShmComm, &ShmGroup);

  std::vector<int> world_ranks(WorldSize);
  GroupRanks.resize(WorldSize);
  MyGroup.resize(ShmSize);
  for(int r=0;r<WorldSize;r++) world_ranks[r]=r;

  MPI_Group_translate_ranks (WorldGroup,WorldSize,&world_ranks[0],ShmGroup, &GroupRanks[0]);

  ///////////////////////////////////////////////////////////////////
  // Identify who is in my group and nominate the leader
  ///////////////////////////////////////////////////////////////////
  int g=0;
  for(int rank=0;rank<WorldSize;rank++){
    if(GroupRanks[rank]!=MPI_UNDEFINED){
      assert(g<ShmSize);
      MyGroup[g++] = rank;
    }
  }

  std::sort(MyGroup.begin(),MyGroup.end(),std::less<int>());
  int myleader = MyGroup[0];

  std::vector<int> leaders_1hot(WorldSize,0);
  std::vector<int> leaders_group(GroupSize,0);
  leaders_1hot [ myleader ] = 1;

  ///////////////////////////////////////////////////////////////////
  // global sum leaders over comm world
  ///////////////////////////////////////////////////////////////////
  int ierr=MPI_Allreduce(MPI_IN_PLACE,&leaders_1hot[0],WorldSize,MPI_INT,MPI_SUM,communicator_world);
  assert(ierr==0);

  ///////////////////////////////////////////////////////////////////
  // find the group leaders world rank
  ///////////////////////////////////////////////////////////////////
  int group=0;
  for(int l=0;l<WorldSize;l++){
    if(leaders_1hot[l]){
      leaders_group[group++] = l;
    }
  }

  ///////////////////////////////////////////////////////////////////
  // Identify the rank of the group in which I (and my leader) live
  ///////////////////////////////////////////////////////////////////
  GroupRank=-1;
  for(int g=0;g<GroupSize;g++){
    if (myleader == leaders_group[g]){
      GroupRank=g;
    }
  }
  assert(GroupRank!=-1);

  //////////////////////////////////////////////////////////////////////////////////////////////////////////
  // allocate the shared window for our group
  //////////////////////////////////////////////////////////////////////////////////////////////////////////

  ShmCommBuf = 0;
  ierr = MPI_Win_allocate_shared(MAX_MPI_SHM_BYTES,1,MPI_INFO_NULL,ShmComm,&ShmCommBuf,&ShmWindow);
  assert(ierr==0);
  // KNL hack -- force to numa-domain 1 in flat mode
#if 0
  //#include <numaif.h>
  for(uint64_t page=0;page<MAX_MPI_SHM_BYTES;page+=4096){
    void *pages = (void *) ( page + ShmCommBuf );
    int status;
    int flags=MPOL_MF_MOVE_ALL;
    int nodes=1; // numa domain == MCDRAM
    unsigned long count=1;
    ierr= move_pages(0,count, &pages,&nodes,&status,flags);
    if (ierr && (page==0)) perror("numa relocate command failed");
  }
#endif
  MPI_Win_lock_all (MPI_MODE_NOCHECK, ShmWindow);

  /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
  // Plan: allocate a fixed SHM region. Scratch that is just used via some scheme during stencil comms, with no allocate free.
  /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
  ShmCommBufs.resize(ShmSize);
  for(int r=0;r<ShmSize;r++){
    MPI_Aint sz;
    int dsp_unit;
    MPI_Win_shared_query (ShmWindow, r, &sz, &dsp_unit, &ShmCommBufs[r]);
  }

  //////////////////////////////////////////////////////////////////////////////////////////////////////////
  // Verbose for now
  //////////////////////////////////////////////////////////////////////////////////////////////////////////
  if (WorldRank == 0){
    std::cout<<GridLogMessage<< "Grid MPI-3 configuration: detected ";
    std::cout<< WorldSize << " Ranks " ;
    std::cout<< GroupSize << " Nodes " ;
    std::cout<< ShmSize  << " ranks-per-node "<<std::endl;

    std::cout<<GridLogMessage <<"Grid MPI-3 configuration: allocated shared memory region of size ";
    std::cout<<std::hex << MAX_MPI_SHM_BYTES <<" ShmCommBuf address = "<<ShmCommBuf << std::dec<<std::endl;

    for(int g=0;g<GroupSize;g++){
      std::cout<<GridLogMessage<<" Node "<<g<<" led by MPI rank "<<leaders_group[g]<<std::endl;
    }

    std::cout<<GridLogMessage<<" Boss Node Shm Pointers are {";
    for(int g=0;g<ShmSize;g++){
      std::cout<<std::hex<<ShmCommBufs[g]<<std::dec;
      if(g!=ShmSize-1) std::cout<<",";
      else             std::cout<<"}"<<std::endl;
    }
  }

  for(int g=0;g<GroupSize;g++){
    if ( (ShmRank == 0) && (GroupRank==g) ) std::cout<<GridLogMessage<<"["<<g<<"] Node Group "<<g<<" is ranks {";
    for(int r=0;r<ShmSize;r++){
      if ( (ShmRank == 0) && (GroupRank==g) ) {
        std::cout<<MyGroup[r];
        if(r<ShmSize-1) std::cout<<",";
        else            std::cout<<"}"<<std::endl;
      }
      MPI_Barrier(communicator_world);
    }
  }

  assert(ShmSetup==0); ShmSetup=1;
}
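The `Init` above combines three MPI-3 ingredients: split by shared-memory locality, allocate a shared window, and query each peer's segment address. A hedged, self-contained demo of that recipe:

```cpp
#include <cstdio>
#include <mpi.h>

// Minimal MPI-3 shared-window demo mirroring the Init() recipe above.
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  MPI_Comm shm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &shm);
  int shmrank, shmsize;
  MPI_Comm_rank(shm, &shmrank);
  MPI_Comm_size(shm, &shmsize);

  void *buf;
  MPI_Win win;
  MPI_Win_allocate_shared(1024, 1, MPI_INFO_NULL, shm, &buf, &win);

  // Every rank can locate every other on-node rank's segment:
  for (int r = 0; r < shmsize; r++) {
    MPI_Aint sz; int disp; void *peer;
    MPI_Win_shared_query(win, r, &sz, &disp, &peer);
    if (shmrank == 0) printf("rank %d segment at %p (%ld bytes)\n", r, peer, (long)sz);
  }

  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}
```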
////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Want to implement some magic ... Group sub-cubes into those on same node
//
////////////////////////////////////////////////////////////////////////////////////////////////////////////

void CartesianCommunicator::ShiftedRanks(int dim,int shift,int &source,int &dest)
{
  std::vector<int> coor = _processor_coor;
@@ -78,27 +252,11 @@ void CartesianCommunicator::ProcessorCoorFromRank(int rank, std::vector<int> &c

CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
{
  int ierr;

  communicator=communicator_world;

  _ndimension = processors.size();
  std::cout << "Creating "<< _ndimension << " dim communicator "<<std::endl;
  for(int d =0;d<_ndimension;d++){
    std::cout << processors[d]<<" ";
  };
  std::cout << std::endl;

  WorldDims = processors;

  communicator = MPI_COMM_WORLD;
  MPI_Comm shmcomm;
  MPI_Comm_split_type(communicator, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,&shmcomm);
  MPI_Comm_rank(communicator,&WorldRank);
  MPI_Comm_size(communicator,&WorldSize);
  MPI_Comm_rank(shmcomm    ,&ShmRank);
  MPI_Comm_size(shmcomm    ,&ShmSize);
  GroupSize = WorldSize/ShmSize;

  std::cout<< "Ranks per node "<< ShmSize << std::endl;
  std::cout<< "Nodes          "<< GroupSize << std::endl;
  std::cout<< "Ranks          "<< WorldSize << std::endl;

  ////////////////////////////////////////////////////////////////
  // Assert power of two shm_size.
@@ -118,46 +276,27 @@ CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
  ////////////////////////////////////////////////////////////////
  int dim = 0;

  std::vector<int> WorldDims = processors;

  ShmDims.resize(_ndimension,1);
  GroupDims.resize(_ndimension);

  ShmCoor.resize(_ndimension);
  GroupCoor.resize(_ndimension);
  WorldCoor.resize(_ndimension);

  for(int l2=0;l2<log2size;l2++){
    while ( WorldDims[dim] / ShmDims[dim] <= 1 ) dim=(dim+1)%_ndimension;
    ShmDims[dim]*=2;
    dim=(dim+1)%_ndimension;
  }

  std::cout << "Shm group dims "<<std::endl;
  for(int d =0;d<_ndimension;d++){
    std::cout << ShmDims[d]<<" ";
  };
  std::cout << std::endl;

  ////////////////////////////////////////////////////////////////
  // Establish torus of processes and nodes with sub-blockings
  ////////////////////////////////////////////////////////////////
  for(int d=0;d<_ndimension;d++){
    GroupDims[d] = WorldDims[d]/ShmDims[d];
  }
  std::cout << "Group dims "<<std::endl;
  for(int d =0;d<_ndimension;d++){
    std::cout << GroupDims[d]<<" ";
  };
  std::cout << std::endl;

  MPI_Group WorldGroup, ShmGroup;
  MPI_Comm_group (communicator, &WorldGroup);
  MPI_Comm_group (shmcomm, &ShmGroup);

  std::vector<int> world_ranks(WorldSize);
  std::vector<int> group_ranks(WorldSize);
  std::vector<int> mygroup(GroupSize);
  for(int r=0;r<WorldSize;r++) world_ranks[r]=r;

  MPI_Group_translate_ranks (WorldGroup,WorldSize,&world_ranks[0],ShmGroup, &group_ranks[0]);

  ////////////////////////////////////////////////////////////////
  // Check processor counts match
@@ -166,56 +305,10 @@ CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
  _processors = processors;
  _processor_coor.resize(_ndimension);
  for(int i=0;i<_ndimension;i++){
    std::cout << " p " << _processors[i]<<std::endl;
    _Nprocessors*=_processors[i];
  }
  std::cout << " World " <<WorldSize <<" Nproc "<<_Nprocessors<<std::endl;
  assert(WorldSize==_Nprocessors);

  ///////////////////////////////////////////////////////////////////
  // Identify who is in my group and nominate the leader
  ///////////////////////////////////////////////////////////////////
  int g=0;
  for(int rank=0;rank<WorldSize;rank++){
    if(group_ranks[rank]!=MPI_UNDEFINED){
      mygroup[g] = rank;
    }
  }

  std::sort(mygroup.begin(),mygroup.end(),std::greater<int>());
  int myleader = mygroup[0];

  std::vector<int> leaders_1hot(WorldSize,0);
  std::vector<int> leaders_group(GroupSize,0);
  leaders_1hot [ myleader ] = 1;

  ///////////////////////////////////////////////////////////////////
  // global sum leaders over comm world
  ///////////////////////////////////////////////////////////////////
  int ierr=MPI_Allreduce(MPI_IN_PLACE,&leaders_1hot[0],WorldSize,MPI_INT,MPI_SUM,communicator);
  assert(ierr==0);

  ///////////////////////////////////////////////////////////////////
  // find the group leaders world rank
  ///////////////////////////////////////////////////////////////////
  int group=0;
  for(int l=0;l<WorldSize;l++){
    if(leaders_1hot[l]){
      leaders_group[group++] = l;
    }
  }

  ///////////////////////////////////////////////////////////////////
  // Identify the rank of the group in which I (and my leader) live
  ///////////////////////////////////////////////////////////////////
  GroupRank=-1;
  for(int g=0;g<GroupSize;g++){
    if (myleader == leaders_group[g]){
      GroupRank=g;
    }
  }
  assert(GroupRank!=-1);

  ////////////////////////////////////////////////////////////////
  // Establish mapping between lexico physics coord and WorldRank
  //
@@ -307,6 +400,80 @@ void CartesianCommunicator::SendToRecvFromBegin(std::vector<CommsRequest_t> &lis
                                                int from,
                                                int bytes)
{
#if 0
  this->StencilBarrier();

  MPI_Request xrq;
  MPI_Request rrq;

  static int sequence;

  int ierr;
  int tag;
  int check;

  assert(dest != _processor);
  assert(from != _processor);

  int gdest = GroupRanks[dest];
  int gfrom = GroupRanks[from];
  int gme   = GroupRanks[_processor];

  sequence++;

  char *from_ptr = (char *)ShmCommBufs[ShmRank];

  int small = (bytes<MAX_MPI_SHM_BYTES);

  typedef uint64_t T;
  int words = bytes/sizeof(T);

  assert(((size_t)bytes &(sizeof(T)-1))==0);
  assert(gme == ShmRank);

  if ( small && (gdest !=MPI_UNDEFINED) ) {

    char *to_ptr = (char *)ShmCommBufs[gdest];

    assert(gme != gdest);

    T *ip = (T *)xmit;
    T *op = (T *)to_ptr;
PARALLEL_FOR_LOOP
    for(int w=0;w<words;w++) {
      op[w]=ip[w];
    }

    bcopy(&_processor,&to_ptr[bytes],sizeof(_processor));
    bcopy(&  sequence,&to_ptr[bytes+4],sizeof(sequence));
  } else {
    ierr =MPI_Isend(xmit, bytes, MPI_CHAR,dest,_processor,communicator,&xrq);
    assert(ierr==0);
    list.push_back(xrq);
  }

  this->StencilBarrier();

  if (small && (gfrom !=MPI_UNDEFINED) ) {
    T *ip = (T *)from_ptr;
    T *op = (T *)recv;
PARALLEL_FOR_LOOP
    for(int w=0;w<words;w++) {
      op[w]=ip[w];
    }
    bcopy(&from_ptr[bytes]  ,&tag  ,sizeof(tag));
    bcopy(&from_ptr[bytes+4],&check,sizeof(check));
    assert(check==sequence);
    assert(tag==from);
  } else {
    ierr=MPI_Irecv(recv, bytes, MPI_CHAR,from,from,communicator,&rrq);
    assert(ierr==0);
    list.push_back(rrq);
  }

  this->StencilBarrier();

#else
  MPI_Request xrq;
  MPI_Request rrq;
  int rank = _processor;
@@ -318,13 +485,62 @@ void CartesianCommunicator::SendToRecvFromBegin(std::vector<CommsRequest_t> &lis

  list.push_back(xrq);
  list.push_back(rrq);
#endif
}

void CartesianCommunicator::StencilSendToRecvFromBegin(std::vector<CommsRequest_t> &list,
                                                       void *xmit,
                                                       int dest,
                                                       void *recv,
                                                       int from,
                                                       int bytes)
{
  MPI_Request xrq;
  MPI_Request rrq;

  int ierr;

  assert(dest != _processor);
  assert(from != _processor);

  int gdest = GroupRanks[dest];
  int gfrom = GroupRanks[from];
  int gme   = GroupRanks[_processor];

  assert(gme == ShmRank);

  if ( gdest == MPI_UNDEFINED ) {
    ierr =MPI_Isend(xmit, bytes, MPI_CHAR,dest,_processor,communicator,&xrq);
    assert(ierr==0);
    list.push_back(xrq);
  }

  if ( gfrom ==MPI_UNDEFINED) {
    ierr=MPI_Irecv(recv, bytes, MPI_CHAR,from,from,communicator,&rrq);
    assert(ierr==0);
    list.push_back(rrq);
  }

}


void CartesianCommunicator::StencilSendToRecvFromComplete(std::vector<CommsRequest_t> &list)
{
  SendToRecvFromComplete(list);
}

void CartesianCommunicator::StencilBarrier(void)
{
  MPI_Win_sync (ShmWindow);
  MPI_Barrier  (ShmComm);
  MPI_Win_sync (ShmWindow);
}

void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &list)
{
  int nreq=list.size();
  std::vector<MPI_Status> status(nreq);
  int ierr = MPI_Waitall(nreq,&list[0],&status[0]);

  assert(ierr==0);
}

@@ -350,7 +566,7 @@ void CartesianCommunicator::BroadcastWorld(int root,void* data, int bytes)
                    bytes,
                    MPI_BYTE,
                    root,
                    MPI_COMM_WORLD);
                    communicator_world);
  assert(ierr==0);
}
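The `StencilBarrier` above uses the sync-barrier-sync idiom for a passively locked MPI-3 shared window: flush my stores, rendezvous, then observe the stores the others flushed. A hedged restatement of just that pattern:

```cpp
#include <mpi.h>

// Node-local memory barrier over an MPI-3 shared window held under
// MPI_Win_lock_all: publish my writes, meet the peers, read theirs.
void shm_barrier(MPI_Win win, MPI_Comm shmcomm) {
  MPI_Win_sync(win);       // flush my outstanding stores to the window
  MPI_Barrier(shmcomm);    // every on-node rank reaches this point
  MPI_Win_sync(win);       // pick up stores the others flushed before the barrier
}
```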
@@ -28,12 +28,22 @@ Author: Peter Boyle <paboyle@ph.ed.ac.uk>
#include "Grid.h"
namespace Grid {

///////////////////////////////////////////////////////////////////////////////////////////////////
// Info that is set up once and independent of cartesian layout
///////////////////////////////////////////////////////////////////////////////////////////////////

void CartesianCommunicator::Init(int *argc, char ***argv)
{
  WorldRank = 0;
  WorldSize = 1;
  ShmRank=0;
  ShmSize=1;
  GroupRank=WorldRank;
  GroupSize=WorldSize;
  Slave    =0;
  ShmInitGeneric();
}

int Rank(void ){ return 0; };

CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
{
  _processors = processors;
@@ -89,30 +99,16 @@ void CartesianCommunicator::SendToRecvFromComplete(std::vector<CommsRequest_t> &
  assert(0);
}

void CartesianCommunicator::Barrier(void)
{
}

void CartesianCommunicator::Broadcast(int root,void* data, int bytes)
{
}
void CartesianCommunicator::BroadcastWorld(int root,void* data, int bytes)
{
}

void CartesianCommunicator::Barrier(void){}
void CartesianCommunicator::Broadcast(int root,void* data, int bytes) {}
void CartesianCommunicator::BroadcastWorld(int root,void* data, int bytes) { }
int  CartesianCommunicator::RankFromProcessorCoor(std::vector<int> &coor) {  return 0;}
void CartesianCommunicator::ProcessorCoorFromRank(int rank, std::vector<int> &coor){ assert(0);}
void CartesianCommunicator::ShiftedRanks(int dim,int shift,int &source,int &dest)
{
  source =0;
  dest=0;
}
int CartesianCommunicator::RankFromProcessorCoor(std::vector<int> &coor)
{
  return 0;
}
void CartesianCommunicator::ProcessorCoorFromRank(int rank, std::vector<int> &coor)
{
}

}
@@ -39,17 +39,22 @@ namespace Grid {
      BACKTRACEFILE(); \
    }\
  }
int Rank(void) {
  return shmem_my_pe();
}

///////////////////////////////////////////////////////////////////////////////////////////////////
// Info that is set up once and independent of cartesian layout
///////////////////////////////////////////////////////////////////////////////////////////////////

typedef struct HandShake_t {
  uint64_t seq_local;
  uint64_t seq_remote;
} HandShake;

static Vector< HandShake > XConnections;
static Vector< HandShake > RConnections;

void CartesianCommunicator::Init(int *argc, char ***argv) {
  shmem_init();
  XConnections.resize(shmem_n_pes());
@@ -60,8 +65,17 @@ void CartesianCommunicator::Init(int *argc, char ***argv) {
    RConnections[pe].seq_local = 0;
    RConnections[pe].seq_remote= 0;
  }
  WorldSize = shmem_n_pes();
  WorldRank = shmem_my_pe();
  ShmRank=0;
  ShmSize=1;
  GroupRank=WorldRank;
  GroupSize=WorldSize;
  Slave    =0;
  shmem_barrier_all();
  ShmInitGeneric();
}

CartesianCommunicator::CartesianCommunicator(const std::vector<int> &processors)
{
  _ndimension = processors.size();
@@ -230,12 +244,9 @@ void CartesianCommunicator::SendRecvPacket(void *xmit,

  if ( _processor == sender ) {

    printf("Sender SHMEM pt2pt %d -> %d\n",sender,receiver);
    // Check he has posted a receive
    while(SendSeq->seq_remote == SendSeq->seq_local);

    printf("Sender receive %d posted\n",sender,receiver);

    // Advance our send count
    seq = ++(SendSeq->seq_local);

@@ -244,26 +255,19 @@ void CartesianCommunicator::SendRecvPacket(void *xmit,
    shmem_putmem(recv,xmit,bytes,receiver);
    shmem_fence();

    printf("Sender sent payload %d\n",seq);
    //Notify him we're done
    shmem_putmem((void *)&(RecvSeq->seq_remote),&seq,sizeof(seq),receiver);
    shmem_fence();
    printf("Sender ringing door bell %d\n",seq);
  }
  if ( _processor == receiver ) {

    printf("Receiver SHMEM pt2pt %d->%d\n",sender,receiver);
    // Post a receive
    seq = ++(RecvSeq->seq_local);
    shmem_putmem((void *)&(SendSeq->seq_remote),&seq,sizeof(seq),sender);

    printf("Receiver Opening letter box %d\n",seq);

    // Now wait until he has advanced our reception counter
    while(RecvSeq->seq_remote != RecvSeq->seq_local);

    printf("Receiver Got the mail %d\n",seq);
  }
}
@@ -164,15 +164,17 @@ PARALLEL_FOR_LOOP
    assert( l.checkerboard == l._grid->CheckerBoard(site));
    assert( sizeof(sobj)*Nsimd == sizeof(vobj));

    static const int words=sizeof(vobj)/sizeof(vector_type);
    int odx,idx;
    idx= grid->iIndex(site);
    odx= grid->oIndex(site);

    std::vector<sobj> buf(Nsimd);
    scalar_type * vp = (scalar_type *)&l._odata[odx];
    scalar_type * pt = (scalar_type *)&s;

    extract(l._odata[odx],buf);

    s = buf[idx];
    for(int w=0;w<words;w++){
      pt[w] = vp[idx+w*Nsimd];
    }

    return;
  };
@@ -190,18 +192,17 @@ PARALLEL_FOR_LOOP
    assert( l.checkerboard == l._grid->CheckerBoard(site));
    assert( sizeof(sobj)*Nsimd == sizeof(vobj));

    static const int words=sizeof(vobj)/sizeof(vector_type);
    int odx,idx;
    idx= grid->iIndex(site);
    odx= grid->oIndex(site);

    std::vector<sobj> buf(Nsimd);
    scalar_type * vp = (scalar_type *)&l._odata[odx];
    scalar_type * pt = (scalar_type *)&s;

    // extract-modify-merge cycle is easiest way and this is not perf critical
    extract(l._odata[odx],buf);

    buf[idx] = s;

    merge(l._odata[odx],buf);
    for(int w=0;w<words;w++){
      vp[idx+w*Nsimd] = pt[w];
    }

    return;
  };
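Both hunks replace the extract/merge round trip with a direct strided copy: with `Nsimd` lanes interleaved word by word, lane `idx` of word `w` sits at offset `idx + w*Nsimd`. A small standalone illustration of that layout:

```cpp
#include <cstdio>

// Peek one SIMD lane out of an interleaved vector object: word w of lane idx
// lives at flat offset idx + w*Nsimd, exactly as in the loop above.
int main() {
  const int Nsimd = 4, words = 3;
  float v[Nsimd * words];                       // one vector object, flattened
  for (int w = 0; w < words; w++)
    for (int lane = 0; lane < Nsimd; lane++)
      v[lane + w * Nsimd] = 100.0f * lane + w;  // fill with lane/word markers

  const int idx = 2;                            // the lane we peek
  float scalar[words];
  for (int w = 0; w < words; w++) scalar[w] = v[idx + w * Nsimd];
  for (int w = 0; w < words; w++) printf("word %d of lane %d = %g\n", w, idx, scalar[w]);
  return 0;
}
```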
@@ -297,8 +297,9 @@ namespace Grid {

      int l_idx=generator_idx(o_idx,i_idx);

      std::vector<int> site_seeds(4);
      for(int i=0;i<4;i++){
      const int num_rand_seed=16;
      std::vector<int> site_seeds(num_rand_seed);
      for(int i=0;i<site_seeds.size();i++){
        site_seeds[i]= ui(pseeder);
      }
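The hunk widens the per-site seed block from 4 to 16 integers. A hedged standalone sketch of the seeding pattern using the standard library (`std::seed_seq` plays the role Grid's seeding machinery plays here):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw a block of integers from a parent engine and use them, via seed_seq,
// to initialise a decorrelated per-site engine.
std::mt19937 make_site_rng(std::mt19937 &pseeder) {
  const int num_rand_seed = 16;
  std::uniform_int_distribution<int> ui;
  std::vector<int> site_seeds(num_rand_seed);
  for (std::size_t i = 0; i < site_seeds.size(); i++) site_seeds[i] = ui(pseeder);
  std::seed_seq src(site_seeds.begin(), site_seeds.end());
  return std::mt19937(src);
}
```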
@@ -33,7 +33,6 @@ directory
#define  GRID_QCD_FERMION_OPERATOR_IMPL_H

namespace Grid {

  namespace QCD {

@@ -108,13 +107,14 @@ namespace Grid {
  INHERIT_GIMPL_TYPES(Base) \
  INHERIT_FIMPL_TYPES(Base)

  ///////
  /////////////////////////////////////////////////////////////////////////////
  // Single flavour four spinors with colour index
  ///////
  /////////////////////////////////////////////////////////////////////////////
  template <class S, class Representation = FundamentalRepresentation,class _Coeff_t = RealD >
  class WilsonImpl
      : public PeriodicGaugeImpl<GaugeImplTypes<S, Representation::Dimension > > {
  class WilsonImpl : public PeriodicGaugeImpl<GaugeImplTypes<S, Representation::Dimension > > {

  public:

    static const int Dimension = Representation::Dimension;
    typedef PeriodicGaugeImpl<GaugeImplTypes<S, Dimension > > Gimpl;

@@ -124,7 +124,6 @@ namespace Grid {
    const bool LsVectorised=false;
    typedef _Coeff_t Coeff_t;

    INHERIT_GIMPL_TYPES(Gimpl);

    template <typename vtype> using iImplSpinor = iScalar<iVector<iVector<vtype, Dimension>, Ns> >;
@@ -158,8 +157,7 @@ namespace Grid {
    }

    template <class ref>
    inline void loadLinkElement(Simd &reg,
                                ref &memory) {
    inline void loadLinkElement(Simd &reg, ref &memory) {
      reg = memory;
    }

@@ -202,9 +200,10 @@ namespace Grid {
    }
  };

  ///////
  ////////////////////////////////////////////////////////////////////////////////////
  // Single flavour four spinors with colour index, 5d redblack
  ///////
  ////////////////////////////////////////////////////////////////////////////////////

  template<class S,int Nrepresentation=Nc,class _Coeff_t = RealD>
  class DomainWallVec5dImpl : public PeriodicGaugeImpl< GaugeImplTypes< S,Nrepresentation> > {
  public:
@@ -227,12 +226,9 @@ namespace Grid {
    typedef Lattice<SiteSpinor> FermionField;

    // Make the doubled gauge field a *scalar*
    typedef iImplDoubledGaugeField<typename Simd::scalar_type>
        SiteDoubledGaugeField;  // This is a scalar
    typedef iImplGaugeField<typename Simd::scalar_type>
        SiteScalarGaugeField;  // scalar
    typedef iImplGaugeLink<typename Simd::scalar_type>
        SiteScalarGaugeLink;  // scalar
    typedef iImplDoubledGaugeField<typename Simd::scalar_type> SiteDoubledGaugeField; // This is a scalar
    typedef iImplGaugeField<typename Simd::scalar_type>        SiteScalarGaugeField;  // scalar
    typedef iImplGaugeLink<typename Simd::scalar_type>         SiteScalarGaugeLink;   // scalar

    typedef Lattice<SiteDoubledGaugeField> DoubledGaugeField;

@@ -250,6 +246,7 @@ namespace Grid {
    inline void loadLinkElement(Simd &reg, ref &memory) {
      vsplat(reg, memory);
    }

    inline void multLink(SiteHalfSpinor &phi, const SiteDoubledGaugeField &U,
                         const SiteHalfSpinor &chi, int mu, StencilEntry *SE,
                         StencilImpl &St) {
@@ -262,8 +259,8 @@ namespace Grid {
      mult(&phi(), &UU(), &chi());
    }

    inline void DoubleStore(GridBase *GaugeGrid, DoubledGaugeField &Uds,
                            const GaugeField &Umu) {
    inline void DoubleStore(GridBase *GaugeGrid, DoubledGaugeField &Uds,const GaugeField &Umu)
    {
      SiteScalarGaugeField  ScalarUmu;
      SiteDoubledGaugeField ScalarUds;

@@ -289,13 +286,13 @@ namespace Grid {
      }
    }

    inline void InsertForce4D(GaugeField &mat, FermionField &Btilde,
                              FermionField &A, int mu) {
    inline void InsertForce4D(GaugeField &mat, FermionField &Btilde,FermionField &A, int mu)
    {
      assert(0);
    }

    inline void InsertForce5D(GaugeField &mat, FermionField &Btilde,
                              FermionField &Atilde, int mu) {
    inline void InsertForce5D(GaugeField &mat, FermionField &Btilde,FermionField &Atilde, int mu)
    {
      assert(0);
    }
  };
@@ -305,9 +302,9 @@ namespace Grid {
  ////////////////////////////////////////////////////////////////////////////////////////

  template <class S, int Nrepresentation,class _Coeff_t = RealD>
  class GparityWilsonImpl
      : public ConjugateGaugeImpl<GaugeImplTypes<S, Nrepresentation> > {
  class GparityWilsonImpl : public ConjugateGaugeImpl<GaugeImplTypes<S, Nrepresentation> > {
  public:

    static const int Dimension = Nrepresentation;

    const bool LsVectorised=false;
@@ -317,15 +314,9 @@ namespace Grid {

    INHERIT_GIMPL_TYPES(Gimpl);

    template <typename vtype>
    using iImplSpinor =
        iVector<iVector<iVector<vtype, Nrepresentation>, Ns>, Ngp>;
    template <typename vtype>
    using iImplHalfSpinor =
        iVector<iVector<iVector<vtype, Nrepresentation>, Nhs>, Ngp>;
    template <typename vtype>
    using iImplDoubledGaugeField =
        iVector<iVector<iScalar<iMatrix<vtype, Nrepresentation> >, Nds>, Ngp>;
    template <typename vtype> using iImplSpinor            = iVector<iVector<iVector<vtype, Nrepresentation>, Ns>, Ngp>;
    template <typename vtype> using iImplHalfSpinor        = iVector<iVector<iVector<vtype, Nrepresentation>, Nhs>, Ngp>;
    template <typename vtype> using iImplDoubledGaugeField = iVector<iVector<iScalar<iMatrix<vtype, Nrepresentation> >, Nds>, Ngp>;

    typedef iImplSpinor<Simd> SiteSpinor;
    typedef iImplHalfSpinor<Simd> SiteHalfSpinor;
@@ -341,7 +332,6 @@ namespace Grid {

    ImplParams Params;

    GparityWilsonImpl(const ImplParams &p = ImplParams()) : Params(p){};

    bool overlapCommsCompute(void) { return Params.overlapCommsCompute; };
@@ -351,6 +341,7 @@ namespace Grid {
    inline void multLink(SiteHalfSpinor &phi, const SiteDoubledGaugeField &U,
                         const SiteHalfSpinor &chi, int mu, StencilEntry *SE,
                         StencilImpl &St) {

      typedef SiteHalfSpinor vobj;
      typedef typename SiteHalfSpinor::scalar_object sobj;

@@ -419,7 +410,6 @@ namespace Grid {

    inline void DoubleStore(GridBase *GaugeGrid,DoubledGaugeField &Uds,const GaugeField &Umu)
    {
      conformable(Uds._grid,GaugeGrid);
      conformable(Umu._grid,GaugeGrid);

@@ -429,7 +419,6 @@ namespace Grid {

      Lattice<iScalar<vInteger> > coor(GaugeGrid);

      for(int mu=0;mu<Nd;mu++){

        LatticeCoordinate(coor,mu);
@@ -443,7 +432,6 @@ namespace Grid {
        Uconj = where(coor==neglink,-Uconj,Uconj);
      }

PARALLEL_FOR_LOOP
      for(auto ss=U.begin();ss<U.end();ss++){
        Uds[ss](0)(mu) = U[ss]();
@@ -477,8 +465,8 @@ namespace Grid {
      }
    }

    inline void InsertForce4D(GaugeField &mat, FermionField &Btilde,
                              FermionField &A, int mu) {
    inline void InsertForce4D(GaugeField &mat, FermionField &Btilde, FermionField &A, int mu) {

      // DhopDir provides U or Uconj depending on coor/flavour.
      GaugeLinkField link(mat._grid);
      // use lorentz for flavour as hack.
@@ -491,8 +479,8 @@ namespace Grid {
      return;
    }

    inline void InsertForce5D(GaugeField &mat, FermionField &Btilde,
                              FermionField &Atilde, int mu) {
    inline void InsertForce5D(GaugeField &mat, FermionField &Btilde, FermionField &Atilde, int mu) {

      int Ls = Btilde._grid->_fdimensions[0];

      GaugeLinkField tmp(mat._grid);
@@ -508,13 +496,13 @@ namespace Grid {
      PokeIndex<LorentzIndex>(mat, tmp, mu);
      return;
    }

  };

  typedef WilsonImpl<vComplex,  FundamentalRepresentation > WilsonImplR;  // Real.. whichever prec
  typedef WilsonImpl<vComplexF, FundamentalRepresentation > WilsonImplF;  // Float
  typedef WilsonImpl<vComplexD, FundamentalRepresentation > WilsonImplD;  // Double

  typedef WilsonImpl<vComplex,  FundamentalRepresentation, ComplexD > ZWilsonImplR; // Real.. whichever prec
  typedef WilsonImpl<vComplexF, FundamentalRepresentation, ComplexD > ZWilsonImplF; // Float
  typedef WilsonImpl<vComplexD, FundamentalRepresentation, ComplexD > ZWilsonImplD; // Double
@@ -538,6 +526,7 @@ namespace Grid {
  typedef GparityWilsonImpl<vComplex , Nc> GparityWilsonImplR; // Real.. whichever prec
  typedef GparityWilsonImpl<vComplexF, Nc> GparityWilsonImplF; // Float
  typedef GparityWilsonImpl<vComplexD, Nc> GparityWilsonImplD; // Double
}
}

}}

#endif
@@ -222,7 +222,7 @@ void WilsonFermion<Impl>::DerivInternal(StencilImpl &st, DoubledGaugeField &U,

////////////////////////
PARALLEL_FOR_LOOP
for (int sss = 0; sss < B._grid->oSites(); sss++) {
Kernels::DiracOptDhopDir(st, U, st.comm_buf, sss, sss, B, Btilde, mu,
Kernels::DiracOptDhopDir(st, U, st.CommBuf(), sss, sss, B, Btilde, mu,
gamma);
}

@@ -333,7 +333,7 @@ void WilsonFermion<Impl>::DhopDirDisp(const FermionField &in, FermionField &out,

PARALLEL_FOR_LOOP
for (int sss = 0; sss < in._grid->oSites(); sss++) {
Kernels::DiracOptDhopDir(Stencil, Umu, Stencil.comm_buf, sss, sss, in, out,
Kernels::DiracOptDhopDir(Stencil, Umu, Stencil.CommBuf(), sss, sss, in, out,
dirdisp, gamma);
}
};

@@ -351,13 +351,13 @@ void WilsonFermion<Impl>::DhopInternal(StencilImpl &st, LebesgueOrder &lo,

if (dag == DaggerYes) {
PARALLEL_FOR_LOOP
for (int sss = 0; sss < in._grid->oSites(); sss++) {
Kernels::DiracOptDhopSiteDag(st, lo, U, st.comm_buf, sss, sss, 1, 1, in,
Kernels::DiracOptDhopSiteDag(st, lo, U, st.CommBuf(), sss, sss, 1, 1, in,
out);
}
} else {
PARALLEL_FOR_LOOP
for (int sss = 0; sss < in._grid->oSites(); sss++) {
Kernels::DiracOptDhopSite(st, lo, U, st.comm_buf, sss, sss, 1, 1, in,
Kernels::DiracOptDhopSite(st, lo, U, st.CommBuf(), sss, sss, 1, 1, in,
out);
}
}
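Note the recurring change in these hunks: direct access to the stencil's public `comm_buf` member is replaced everywhere by a `CommBuf()` accessor returning a raw `SiteHalfSpinor*`. A minimal, self-contained sketch of what such an accessor can look like (the member name `u_comm_buf` and the payload struct here are assumptions for illustration, not the actual Grid declarations):

```cpp
#include <vector>

// Illustrative stand-in for Grid's SiteHalfSpinor payload.
struct SiteHalfSpinor { double v[12]; };

class StencilSketch {
  // Previously a public comm_buf member accessed directly by the kernels.
  std::vector<SiteHalfSpinor> u_comm_buf;
public:
  // Kernels now take a SiteHalfSpinor*; the accessor hides how the buffer is
  // allocated, so the storage could later move (e.g. into an MPI shared-memory
  // window) without touching any kernel call site.
  SiteHalfSpinor *CommBuf(void) { return u_comm_buf.data(); }
};
```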
@@ -185,18 +185,14 @@ void WilsonFermion5D<Impl>::Report(void)

if ( DhopCalls > 0 ) {
std::cout << GridLogMessage << "#### Dhop calls report " << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Number of Dhop Calls : " << DhopCalls << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Total Communication time : " << DhopCommTime
<< " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D CommTime/Calls : "
<< DhopCommTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Total Compute time : "
<< DhopComputeTime << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D ComputeTime/Calls : "
<< DhopComputeTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Total Communication time : " << DhopCommTime<< " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D CommTime/Calls : " << DhopCommTime / DhopCalls << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Total Compute time : " << DhopComputeTime << " us" << std::endl;
std::cout << GridLogMessage << "WilsonFermion5D ComputeTime/Calls : " << DhopComputeTime / DhopCalls << " us" << std::endl;

RealD mflops = 1344*volume*DhopCalls/DhopComputeTime;
RealD mflops = 1344*volume*DhopCalls/DhopComputeTime/2; // 2 for red black counting
std::cout << GridLogMessage << "Average mflops/s per call : " << mflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node : " << mflops/NP << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per rank : " << mflops/NP << std::endl;

}
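The new `/2` compensates for `DhopCalls` being incremented twice per full `Dhop` (once per checkerboard; see the `DhopOE`/`DhopEO`/`Dhop` hunks below), while 1344 is the flop count Grid charges per 4d site for one Wilson hop. A hedged back-of-envelope check of that counting (the breakdown below is the conventional SU(3) arithmetic, not quoted from the source):

```cpp
// Rough flop budget per site for the Wilson hop, assuming standard counting
// (complex mul = 6 flops, complex add = 2 flops):
//   8 directions x 2 half-spinor colour vectors x SU(3) mat-vec
//     = 8 * 2 * (9 muls + 6 adds) = 8 * 2 * 66   = 1056 flops
//   spinor accumulation / reconstruction          ~  288 flops
//   total                                         ~ 1344 flops per site
const double flopsPerSite = 1344.0;

double mflops(double volume, double calls, double usecs) {
  return flopsPerSite * volume * calls / usecs / 2.0; // /2: calls counts checkerboards
}
```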
@@ -210,12 +206,9 @@ void WilsonFermion5D<Impl>::Report(void)

std::cout << GridLogMessage << "WilsonFermion5D Total Dhop Compute time : " <<DerivDhopComputeTime <<" us"<<std::endl;
std::cout << GridLogMessage << "WilsonFermion5D Dhop ComputeTime/Calls : " <<DerivDhopComputeTime/DerivCalls<<" us" <<std::endl;

RealD mflops = 144*volume*DerivCalls/DerivDhopComputeTime;
std::cout << GridLogMessage << "Average mflops/s per call : " << mflops << std::endl;
std::cout << GridLogMessage << "Average mflops/s per call per node : " << mflops/NP << std::endl;

}

if (DerivCalls > 0 || DhopCalls > 0){

@@ -275,7 +268,7 @@ PARALLEL_FOR_LOOP

for(int s=0;s<Ls;s++){
int sU=ss;
int sF = s+Ls*sU;
Kernels::DiracOptDhopDir(Stencil,Umu,Stencil.comm_buf,sF,sU,in,out,dirdisp,gamma);
Kernels::DiracOptDhopDir(Stencil,Umu,Stencil.CommBuf(),sF,sU,in,out,dirdisp,gamma);
}
}
};

@@ -327,8 +320,7 @@ void WilsonFermion5D<Impl>::DerivInternal(StencilImpl & st,

assert(sF < B._grid->oSites());
assert(sU < U._grid->oSites());

Kernels::DiracOptDhopDir(st, U, st.comm_buf, sF, sU, B, Btilde, mu,
gamma);
Kernels::DiracOptDhopDir(st, U, st.CommBuf(), sF, sU, B, Btilde, mu, gamma);

////////////////////////////
// spin trace outer product

@@ -396,7 +388,6 @@ void WilsonFermion5D<Impl>::DhopInternal(StencilImpl & st, LebesgueOrder &lo,

DoubledGaugeField & U,
const FermionField &in, FermionField &out,int dag)
{
DhopCalls++;
// assert((dag==DaggerNo) ||(dag==DaggerYes));
Compressor compressor(dag);

@@ -413,8 +404,7 @@ void WilsonFermion5D<Impl>::DhopInternal(StencilImpl & st, LebesgueOrder &lo,

for (int ss = 0; ss < U._grid->oSites(); ss++) {
int sU = ss;
int sF = LLs * sU;
Kernels::DiracOptDhopSiteDag(st, lo, U, st.comm_buf, sF, sU, LLs, 1, in,
out);
Kernels::DiracOptDhopSiteDag(st, lo, U, st.CommBuf(), sF, sU, LLs, 1, in, out);
}
#ifdef AVX512
} else if (stat.is_init() ) {

@@ -428,11 +418,10 @@ void WilsonFermion5D<Impl>::DhopInternal(StencilImpl & st, LebesgueOrder &lo,

int mythread = omp_get_thread_num();
stat.enter(mythread);
#pragma omp for nowait
for(int ss=0;ss<U._grid->oSites();ss++)
{
for(int ss=0;ss<U._grid->oSites();ss++) {
int sU=ss;
int sF=LLs*sU;
Kernels::DiracOptDhopSite(st,lo,U,st.comm_buf,sF,sU,LLs,1,in,out);
Kernels::DiracOptDhopSite(st,lo,U,st.CommBuf(),sF,sU,LLs,1,in,out);
}
stat.exit(mythread);
}

@@ -443,8 +432,7 @@ void WilsonFermion5D<Impl>::DhopInternal(StencilImpl & st, LebesgueOrder &lo,

for (int ss = 0; ss < U._grid->oSites(); ss++) {
int sU = ss;
int sF = LLs * sU;
Kernels::DiracOptDhopSite(st, lo, U, st.comm_buf, sF, sU, LLs, 1, in,
out);
Kernels::DiracOptDhopSite(st,lo,U,st.CommBuf(),sF,sU,LLs,1,in,out);
}
}
DhopComputeTime+=usecond();

@@ -454,6 +442,7 @@ void WilsonFermion5D<Impl>::DhopInternal(StencilImpl & st, LebesgueOrder &lo,

template<class Impl>
void WilsonFermion5D<Impl>::DhopOE(const FermionField &in, FermionField &out,int dag)
{
DhopCalls++;
conformable(in._grid,FermionRedBlackGrid()); // verifies half grid
conformable(in._grid,out._grid); // drops the cb check

@@ -465,6 +454,7 @@ void WilsonFermion5D<Impl>::DhopOE(const FermionField &in, FermionField &out,int

template<class Impl>
void WilsonFermion5D<Impl>::DhopEO(const FermionField &in, FermionField &out,int dag)
{
DhopCalls++;
conformable(in._grid,FermionRedBlackGrid()); // verifies half grid
conformable(in._grid,out._grid); // drops the cb check

@@ -476,6 +466,7 @@ void WilsonFermion5D<Impl>::DhopEO(const FermionField &in, FermionField &out,int

template<class Impl>
void WilsonFermion5D<Impl>::Dhop(const FermionField &in, FermionField &out,int dag)
{
DhopCalls+=2;
conformable(in._grid,FermionGrid()); // verifies full grid
conformable(in._grid,out._grid);
@@ -34,9 +34,19 @@ Author: paboyle <paboyle@ph.ed.ac.uk>

#include <Grid/Stat.h>

namespace Grid {

namespace QCD {

////////////////////////////////////////////////////////////////////////////////
// This is the 4d red black case appropriate to support
//
// parity = (x+y+z+t)|2;
// generalised five dim fermions like mobius, zolotarev etc..
//
// i.e. even even contains fifth dim hopping term.
//
// [DIFFERS from original CPS red black implementation parity = (x+y+z+t+s)|2 ]
////////////////////////////////////////////////////////////////////////////////
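In other words, the checkerboard of a 5d site is determined by its 4d coordinates alone; `|2` in the comment denotes modulo 2. A one-line illustration (variable names are illustrative only):

```cpp
// 4d red-black parity: the fifth (s) coordinate does not enter, so every
// s-slice above a given 4d site shares one checkerboard, and the fifth-dim
// hopping term connects even to even (and odd to odd) sites.
int parity4d(int x, int y, int z, int t /*, s unused */) {
  return (x + y + z + t) % 2;
}
```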

////////////////////////////////////////////////////////////////////////////////
// This is the 4d red black case appropriate to support
//

@@ -185,7 +195,7 @@ namespace Grid {

std::vector<SiteHalfSpinor,alignedAllocator<SiteHalfSpinor> > comm_buf;

};
}
}

}}

#endif
@@ -43,9 +43,8 @@ WilsonKernels<Impl>::WilsonKernels(const ImplParams &p) : Base(p){};

////////////////////////////////////////////

template <class Impl>
void WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf, int sF,
void WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
SiteHalfSpinor *buf, int sF,
int sU, const FermionField &in, FermionField &out) {
SiteHalfSpinor tmp;
SiteHalfSpinor chi;

@@ -220,9 +219,8 @@ void WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(

// Need controls to do interior, exterior, or both
template <class Impl>
void WilsonKernels<Impl>::DiracOptGenericDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf, int sF,
void WilsonKernels<Impl>::DiracOptGenericDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
SiteHalfSpinor *buf, int sF,
int sU, const FermionField &in, FermionField &out) {
SiteHalfSpinor tmp;
SiteHalfSpinor chi;

@@ -396,10 +394,9 @@ void WilsonKernels<Impl>::DiracOptGenericDhopSite(

};

template <class Impl>
void WilsonKernels<Impl>::DiracOptDhopDir(
StencilImpl &st, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf, int sF,
void WilsonKernels<Impl>::DiracOptDhopDir( StencilImpl &st, DoubledGaugeField &U,SiteHalfSpinor *buf, int sF,
int sU, const FermionField &in, FermionField &out, int dir, int gamma) {

SiteHalfSpinor tmp;
SiteHalfSpinor chi;
SiteSpinor result;
@@ -32,7 +32,6 @@ directory

#define GRID_QCD_DHOP_H

namespace Grid {

namespace QCD {

////////////////////////////////////////////////////////////////////////////////////////////////////////////////

@@ -56,16 +55,11 @@ namespace Grid {

template <bool EnableBool = true>
typename std::enable_if<Impl::Dimension == 3 && Nc == 3 &&EnableBool, void>::type
DiracOptDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out) {
DiracOptDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in, FermionField &out) {
#ifdef AVX512
if (AsmOpt) {
WilsonKernels<Impl>::DiracOptAsmDhopSite(st, lo, U, buf, sF, sU, Ls, Ns,
in, out);

WilsonKernels<Impl>::DiracOptAsmDhopSite(st,lo,U,buf,sF,sU,Ls,Ns,in,out);
} else {
#else
{

@@ -73,11 +67,9 @@ namespace Grid {

for (int site = 0; site < Ns; site++) {
for (int s = 0; s < Ls; s++) {
if (HandOpt)
WilsonKernels<Impl>::DiracOptHandDhopSite(st, lo, U, buf, sF, sU,
in, out);
WilsonKernels<Impl>::DiracOptHandDhopSite(st,lo,U,buf,sF,sU,in,out);
else
WilsonKernels<Impl>::DiracOptGenericDhopSite(st, lo, U, buf, sF, sU,
in, out);
WilsonKernels<Impl>::DiracOptGenericDhopSite(st,lo,U,buf,sF,sU,in,out);
sF++;
}
sU++;

@@ -87,15 +79,12 @@ namespace Grid {

template <bool EnableBool = true>
typename std::enable_if<(Impl::Dimension != 3 || (Impl::Dimension == 3 && Nc != 3)) && EnableBool, void>::type
DiracOptDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out) {
DiracOptDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in, FermionField &out) {

for (int site = 0; site < Ns; site++) {
for (int s = 0; s < Ls; s++) {
WilsonKernels<Impl>::DiracOptGenericDhopSite(st, lo, U, buf, sF, sU, in,
out);
WilsonKernels<Impl>::DiracOptGenericDhopSite(st, lo, U, buf, sF, sU, in, out);
sF++;
}
sU++;

@@ -103,17 +92,12 @@ namespace Grid {

}

template <bool EnableBool = true>
typename std::enable_if<Impl::Dimension == 3 && Nc == 3 && EnableBool,
void>::type
DiracOptDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out) {
typename std::enable_if<Impl::Dimension == 3 && Nc == 3 && EnableBool,void>::type
DiracOptDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in, FermionField &out) {
#ifdef AVX512
if (AsmOpt) {
WilsonKernels<Impl>::DiracOptAsmDhopSiteDag(st, lo, U, buf, sF, sU, Ls,
Ns, in, out);
WilsonKernels<Impl>::DiracOptAsmDhopSiteDag(st,lo,U,buf,sF,sU,Ls,Ns,in,out);
} else {
#else
{

@@ -121,11 +105,9 @@ namespace Grid {

for (int site = 0; site < Ns; site++) {
for (int s = 0; s < Ls; s++) {
if (HandOpt)
WilsonKernels<Impl>::DiracOptHandDhopSiteDag(st, lo, U, buf, sF, sU,
in, out);
WilsonKernels<Impl>::DiracOptHandDhopSiteDag(st,lo,U,buf,sF,sU,in,out);
else
WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(st, lo, U, buf, sF,
sU, in, out);
WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(st,lo,U,buf,sF,sU,in,out);
sF++;
}
sU++;

@@ -134,73 +116,48 @@ namespace Grid {

}

template <bool EnableBool = true>
typename std::enable_if<
(Impl::Dimension != 3 || (Impl::Dimension == 3 && Nc != 3)) && EnableBool,
void>::type
DiracOptDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out) {
typename std::enable_if<(Impl::Dimension != 3 || (Impl::Dimension == 3 && Nc != 3)) && EnableBool,void>::type
DiracOptDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in, FermionField &out) {

for (int site = 0; site < Ns; site++) {
for (int s = 0; s < Ls; s++) {
WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(st, lo, U, buf, sF, sU,
in, out);
WilsonKernels<Impl>::DiracOptGenericDhopSiteDag(st,lo,U,buf,sF,sU,in,out);
sF++;
}
sU++;
}
}

void DiracOptDhopDir(
StencilImpl &st, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, const FermionField &in, FermionField &out, int dirdisp,
int gamma);
void DiracOptDhopDir(StencilImpl &st, DoubledGaugeField &U,SiteHalfSpinor * buf,
int sF, int sU, const FermionField &in, FermionField &out, int dirdisp, int gamma);

private:
// Specialised variants
void DiracOptGenericDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
void DiracOptGenericDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, const FermionField &in, FermionField &out);

void DiracOptGenericDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
void DiracOptGenericDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, const FermionField &in, FermionField &out);

void DiracOptAsmDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out);
void DiracOptAsmDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,FermionField &out);

void DiracOptAsmDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int sF, int sU, int Ls, int Ns, const FermionField &in,
FermionField &out);
void DiracOptAsmDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, int Ls, int Ns, const FermionField &in, FermionField &out);

void DiracOptHandDhopSite(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
void DiracOptHandDhopSite(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, const FermionField &in, FermionField &out);

void DiracOptHandDhopSiteDag(
StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
void DiracOptHandDhopSiteDag(StencilImpl &st, LebesgueOrder &lo, DoubledGaugeField &U, SiteHalfSpinor * buf,
int sF, int sU, const FermionField &in, FermionField &out);

public:

WilsonKernels(const ImplParams &p = ImplParams());

};

}
}

}}

#endif
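The paired `DiracOptDhopSite`/`DiracOptDhopSiteDag` overloads above use `std::enable_if` so the asm/hand-optimised route is only instantiated when `Impl::Dimension == 3 && Nc == 3`, with the generic kernel as fallback otherwise; the `EnableBool` default template parameter makes the condition dependent, so SFINAE can remove the non-matching overload. A minimal, self-contained illustration of the same dispatch technique (not Grid code):

```cpp
#include <iostream>
#include <type_traits>

template <int Dimension>
struct Kernel {
  // Selected when Dimension == 3: stands in for the optimised route.
  template <bool EnableBool = true>
  typename std::enable_if<Dimension == 3 && EnableBool, void>::type
  site() { std::cout << "optimised kernel\n"; }

  // Selected otherwise: stands in for the generic route.
  template <bool EnableBool = true>
  typename std::enable_if<Dimension != 3 && EnableBool, void>::type
  site() { std::cout << "generic kernel\n"; }
};

int main() {
  Kernel<3>{}.site();  // prints "optimised kernel"
  Kernel<4>{}.site();  // prints "generic kernel"
}
```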
@@ -38,26 +38,22 @@ namespace Grid {

///////////////////////////////////////////////////////////
// Default to no assembler implementation
///////////////////////////////////////////////////////////
template<class Impl>
void WilsonKernels<Impl >::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
{
assert(0);
}
template<class Impl>
void WilsonKernels<Impl >::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<class Impl> void
WilsonKernels<Impl >::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
{
assert(0);
}

template<class Impl> void
WilsonKernels<Impl >::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
{
assert(0);
}

#if defined(AVX512)

///////////////////////////////////////////////////////////
// If we are AVX512 specialise the single precision routine
///////////////////////////////////////////////////////////

@@ -84,16 +80,14 @@ namespace Grid {

#define FX(A) WILSONASM_ ##A

#undef KERNEL_DAG
template<>
void WilsonKernels<WilsonImplF>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<WilsonImplF>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
#include <qcd/action/fermion/WilsonKernelsAsmBody.h>

#define KERNEL_DAG
template<>
void WilsonKernels<WilsonImplF>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<WilsonImplF>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
#include <qcd/action/fermion/WilsonKernelsAsmBody.h>

@@ -109,31 +103,26 @@ namespace Grid {

#define MULT_2SPIN(ptr,pf) MULT_ADDSUB_2SPIN_LS(ptr,pf)

#undef KERNEL_DAG
template<>
void WilsonKernels<DomainWallVec5dImplF>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<DomainWallVec5dImplF>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U, SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
#include <qcd/action/fermion/WilsonKernelsAsmBody.h>

#define KERNEL_DAG
template<>
void WilsonKernels<DomainWallVec5dImplF>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<DomainWallVec5dImplF>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out)
#include <qcd/action/fermion/WilsonKernelsAsmBody.h>

#endif

#define INSTANTIATE_ASM(A)\
template void WilsonKernels<A>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,\
commVector<SiteHalfSpinor> &buf,\
template void WilsonKernels<A>::DiracOptAsmDhopSite(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U, SiteHalfSpinor *buf,\
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out);\
template void WilsonKernels<A>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U,\
commVector<SiteHalfSpinor> &buf,\
\
template void WilsonKernels<A>::DiracOptAsmDhopSiteDag(StencilImpl &st,LebesgueOrder & lo,DoubledGaugeField &U, SiteHalfSpinor *buf,\
int ss,int ssU,int Ls,int Ns,const FermionField &in, FermionField &out);\

INSTANTIATE_ASM(WilsonImplF);
INSTANTIATE_ASM(WilsonImplD);
INSTANTIATE_ASM(ZWilsonImplF);

@@ -144,6 +133,6 @@ INSTANTIATE_ASM(DomainWallVec5dImplF);

INSTANTIATE_ASM(DomainWallVec5dImplD);
INSTANTIATE_ASM(ZDomainWallVec5dImplF);
INSTANTIATE_ASM(ZDomainWallVec5dImplD);
}
}

}}
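The `#include <qcd/action/fermion/WilsonKernelsAsmBody.h>` lines above supply the function bodies: the same body header is compiled repeatedly while `KERNEL_DAG` is toggled with `#define`/`#undef`, generating the normal and daggered assembly kernels from one source. A toy, self-contained sketch of the toggle-and-reinclude idea (everything below is illustrative, not Grid's actual body header; the shared body is simulated with a macro so the sketch fits one file):

```cpp
#include <cstdio>

// In real code this body lives in its own header and is #include'd twice.
#define KERNEL_BODY           \
  {                           \
    double sign = 1.0;        \
    IFDAG(sign = -1.0;)       \
    return sign * x;          \
  }

#define IFDAG(code)        // KERNEL_DAG off: daggered code dropped
double dhop(double x) KERNEL_BODY

#undef  IFDAG
#define IFDAG(code) code   // KERNEL_DAG on: daggered code kept
double dhop_dag(double x) KERNEL_BODY

int main() { std::printf("%g %g\n", dhop(2.0), dhop_dag(2.0)); } // prints: 2 -2
```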
@@ -311,9 +311,8 @@ namespace Grid {

namespace QCD {

template<class Impl>
void WilsonKernels<Impl>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<class Impl> void
WilsonKernels<Impl>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionField &in, FermionField &out)
{
typedef typename Simd::scalar_type S;

@@ -555,8 +554,7 @@ namespace QCD {

}

template<class Impl>
void WilsonKernels<Impl>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
void WilsonKernels<Impl>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int ss,int sU,const FermionField &in, FermionField &out)
{
// std::cout << "Hand op Dhop "<<std::endl;

@@ -798,37 +796,34 @@ namespace QCD {

}
}

////////////////////////////////////////////////
// Specialise Gparity to simple implementation
////////////////////////////////////////////////
template<>
void WilsonKernels<GparityWilsonImplF>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<GparityWilsonImplF>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
SiteHalfSpinor *buf,
int sF,int sU,const FermionField &in, FermionField &out)
{
assert(0);
}

template<>
void WilsonKernels<GparityWilsonImplF>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<GparityWilsonImplF>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
SiteHalfSpinor *buf,
int sF,int sU,const FermionField &in, FermionField &out)
{
assert(0);
}

template<>
void WilsonKernels<GparityWilsonImplD>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<GparityWilsonImplD>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int sF,int sU,const FermionField &in, FermionField &out)
{
assert(0);
}

template<>
void WilsonKernels<GparityWilsonImplD>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,
commVector<SiteHalfSpinor> &buf,
template<> void
WilsonKernels<GparityWilsonImplD>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,
int sF,int sU,const FermionField &in, FermionField &out)
{
assert(0);

@@ -840,11 +835,9 @@ void WilsonKernels<GparityWilsonImplD>::DiracOptHandDhopSiteDag(StencilImpl &st,

// Need Nc=3 though //

#define INSTANTIATE_THEM(A) \
template void WilsonKernels<A>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,\
commVector<SiteHalfSpinor> &buf,\
template void WilsonKernels<A>::DiracOptHandDhopSite(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,\
int ss,int sU,const FermionField &in, FermionField &out); \
template void WilsonKernels<A>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,\
commVector<SiteHalfSpinor> &buf,\
template void WilsonKernels<A>::DiracOptHandDhopSiteDag(StencilImpl &st,LebesgueOrder &lo,DoubledGaugeField &U,SiteHalfSpinor *buf,\
int ss,int sU,const FermionField &in, FermionField &out);

INSTANTIATE_THEM(WilsonImplF);
@@ -151,12 +151,19 @@ namespace QCD{

{
auto *grid = dynamic_cast<GridCartesian *>(out._grid);
const unsigned int nd = grid->_ndimension;
std::vector<int> latt_size = grid->_fdimensions;
GaugeLinkField sqrtK2Inv(grid), r(grid);
GaugeField aTilde(grid);
FFT fft(grid);

Integer vol = 1;
for(int d = 0; d < nd; d++)
{
vol = vol * latt_size[d];
}

invKHatSquared(sqrtK2Inv);
sqrtK2Inv = sqrt(real(sqrtK2Inv));
sqrtK2Inv = sqrt(vol*real(sqrtK2Inv));
zmSub(sqrtK2Inv);
for(int mu = 0; mu < nd; mu++)
{
@ -674,6 +674,37 @@ class SU {
|
||||
out += la;
|
||||
}
|
||||
}
|
||||
/*
|
||||
add GaugeTrans
|
||||
*/
|
||||
|
||||
template<typename GaugeField,typename GaugeMat>
|
||||
static void GaugeTransform( GaugeField &Umu, GaugeMat &g){
|
||||
GridBase *grid = Umu._grid;
|
||||
conformable(grid,g._grid);
|
||||
|
||||
GaugeMat U(grid);
|
||||
GaugeMat ag(grid); ag = adj(g);
|
||||
|
||||
for(int mu=0;mu<Nd;mu++){
|
||||
U= PeekIndex<LorentzIndex>(Umu,mu);
|
||||
U = g*U*Cshift(ag, mu, 1);
|
||||
PokeIndex<LorentzIndex>(Umu,U,mu);
|
||||
}
|
||||
}
|
||||
template<typename GaugeMat>
|
||||
static void GaugeTransform( std::vector<GaugeMat> &U, GaugeMat &g){
|
||||
GridBase *grid = g._grid;
|
||||
GaugeMat ag(grid); ag = adj(g);
|
||||
for(int mu=0;mu<Nd;mu++){
|
||||
U[mu] = g*U[mu]*Cshift(ag, mu, 1);
|
||||
}
|
||||
}
|
||||
template<typename GaugeField,typename GaugeMat>
|
||||
static void RandomGaugeTransform(GridParallelRNG &pRNG, GaugeField &Umu, GaugeMat &g){
|
||||
LieRandomize(pRNG,g,1.0);
|
||||
GaugeTransform(Umu,g);
|
||||
}
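The operation the new `GaugeTransform` implements is the standard lattice gauge rotation; in the usual notation:

```latex
U_\mu(x) \;\longrightarrow\; g(x)\, U_\mu(x)\, g^\dagger(x+\hat\mu)
```

which is exactly `g*U*Cshift(ag, mu, 1)` with `ag = adj(g)`, since `Cshift(ag, mu, 1)` fetches `adj(g)` from the neighbouring site in the +mu direction.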

// Projects the algebra components a lattice matrix (of dimension ncol*ncol -1 )
// inverse operation: FundamentalLieAlgebraMatrix
@@ -42,20 +42,14 @@ Author: paboyle <paboyle@ph.ed.ac.uk>

namespace Grid{
namespace Optimization {

template<class vtype>
union uconv {
__m512 f;
vtype v;
};

union u512f {
__m512 v;
float f[8];
float f[16];
};

union u512d {
__m512 v;
double f[4];
__m512d v;
double f[8];
};
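This hunk fixes the AVX-512 helper unions: a 512-bit register holds 16 single-precision or 8 double-precision lanes, and the double variant must overlay `__m512d`, not `__m512`. A pair of compile-time checks one could add to guard against this (illustrative, not part of the patch; requires compiling with AVX-512 enabled):

```cpp
#include <immintrin.h>

// 512-bit SIMD registers: 16 floats or 8 doubles.
static_assert(sizeof(__m512)  == 16 * sizeof(float),  "__m512 overlays 16 floats");
static_assert(sizeof(__m512d) ==  8 * sizeof(double), "__m512d overlays 8 doubles");
```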

struct Vsplat{
@@ -116,7 +116,7 @@ int main (int argc, char ** argv)

else if (SE->_is_local)
Check._odata[i] = Foo._odata[SE->_offset];
else
Check._odata[i] = myStencil.comm_buf[SE->_offset];
Check._odata[i] = myStencil.CommBuf()[SE->_offset];
}

Real nrmC = norm2(Check);

@@ -207,7 +207,7 @@ int main (int argc, char ** argv)

else if (SE->_is_local)
OCheck._odata[i] = EFoo._odata[SE->_offset];
else
OCheck._odata[i] = EStencil.comm_buf[SE->_offset];
OCheck._odata[i] = EStencil.CommBuf()[SE->_offset];
}
for(int i=0;i<ECheck._grid->oSites();i++){
int permute_type;

@@ -220,7 +220,7 @@ int main (int argc, char ** argv)

else if (SE->_is_local)
ECheck._odata[i] = OFoo._odata[SE->_offset];
else
ECheck._odata[i] = OStencil.comm_buf[SE->_offset];
ECheck._odata[i] = OStencil.CommBuf()[SE->_offset];
}

setCheckerboard(Check,ECheck);
@@ -86,11 +86,12 @@ int main (int argc, char ** argv)

FFT theFFT(&GRID);

Ctilde=C;
std::cout<<" Benchmarking FFT of LatticeComplex "<<std::endl;
theFFT.FFT_dim(Ctilde,C,0,FFT::forward); C=Ctilde; std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,C,1,FFT::forward); C=Ctilde; std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,C,2,FFT::forward); C=Ctilde; std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,C,3,FFT::forward); std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,Ctilde,0,FFT::forward); std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,Ctilde,1,FFT::forward); std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,Ctilde,2,FFT::forward); std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;
theFFT.FFT_dim(Ctilde,Ctilde,3,FFT::forward); std::cout << theFFT.MFlops()<<" Mflops "<<std::endl;

// C=zero;
// Ctilde = where(abs(Ctilde)<1.0e-10,C,Ctilde);

@@ -113,10 +114,11 @@ int main (int argc, char ** argv)

Cref= Cref - C;
std::cout << " invertible check " << norm2(Cref)<<std::endl;

Stilde=S;
std::cout<<" Benchmarking FFT of LatticeSpinMatrix "<<std::endl;
theFFT.FFT_dim(Stilde,S,0,FFT::forward); S=Stilde;std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,1,FFT::forward); S=Stilde;std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,2,FFT::forward); S=Stilde;std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,0,FFT::forward); std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,1,FFT::forward); std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,2,FFT::forward); std::cout << theFFT.MFlops()<<" mflops "<<std::endl;
theFFT.FFT_dim(Stilde,S,3,FFT::forward); std::cout << theFFT.MFlops()<<" mflops "<<std::endl;

SpinMatrixD Sp;

@@ -441,6 +443,8 @@ int main (int argc, char ** argv)

}

{
/*
*
typedef GaugeImplTypes<vComplexD, 1> QEDGimplTypesD;
typedef Photon<QEDGimplTypesD> QEDGaction;

@@ -450,6 +454,7 @@ int main (int argc, char ** argv)

Maxwell.FreePropagator (Source,Prop);
std::cout << " MaxwellFree propagator\n";
*/
}
Grid_finalize();
}
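The benchmark change above removes the intermediate copy: after seeding `Ctilde=C`, successive per-dimension transforms run in place on `Ctilde`, composing the full 4d FFT. A hedged sketch of the pattern (assumes a Grid `FFT` object and the `LatticeComplex` pair from the test):

```cpp
// Compose a full 4d forward FFT from per-dimension passes, in place after the
// first pass; equivalent to one FFT_all_dim call, but lets each pass be timed.
theFFT.FFT_dim(Ctilde, C, 0, FFT::forward);           // first pass reads C
for (int mu = 1; mu < 4; mu++)
  theFFT.FFT_dim(Ctilde, Ctilde, mu, FFT::forward);   // later passes run in place
```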
tests/core/Test_fft_gfix.cc (new file, 301 lines)
@@ -0,0 +1,301 @@

/*************************************************************************************

Grid physics library, www.github.com/paboyle/Grid

Source file: ./tests/Test_cshift.cc

Copyright (C) 2015

Author: Azusa Yamaguchi <ayamaguc@staffmail.ed.ac.uk>
Author: Peter Boyle <paboyle@ph.ed.ac.uk>

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

See the full license in the file "LICENSE" in the top level distribution directory
*************************************************************************************/
/* END LEGAL */
#include <Grid/Grid.h>
#include <Grid/qcd/action/gauge/Photon.h>

using namespace Grid;
using namespace Grid::QCD;

template <class Gimpl>
class FourierAcceleratedGaugeFixer : public Gimpl {
public:
INHERIT_GIMPL_TYPES(Gimpl);

typedef typename Gimpl::GaugeLinkField GaugeMat;
typedef typename Gimpl::GaugeField GaugeLorentz;

static void GaugeLinkToLieAlgebraField(const std::vector<GaugeMat> &U,std::vector<GaugeMat> &A) {
for(int mu=0;mu<Nd;mu++){
// ImplComplex cmi(0.0,-1.0);
ComplexD cmi(0.0,-1.0);
A[mu] = Ta(U[mu]) * cmi;
}
}
static void DmuAmu(const std::vector<GaugeMat> &A,GaugeMat &dmuAmu) {
dmuAmu=zero;
for(int mu=0;mu<Nd;mu++){
dmuAmu = dmuAmu + A[mu] - Cshift(A[mu],mu,-1);
}
}
static void SteepestDescentGaugeFix(GaugeLorentz &Umu,RealD & alpha,int maxiter,RealD Omega_tol, RealD Phi_tol) {
GridBase *grid = Umu._grid;

RealD org_plaq =WilsonLoops<Gimpl>::avgPlaquette(Umu);
RealD org_link_trace=WilsonLoops<Gimpl>::linkTrace(Umu);
RealD old_trace = org_link_trace;
RealD trG;

std::vector<GaugeMat> U(Nd,grid);
GaugeMat dmuAmu(grid);

for(int i=0;i<maxiter;i++){
for(int mu=0;mu<Nd;mu++) U[mu]= PeekIndex<LorentzIndex>(Umu,mu);
//trG = SteepestDescentStep(U,alpha,dmuAmu);
trG = FourierAccelSteepestDescentStep(U,alpha,dmuAmu);
for(int mu=0;mu<Nd;mu++) PokeIndex<LorentzIndex>(Umu,U[mu],mu);
// Monitor progress and convergence test
// infrequently to minimise cost overhead
if ( i %20 == 0 ) {
RealD plaq =WilsonLoops<Gimpl>::avgPlaquette(Umu);
RealD link_trace=WilsonLoops<Gimpl>::linkTrace(Umu);

std::cout << GridLogMessage << " Iteration "<<i<< " plaq= "<<plaq<< " dmuAmu " << norm2(dmuAmu)<< std::endl;

RealD Phi = 1.0 - old_trace / link_trace ;
RealD Omega= 1.0 - trG;

std::cout << GridLogMessage << " Iteration "<<i<< " Phi= "<<Phi<< " Omega= " << Omega<< " trG " << trG <<std::endl;
if ( (Omega < Omega_tol) && ( ::fabs(Phi) < Phi_tol) ) {
std::cout << GridLogMessage << "Converged ! "<<std::endl;
return;
}

old_trace = link_trace;

}
}
};
static RealD SteepestDescentStep(std::vector<GaugeMat> &U,RealD & alpha, GaugeMat & dmuAmu) {
GridBase *grid = U[0]._grid;

std::vector<GaugeMat> A(Nd,grid);
GaugeMat g(grid);

GaugeLinkToLieAlgebraField(U,A);
ExpiAlphaDmuAmu(A,g,alpha,dmuAmu);

RealD vol = grid->gSites();
RealD trG = TensorRemove(sum(trace(g))).real()/vol/Nc;

SU<Nc>::GaugeTransform(U,g);

return trG;
}

static RealD FourierAccelSteepestDescentStep(std::vector<GaugeMat> &U,RealD & alpha, GaugeMat & dmuAmu) {

GridBase *grid = U[0]._grid;

RealD vol = grid->gSites();

FFT theFFT((GridCartesian *)grid);

LatticeComplex Fp(grid);
LatticeComplex psq(grid); psq=zero;
LatticeComplex pmu(grid);
LatticeComplex one(grid); one = ComplexD(1.0,0.0);

GaugeMat g(grid);
GaugeMat dmuAmu_p(grid);
std::vector<GaugeMat> A(Nd,grid);

GaugeLinkToLieAlgebraField(U,A);

DmuAmu(A,dmuAmu);

theFFT.FFT_all_dim(dmuAmu_p,dmuAmu,FFT::forward);

//////////////////////////////////
// Work out Fp = psq_max/ psq...
//////////////////////////////////
std::vector<int> latt_size = grid->GlobalDimensions();
std::vector<int> coor(grid->_ndimension,0);
for(int mu=0;mu<Nd;mu++) {

RealD TwoPiL = M_PI * 2.0/ latt_size[mu];
LatticeCoordinate(pmu,mu);
pmu = TwoPiL * pmu ;
psq = psq + 4.0*sin(pmu*0.5)*sin(pmu*0.5);
}

ComplexD psqMax(16.0);
Fp = psqMax*one/psq;

static int once;
if ( once == 0 ) {
std::cout << " Fp " << Fp <<std::endl;
once ++;
}
pokeSite(TComplex(1.0),Fp,coor);

dmuAmu_p = dmuAmu_p * Fp;

theFFT.FFT_all_dim(dmuAmu,dmuAmu_p,FFT::backward);

GaugeMat ciadmam(grid);
ComplexD cialpha(0.0,-alpha);
ciadmam = dmuAmu*cialpha;
SU<Nc>::taExp(ciadmam,g);

RealD trG = TensorRemove(sum(trace(g))).real()/vol/Nc;

SU<Nc>::GaugeTransform(U,g);

return trG;
}

static void ExpiAlphaDmuAmu(const std::vector<GaugeMat> &A,GaugeMat &g,RealD & alpha, GaugeMat &dmuAmu) {
GridBase *grid = g._grid;
ComplexD cialpha(0.0,-alpha);
GaugeMat ciadmam(grid);
DmuAmu(A,dmuAmu);
ciadmam = dmuAmu*cialpha;
SU<Nc>::taExp(ciadmam,g);
}
/*
////////////////////////////////////////////////////////////////
// NB The FT for fields living on links has an extra phase in it
// Could add these to the FFT class as a later task since this code
// might be reused elsewhere ????
////////////////////////////////////////////////////////////////
static void InverseFourierTransformAmu(FFT &theFFT,const std::vector<GaugeMat> &Ap,std::vector<GaugeMat> &Ax) {
GridBase * grid = theFFT.Grid();
std::vector<int> latt_size = grid->GlobalDimensions();

ComplexField pmu(grid);
ComplexField pha(grid);
GaugeMat Apha(grid);

ComplexD ci(0.0,1.0);

for(int mu=0;mu<Nd;mu++){

RealD TwoPiL = M_PI * 2.0/ latt_size[mu];
LatticeCoordinate(pmu,mu);
pmu = TwoPiL * pmu ;
pha = exp(pmu * (0.5 *ci)); // e(ipmu/2) since Amu(x+mu/2)

Apha = Ap[mu] * pha;

theFFT.FFT_all_dim(Apha,Ax[mu],FFT::backward);
}
}
static void FourierTransformAmu(FFT & theFFT,const std::vector<GaugeMat> &Ax,std::vector<GaugeMat> &Ap) {
GridBase * grid = theFFT.Grid();
std::vector<int> latt_size = grid->GlobalDimensions();

ComplexField pmu(grid);
ComplexField pha(grid);
ComplexD ci(0.0,1.0);

// Sign convention for FFTW calls:
// A(x)= Sum_p e^ipx A(p) / V
// A(p)= Sum_p e^-ipx A(x)

for(int mu=0;mu<Nd;mu++){
RealD TwoPiL = M_PI * 2.0/ latt_size[mu];
LatticeCoordinate(pmu,mu);
pmu = TwoPiL * pmu ;
pha = exp(-pmu * (0.5 *ci)); // e(+ipmu/2) since Amu(x+mu/2)

theFFT.FFT_all_dim(Ax[mu],Ap[mu],FFT::backward);
Ap[mu] = Ap[mu] * pha;
}
}
*/
};
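For reference, the update these routines implement is standard Fourier-accelerated steepest-descent Landau gauge fixing; a summary of the code above in the usual notation (the lattice momenta are those built in the `psq` loop):

```latex
A_\mu = \mathrm{Ta}(U_\mu)/i, \qquad
\Delta(x) = \sum_\mu \big[ A_\mu(x) - A_\mu(x-\hat\mu) \big], \qquad
\hat p^2 = \sum_\mu 4\sin^2(p_\mu/2), \\
\tilde\Delta(p) \;\to\; \frac{\hat p^2_{\max}}{\hat p^2}\,\tilde\Delta(p), \qquad
g(x) = \exp\!\big(-i\,\alpha\,\Delta(x)\big), \qquad
U_\mu(x) \;\to\; g(x)\,U_\mu(x)\,g^\dagger(x+\hat\mu)
```

with convergence monitored through `Omega = 1 - trG` and the relative change `Phi` of the link trace.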

int main (int argc, char ** argv)
{
std::vector<int> seeds({1,2,3,4});

Grid_init(&argc,&argv);

int threads = GridThread::GetThreads();

std::vector<int> latt_size = GridDefaultLatt();
std::vector<int> simd_layout( { vComplexD::Nsimd(),1,1,1});
std::vector<int> mpi_layout = GridDefaultMpi();

int vol = 1;
for(int d=0;d<latt_size.size();d++){
vol = vol * latt_size[d];
}

GridCartesian GRID(latt_size,simd_layout,mpi_layout);
GridSerialRNG sRNG; sRNG.SeedFixedIntegers(seeds); // naughty seeding
GridParallelRNG pRNG(&GRID); pRNG.SeedFixedIntegers(seeds);

FFT theFFT(&GRID);

std::cout<<GridLogMessage << "Grid is setup to use "<<threads<<" threads"<<std::endl;

std::cout<< "*****************************************************************" <<std::endl;
std::cout<< "* Testing we can gauge fix steep descent a RGT of Unit gauge    *" <<std::endl;
std::cout<< "*****************************************************************" <<std::endl;

LatticeGaugeFieldD Umu(&GRID);
LatticeGaugeFieldD Uorg(&GRID);
LatticeColourMatrixD g(&GRID); // Gauge xform

SU3::ColdConfiguration(pRNG,Umu); // Unit gauge
Uorg=Umu;

SU3::RandomGaugeTransform(pRNG,Umu,g); // Unit gauge
RealD plaq=WilsonLoops<PeriodicGimplD>::avgPlaquette(Umu);
std::cout << " Initial plaquette "<<plaq << std::endl;

RealD alpha=0.1;
FourierAcceleratedGaugeFixer<PeriodicGimplD>::SteepestDescentGaugeFix(Umu,alpha,10000,1.0e-10, 1.0e-10);

plaq=WilsonLoops<PeriodicGimplD>::avgPlaquette(Umu);
std::cout << " Final plaquette "<<plaq << std::endl;

Uorg = Uorg - Umu;
std::cout << " Norm Difference "<< norm2(Uorg) << std::endl;

// std::cout<< "*****************************************************************" <<std::endl;
// std::cout<< "* Testing Fourier accelerated fixing                            *" <<std::endl;
// std::cout<< "*****************************************************************" <<std::endl;

// std::cout<< "*****************************************************************" <<std::endl;
// std::cout<< "* Testing non-unit configuration                                *" <<std::endl;
// std::cout<< "*****************************************************************" <<std::endl;

Grid_finalize();
}