mirror of https://github.com/paboyle/Grid.git synced 2024-09-20 09:15:38 +01:00

Threading support rework.

Placed parallel pragmas as macros; implemented deterministic thread reduction in the style of BFM.
Peter Boyle 2015-05-12 07:51:41 +01:00
parent b1d2c60d07
commit 6103c29ee3
26 changed files with 276 additions and 184 deletions
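The macro definitions themselves are not part of this diff, so purely as a point of reference, here is a minimal sketch of what PARALLEL_FOR_LOOP and PARALLEL_NESTED_LOOP(n) plausibly expand to when OpenMP is enabled (guarded on the standard _OPENMP symbol; the GRID_PRAGMA helper name is an illustrative assumption, not taken from this commit):

// Sketch only -- the real definitions live elsewhere in the tree, not in this diff.
#ifdef _OPENMP
#define GRID_PRAGMA(x)          _Pragma(#x)
#define PARALLEL_FOR_LOOP       GRID_PRAGMA(omp parallel for)
#define PARALLEL_NESTED_LOOP(n) GRID_PRAGMA(omp parallel for collapse(n))
#else
#define PARALLEL_FOR_LOOP
#define PARALLEL_NESTED_LOOP(n)
#endif

Keeping the pragma behind a macro lets the same site loops compile cleanly without OpenMP and gives a single place to retarget the threading model later.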

TODO

@ -1,33 +1,35 @@
================================================================
*** Hacks and bug fixes to clean up and Audits
================================================================
* Base class to share common code between vRealF, VComplexF etc...
* Performance check on Guido's reimplementation strategy
* Bug in SeedFixedIntegers gives same output on each site. -- Think I fixed but NOT checked for sure
* FIXME audit
* const audit
* Replace vset with a call to merge.;
* care in Gmerge,Gextract over vset .
* extract / merge extra implementation removal
* Strong test for norm2, conj and all primitive types. -- tests/Grid_simd.cc is almost there
================================================================
*** New Functionality
================================================================
* Implement where to take template scheme.
* - BinaryWriter, TextWriter etc... * - BinaryWriter, TextWriter etc...
- use protocol buffers? replace xmlReader/Writer ec.. - use protocol buffers? replace xmlReader/Writer ec..
- Binary use htonll, htonl - Binary use htonll, htonl
*** Hacks and bug fixes to clean up
* Had to hack assignment to 1.0 in the tests/Grid_gamma test
* norm2l is a hack. figure out syntax error and make this norm2 c.f. tests/Grid_gamma.cc
* Reduce implemention is poor ; need threaded reductions; OMP isn't able to do it for generic objects.
* Bug in SeedFixedIntegers gives same output on each site.
* Bug in RNG with complex numbers ; only filling real values; need helper function -- DONE
* Conformable test in Cshift routines.
*** Functionality
* Implement where to take template scheme.
* Command line args for geometry, simd, etc. layout. Is it necessary to have
user pass these? Is this a QCD specific?
* Strong test for norm2, conj and all primitive types. -- Grid_simd test is almost there
* Expression template engine: * Expression template engine:
- Audit -- Audit
- Introduce base clase for Grid Tensors. -- Norm2(expression) problem: introduce norm2 unary op, or Introduce conversion automatic from expression to Lattice<vobj>
- Introduce norm2 unary op.
- Introduce conversion automatic from expression to Lattice<vobj>
* CovariantShift support -----Use a class to store gauge field? (parallel transport?) * CovariantShift support -----Use a class to store gauge field? (parallel transport?)
@ -48,12 +50,10 @@
- I have collated into single location at least. - I have collated into single location at least.
- Need to use _mm_*insert/extract routines. - Need to use _mm_*insert/extract routines.
* Flavour matrices? * Flavour matrices?
* Pauli, SU subgroup, etc.. * Pauli, SU subgroup, etc..
* su3 exponentiation & log etc.. [Jamie's code?] * su3 exponentiation & log etc.. [Jamie's code?]
* TaProj * TaProj
* FFTnD ? * FFTnD ?
* Parallel MPI2 IO * Parallel MPI2 IO
@ -62,20 +62,23 @@
* rb4d support for 5th dimension in Mobius. * rb4d support for 5th dimension in Mobius.
* Check for missing functionality - partially audited against QDP++ layout * Check for missing functionality - partially audited against QDP++ layout
// Base class to share common code between vRealF, VComplexF etc...
//
// Unary functions // Unary functions
// cos,sin, tan, acos, asin, cosh, acosh, tanh, sinh, // Scalar<vReal> only arg // cos,sin, tan, acos, asin, cosh, acosh, tanh, sinh, // Scalar<vReal> only arg
// exp, log, sqrt, fabs // exp, log, sqrt, fabs
//
// transposeColor, transposeSpin, // transposeColor, transposeSpin,
// adjColor, adjSpin, // adjColor, adjSpin,
//
// copyMask. // copyMask.
//
// localMaxAbs // localMaxAbs
//
// Fourier transform equivalent. // Fourier transform equivalent.
Actions
* Fermion
- Wilson
- Clover
- DomainWall
- Mobius
- z-Mobius
* Gauge
- Wilson, symanzik, iwasaki
Algorithms Algorithms
* LinearOperator * LinearOperator
@ -83,23 +86,21 @@ Algorithms
* Polynomial * Polynomial
* Eigen * Eigen
* Pcg * Pcg
* Adef2
* DeflCG
* fPcg * fPcg
* MCR * MCR
* HDCG * HDCG
* HMC, Heatbath
* etc.. * etc..
AUDITS: ======================================================================================================
FUNCTIONALITY: it pleases me to keep track of things I have done (keeps sane)
* FIXME audit
* const audit
* Replace vset with a call to merge.;
* care in Gmerge,Gextract over vset .
* extract / merge extra implementation removal
====================================================================================================== ======================================================================================================
FUNCTIONALITY: * Command line args for geometry, simd, etc. layout. Is it necessary to have -- DONE
user pass these? Is this a QCD specific?
* Stencil -- DONE * Stencil -- DONE
* Test infrastructure -- DONE * Test infrastructure -- DONE
* Fourspin, two spin project --- DONE * Fourspin, two spin project --- DONE
@ -115,6 +116,7 @@ FUNCTIONALITY:
* How to do U[mu] ... lorentz part of type structure or not. more like chroma if not. -- DONE * How to do U[mu] ... lorentz part of type structure or not. more like chroma if not. -- DONE
* Twospin/Fourspin/Gamma/Proj/Recon ----- DONE * Twospin/Fourspin/Gamma/Proj/Recon ----- DONE
* norm2l is a hack. figure out syntax error and make this norm2 c.f. tests/Grid_gamma.cc -- DONE
* subdirs lib, tests ?? ----- DONE * subdirs lib, tests ?? ----- DONE
- lib/math - lib/math
@ -149,3 +151,10 @@ FUNCTIONALITY:
* Controling std::cout ------- DONE * Controling std::cout ------- DONE
* Had to hack assignment to 1.0 in the tests/Grid_gamma test -- DONE
* Reduce implemention is poor ; need threaded reductions; OMP isn't able to do it for generic objects. -- DONE
* Bug in RNG with complex numbers ; only filling real values; need helper function -- DONE
* Conformable test in Cshift routines. -- none needed ; there is only one
* Conformable testing in expression templates -- DONE (recursive)


@ -53,7 +53,6 @@
#include <Grid_lattice.h> #include <Grid_lattice.h>
#include <Grid_comparison.h> #include <Grid_comparison.h>
#include <Grid_cshift.h> #include <Grid_cshift.h>
#include <Grid_where.h>
#include <Grid_stencil.h> #include <Grid_stencil.h>
#include <qcd/Grid_qcd.h> #include <qcd/Grid_qcd.h>
#include <parallelIO/GridNerscIO.h> #include <parallelIO/GridNerscIO.h>


@ -142,5 +142,6 @@ namespace Grid {
} }
#include <lattice/Grid_lattice_comparison.h> #include <lattice/Grid_lattice_comparison.h>
#include <lattice/Grid_lattice_where.h>
#endif #endif


@ -68,10 +68,14 @@ namespace Grid {
inline ComplexF adj(const ComplexF& r ){ return(conj(r)); } inline ComplexF adj(const ComplexF& r ){ return(conj(r)); }
//conj already supported for complex //conj already supported for complex
inline ComplexF timesI(const ComplexF r) { return(r*ComplexF(0.0,1.0));} inline ComplexF timesI(const ComplexF &r) { return(r*ComplexF(0.0,1.0));}
inline ComplexD timesI(const ComplexD r) { return(r*ComplexD(0.0,1.0));} inline ComplexD timesI(const ComplexD &r) { return(r*ComplexD(0.0,1.0));}
inline ComplexF timesMinusI(const ComplexF r){ return(r*ComplexF(0.0,-1.0));} inline ComplexF timesMinusI(const ComplexF &r){ return(r*ComplexF(0.0,-1.0));}
inline ComplexD timesMinusI(const ComplexD r){ return(r*ComplexD(0.0,-1.0));} inline ComplexD timesMinusI(const ComplexD &r){ return(r*ComplexD(0.0,-1.0));}
inline void timesI(ComplexF &ret,const ComplexF &r) { ret = timesI(r);}
inline void timesI(ComplexD &ret,const ComplexD &r) { ret = timesI(r);}
inline void timesMinusI(ComplexF &ret,const ComplexF &r){ ret = timesMinusI(r);}
inline void timesMinusI(ComplexD &ret,const ComplexD &r){ ret = timesMinusI(r);}
inline void mac (RealD * __restrict__ y,const RealD * __restrict__ a,const RealD *__restrict__ x){ *y = (*a) * (*x)+(*y);} inline void mac (RealD * __restrict__ y,const RealD * __restrict__ a,const RealD *__restrict__ x){ *y = (*a) * (*x)+(*y);}
inline void mult(RealD * __restrict__ y,const RealD * __restrict__ l,const RealD *__restrict__ r){ *y = (*l) * (*r);} inline void mult(RealD * __restrict__ y,const RealD * __restrict__ l,const RealD *__restrict__ r){ *y = (*l) * (*r);}


@ -35,8 +35,7 @@ include_HEADERS =\
Grid_lattice.h\ Grid_lattice.h\
Grid_math.h\ Grid_math.h\
Grid_simd.h\ Grid_simd.h\
Grid_stencil.h\ Grid_stencil.h
Grid_where.h
nobase_include_HEADERS=\ nobase_include_HEADERS=\
cartesian/Grid_cartesian_base.h\ cartesian/Grid_cartesian_base.h\
@ -56,6 +55,7 @@ nobase_include_HEADERS=\
lattice/Grid_lattice_trace.h\ lattice/Grid_lattice_trace.h\
lattice/Grid_lattice_transfer.h\ lattice/Grid_lattice_transfer.h\
lattice/Grid_lattice_transpose.h\ lattice/Grid_lattice_transpose.h\
lattice/Grid_lattice_where.h\
math/Grid_math_arith.h\ math/Grid_math_arith.h\
math/Grid_math_arith_add.h\ math/Grid_math_arith_add.h\
math/Grid_math_arith_mac.h\ math/Grid_math_arith_mac.h\


@ -28,7 +28,7 @@ Gather_plane_simple (const Lattice<vobj> &rhs,std::vector<cobj,alignedAllocator<
int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
int bo = 0; // offset in buffer int bo = 0; // offset in buffer
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
int o = n*rhs._grid->_slice_stride[dimension]; int o = n*rhs._grid->_slice_stride[dimension];
@ -57,7 +57,7 @@ Gather_plane_extract(const Lattice<vobj> &rhs,std::vector<typename cobj::scalar_
int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
int bo = 0; // offset in buffer int bo = 0; // offset in buffer
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
@ -106,7 +106,7 @@ template<class vobj> void Scatter_plane_simple (Lattice<vobj> &rhs,std::vector<v
int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
int bo = 0; // offset in buffer int bo = 0; // offset in buffer
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
int o=n*rhs._grid->_slice_stride[dimension]; int o=n*rhs._grid->_slice_stride[dimension];
@ -131,7 +131,7 @@ template<class vobj> void Scatter_plane_simple (Lattice<vobj> &rhs,std::vector<v
int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane int so = plane*rhs._grid->_ostride[dimension]; // base offset for start of plane
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
@ -160,7 +160,7 @@ template<class vobj> void Copy_plane(Lattice<vobj>& lhs,Lattice<vobj> &rhs, int
int ro = rplane*rhs._grid->_ostride[dimension]; // base offset for start of plane int ro = rplane*rhs._grid->_ostride[dimension]; // base offset for start of plane
int lo = lplane*lhs._grid->_ostride[dimension]; // base offset for start of plane int lo = lplane*lhs._grid->_ostride[dimension]; // base offset for start of plane
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
@ -185,7 +185,7 @@ template<class vobj> void Copy_plane_permute(Lattice<vobj>& lhs,Lattice<vobj> &r
int ro = rplane*rhs._grid->_ostride[dimension]; // base offset for start of plane int ro = rplane*rhs._grid->_ostride[dimension]; // base offset for start of plane
int lo = lplane*lhs._grid->_ostride[dimension]; // base offset for start of plane int lo = lplane*lhs._grid->_ostride[dimension]; // base offset for start of plane
#pragma omp parallel for collapse(2) PARALLEL_NESTED_LOOP(2)
for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){ for(int n=0;n<rhs._grid->_slice_nblock[dimension];n++){
for(int b=0;b<rhs._grid->_slice_block[dimension];b++){ for(int b=0;b<rhs._grid->_slice_block[dimension];b++){
int o =n*rhs._grid->_slice_stride[dimension]; int o =n*rhs._grid->_slice_stride[dimension];
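Only the pragma line changes in these gather/scatter/copy-plane hunks; the nested block/slice loops are untouched. A standalone toy of the call-site shape (the macro definition repeated here is the same assumed sketch as above):

#include <cstdio>
#include <vector>
#ifdef _OPENMP
#define GRID_PRAGMA(x)          _Pragma(#x)
#define PARALLEL_NESTED_LOOP(n) GRID_PRAGMA(omp parallel for collapse(n))
#else
#define PARALLEL_NESTED_LOOP(n)
#endif

int main(){
  const int nblock=64, block=32;
  std::vector<double> buf(nblock*block);
  PARALLEL_NESTED_LOOP(2)
  for(int n=0;n<nblock;n++){
    for(int b=0;b<block;b++){
      buf[n*block+b] = n*block+b;   // independent iterations, safe to collapse
    }
  }
  std::printf("%g\n",buf.back());
  return 0;
}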


@ -10,7 +10,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mult(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){ void mult(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mult(&tmp,&lhs._odata[ss],&rhs._odata[ss]); mult(&tmp,&lhs._odata[ss],&rhs._odata[ss]);
@ -21,7 +21,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mac(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){ void mac(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mac(&tmp,&lhs._odata[ss],&rhs._odata[ss]); mac(&tmp,&lhs._odata[ss],&rhs._odata[ss]);
@ -32,7 +32,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void sub(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){ void sub(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
sub(&tmp,&lhs._odata[ss],&rhs._odata[ss]); sub(&tmp,&lhs._odata[ss],&rhs._odata[ss]);
@ -42,7 +42,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void add(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){ void add(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const Lattice<obj3> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
add(&tmp,&lhs._odata[ss],&rhs._odata[ss]); add(&tmp,&lhs._odata[ss],&rhs._odata[ss]);
@ -56,7 +56,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mult(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){ void mult(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){
conformable(lhs,ret); conformable(lhs,ret);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mult(&tmp,&lhs._odata[ss],&rhs); mult(&tmp,&lhs._odata[ss],&rhs);
@ -67,7 +67,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mac(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){ void mac(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){
conformable(lhs,ret); conformable(lhs,ret);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mac(&tmp,&lhs._odata[ss],&rhs); mac(&tmp,&lhs._odata[ss],&rhs);
@ -78,7 +78,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void sub(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){ void sub(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){
conformable(lhs,ret); conformable(lhs,ret);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
sub(&tmp,&lhs._odata[ss],&rhs); sub(&tmp,&lhs._odata[ss],&rhs);
@ -88,7 +88,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void add(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){ void add(Lattice<obj1> &ret,const Lattice<obj2> &lhs,const obj3 &rhs){
conformable(lhs,ret); conformable(lhs,ret);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
add(&tmp,&lhs._odata[ss],&rhs); add(&tmp,&lhs._odata[ss],&rhs);
@ -102,7 +102,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mult(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){ void mult(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){
conformable(ret,rhs); conformable(ret,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites();ss++){ for(int ss=0;ss<rhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mult(&tmp,&lhs,&rhs._odata[ss]); mult(&tmp,&lhs,&rhs._odata[ss]);
@ -113,7 +113,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void mac(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){ void mac(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){
conformable(ret,rhs); conformable(ret,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites();ss++){ for(int ss=0;ss<rhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
mac(&tmp,&lhs,&rhs._odata[ss]); mac(&tmp,&lhs,&rhs._odata[ss]);
@ -124,7 +124,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void sub(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){ void sub(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){
conformable(ret,rhs); conformable(ret,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites();ss++){ for(int ss=0;ss<rhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
sub(&tmp,&lhs,&rhs._odata[ss]); sub(&tmp,&lhs,&rhs._odata[ss]);
@ -134,7 +134,7 @@ namespace Grid {
template<class obj1,class obj2,class obj3> template<class obj1,class obj2,class obj3>
void add(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){ void add(Lattice<obj1> &ret,const obj2 &lhs,const Lattice<obj3> &rhs){
conformable(ret,rhs); conformable(ret,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites();ss++){ for(int ss=0;ss<rhs._grid->oSites();ss++){
obj1 tmp; obj1 tmp;
add(&tmp,&lhs,&rhs._odata[ss]); add(&tmp,&lhs,&rhs._odata[ss]);
@ -145,7 +145,7 @@ namespace Grid {
template<class sobj,class vobj> template<class sobj,class vobj>
inline void axpy(Lattice<vobj> &ret,sobj a,const Lattice<vobj> &lhs,const Lattice<vobj> &rhs){ inline void axpy(Lattice<vobj> &ret,sobj a,const Lattice<vobj> &lhs,const Lattice<vobj> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
vobj tmp = a*lhs._odata[ss]; vobj tmp = a*lhs._odata[ss];
vstream(ret._odata[ss],tmp+rhs._odata[ss]); vstream(ret._odata[ss],tmp+rhs._odata[ss]);
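The per-site temporary followed by one final store is the common shape of every loop in this file. A minimal standalone axpy in that shape (vstream is presumably Grid's streaming-store helper; a plain assignment stands in for it here):

#include <cstdio>
#include <vector>
#ifdef _OPENMP
#define PARALLEL_FOR_LOOP _Pragma("omp parallel for")
#else
#define PARALLEL_FOR_LOOP
#endif

void axpy(std::vector<double> &ret,double a,
          const std::vector<double> &x,const std::vector<double> &y){
  PARALLEL_FOR_LOOP
  for(long ss=0;ss<(long)x.size();ss++){
    double tmp = a*x[ss];     // per-site temporary, private to each thread
    ret[ss]    = tmp + y[ss]; // Grid writes this with vstream(ret._odata[ss],tmp+rhs._odata[ss])
  }
}

int main(){
  std::vector<double> x(4,1.0),y(4,2.0),r(4);
  axpy(r,3.0,x,y);
  std::printf("%g\n",r[0]);   // 5
  return 0;
}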


@ -64,7 +64,7 @@ public:
//////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////
template <typename Op, typename T1> inline Lattice<vobj> & operator=(const LatticeUnaryExpression<Op,T1> &expr) template <typename Op, typename T1> inline Lattice<vobj> & operator=(const LatticeUnaryExpression<Op,T1> &expr)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
vobj tmp= eval(ss,expr); vobj tmp= eval(ss,expr);
vstream(_odata[ss] ,tmp); vstream(_odata[ss] ,tmp);
@ -73,7 +73,7 @@ public:
} }
template <typename Op, typename T1,typename T2> inline Lattice<vobj> & operator=(const LatticeBinaryExpression<Op,T1,T2> &expr) template <typename Op, typename T1,typename T2> inline Lattice<vobj> & operator=(const LatticeBinaryExpression<Op,T1,T2> &expr)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
vobj tmp= eval(ss,expr); vobj tmp= eval(ss,expr);
vstream(_odata[ss] ,tmp); vstream(_odata[ss] ,tmp);
@ -82,7 +82,7 @@ public:
} }
template <typename Op, typename T1,typename T2,typename T3> inline Lattice<vobj> & operator=(const LatticeTrinaryExpression<Op,T1,T2,T3> &expr) template <typename Op, typename T1,typename T2,typename T3> inline Lattice<vobj> & operator=(const LatticeTrinaryExpression<Op,T1,T2,T3> &expr)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
vobj tmp= eval(ss,expr); vobj tmp= eval(ss,expr);
vstream(_odata[ss] ,tmp); vstream(_odata[ss] ,tmp);
@ -95,7 +95,7 @@ public:
GridFromExpression(_grid,expr); GridFromExpression(_grid,expr);
assert(_grid!=nullptr); assert(_grid!=nullptr);
_odata.resize(_grid->oSites()); _odata.resize(_grid->oSites());
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
_odata[ss] = eval(ss,expr); _odata[ss] = eval(ss,expr);
} }
@ -105,7 +105,7 @@ public:
GridFromExpression(_grid,expr); GridFromExpression(_grid,expr);
assert(_grid!=nullptr); assert(_grid!=nullptr);
_odata.resize(_grid->oSites()); _odata.resize(_grid->oSites());
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
_odata[ss] = eval(ss,expr); _odata[ss] = eval(ss,expr);
} }
@ -115,7 +115,7 @@ public:
GridFromExpression(_grid,expr); GridFromExpression(_grid,expr);
assert(_grid!=nullptr); assert(_grid!=nullptr);
_odata.resize(_grid->oSites()); _odata.resize(_grid->oSites());
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
_odata[ss] = eval(ss,expr); _odata[ss] = eval(ss,expr);
} }
@ -133,7 +133,7 @@ public:
} }
template<class sobj> inline Lattice<vobj> & operator = (const sobj & r){ template<class sobj> inline Lattice<vobj> & operator = (const sobj & r){
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
this->_odata[ss]=r; this->_odata[ss]=r;
} }
@ -142,7 +142,7 @@ public:
template<class robj> inline Lattice<vobj> & operator = (const Lattice<robj> & r){ template<class robj> inline Lattice<vobj> & operator = (const Lattice<robj> & r){
conformable(*this,r); conformable(*this,r);
std::cout<<"Lattice operator ="<<std::endl; std::cout<<"Lattice operator ="<<std::endl;
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<_grid->oSites();ss++){ for(int ss=0;ss<_grid->oSites();ss++){
this->_odata[ss]=r._odata[ss]; this->_odata[ss]=r._odata[ss];
} }
@ -167,7 +167,7 @@ public:
inline friend Lattice<vobj> operator / (const Lattice<vobj> &lhs,const Lattice<vobj> &rhs){ inline friend Lattice<vobj> operator / (const Lattice<vobj> &lhs,const Lattice<vobj> &rhs){
conformable(lhs,rhs); conformable(lhs,rhs);
Lattice<vobj> ret(lhs._grid); Lattice<vobj> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = lhs._odata[ss]/rhs._odata[ss]; ret._odata[ss] = lhs._odata[ss]/rhs._odata[ss];
} }
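These operator= overloads are where an expression tree finally turns into a single threaded pass over the outer sites. A self-contained toy of the same pattern, with plain doubles in place of Grid tensors and Op::apply standing in for eval(ss,expr):

#include <cstdio>
#include <cstddef>
#include <vector>
#ifdef _OPENMP
#define PARALLEL_FOR_LOOP _Pragma("omp parallel for")
#else
#define PARALLEL_FOR_LOOP
#endif

template<class Op,class T1,class T2> struct BinExpr { const T1 &a; const T2 &b; };
struct AddOp { static double apply(double x,double y){ return x+y; } };

struct Field {
  std::vector<double> odata;
  explicit Field(std::size_t n,double v=0.0):odata(n,v){}
  template<class Op,class T1,class T2>
  Field & operator=(const BinExpr<Op,T1,T2> &expr){
    PARALLEL_FOR_LOOP
    for(long ss=0;ss<(long)odata.size();ss++){
      odata[ss] = Op::apply(expr.a.odata[ss],expr.b.odata[ss]); // the eval(ss,expr) step
    }
    return *this;
  }
};
inline BinExpr<AddOp,Field,Field> operator+(const Field &a,const Field &b){ return {a,b}; }

int main(){
  Field a(8,1.0),b(8,2.0),c(8);
  c = a + b;                      // no intermediate Field is materialised
  std::printf("%g\n",c.odata[0]); // 3
  return 0;
}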


@ -15,7 +15,7 @@ namespace Grid {
inline Lattice<vInteger> LLComparison(vfunctor op,const Lattice<lobj> &lhs,const Lattice<robj> &rhs) inline Lattice<vInteger> LLComparison(vfunctor op,const Lattice<lobj> &lhs,const Lattice<robj> &rhs)
{ {
Lattice<vInteger> ret(rhs._grid); Lattice<vInteger> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
ret._odata[ss]=op(lhs._odata[ss],rhs._odata[ss]); ret._odata[ss]=op(lhs._odata[ss],rhs._odata[ss]);
} }
@ -25,7 +25,7 @@ namespace Grid {
inline Lattice<vInteger> LSComparison(vfunctor op,const Lattice<lobj> &lhs,const robj &rhs) inline Lattice<vInteger> LSComparison(vfunctor op,const Lattice<lobj> &lhs,const robj &rhs)
{ {
Lattice<vInteger> ret(lhs._grid); Lattice<vInteger> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites(); ss++){ for(int ss=0;ss<lhs._grid->oSites(); ss++){
ret._odata[ss]=op(lhs._odata[ss],rhs); ret._odata[ss]=op(lhs._odata[ss],rhs);
} }
@ -35,7 +35,7 @@ namespace Grid {
inline Lattice<vInteger> SLComparison(vfunctor op,const lobj &lhs,const Lattice<robj> &rhs) inline Lattice<vInteger> SLComparison(vfunctor op,const lobj &lhs,const Lattice<robj> &rhs)
{ {
Lattice<vInteger> ret(rhs._grid); Lattice<vInteger> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
ret._odata[ss]=op(lhs._odata[ss],rhs); ret._odata[ss]=op(lhs._odata[ss],rhs);
} }


@ -16,7 +16,7 @@ namespace Grid {
inline auto localNorm2 (const Lattice<vobj> &rhs)-> Lattice<typename vobj::tensor_reduced> inline auto localNorm2 (const Lattice<vobj> &rhs)-> Lattice<typename vobj::tensor_reduced>
{ {
Lattice<typename vobj::tensor_reduced> ret(rhs._grid); Lattice<typename vobj::tensor_reduced> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
ret._odata[ss]=innerProduct(rhs._odata[ss],rhs._odata[ss]); ret._odata[ss]=innerProduct(rhs._odata[ss],rhs._odata[ss]);
} }
@ -29,7 +29,7 @@ namespace Grid {
-> Lattice<typename vobj::tensor_reduced> -> Lattice<typename vobj::tensor_reduced>
{ {
Lattice<typename vobj::tensor_reduced> ret(rhs._grid); Lattice<typename vobj::tensor_reduced> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
ret._odata[ss]=innerProduct(lhs._odata[ss],rhs._odata[ss]); ret._odata[ss]=innerProduct(lhs._odata[ss],rhs._odata[ss]);
} }
@ -42,7 +42,7 @@ namespace Grid {
inline auto outerProduct (const Lattice<ll> &lhs,const Lattice<rr> &rhs) -> Lattice<decltype(outerProduct(lhs._odata[0],rhs._odata[0]))> inline auto outerProduct (const Lattice<ll> &lhs,const Lattice<rr> &rhs) -> Lattice<decltype(outerProduct(lhs._odata[0],rhs._odata[0]))>
{ {
Lattice<decltype(outerProduct(lhs._odata[0],rhs._odata[0]))> ret(rhs._grid); Lattice<decltype(outerProduct(lhs._odata[0],rhs._odata[0]))> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
ret._odata[ss]=outerProduct(lhs._odata[ss],rhs._odata[ss]); ret._odata[ss]=outerProduct(lhs._odata[ss],rhs._odata[ss]);
} }


@ -10,7 +10,7 @@ namespace Grid {
inline Lattice<vobj> operator -(const Lattice<vobj> &r) inline Lattice<vobj> operator -(const Lattice<vobj> &r)
{ {
Lattice<vobj> ret(r._grid); Lattice<vobj> ret(r._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<r._grid->oSites();ss++){ for(int ss=0;ss<r._grid->oSites();ss++){
vstream(ret._odata[ss], -r._odata[ss]); vstream(ret._odata[ss], -r._odata[ss]);
} }
@ -47,7 +47,7 @@ namespace Grid {
inline auto operator * (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs*rhs._odata[0])> inline auto operator * (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs*rhs._odata[0])>
{ {
Lattice<decltype(lhs*rhs._odata[0])> ret(rhs._grid); Lattice<decltype(lhs*rhs._odata[0])> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
decltype(lhs*rhs._odata[0]) tmp=lhs*rhs._odata[ss]; decltype(lhs*rhs._odata[0]) tmp=lhs*rhs._odata[ss];
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);
@ -59,7 +59,7 @@ namespace Grid {
inline auto operator + (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs+rhs._odata[0])> inline auto operator + (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs+rhs._odata[0])>
{ {
Lattice<decltype(lhs+rhs._odata[0])> ret(rhs._grid); Lattice<decltype(lhs+rhs._odata[0])> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
decltype(lhs+rhs._odata[0]) tmp =lhs-rhs._odata[ss]; decltype(lhs+rhs._odata[0]) tmp =lhs-rhs._odata[ss];
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);
@ -71,7 +71,7 @@ namespace Grid {
inline auto operator - (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs-rhs._odata[0])> inline auto operator - (const left &lhs,const Lattice<right> &rhs) -> Lattice<decltype(lhs-rhs._odata[0])>
{ {
Lattice<decltype(lhs-rhs._odata[0])> ret(rhs._grid); Lattice<decltype(lhs-rhs._odata[0])> ret(rhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
decltype(lhs-rhs._odata[0]) tmp=lhs-rhs._odata[ss]; decltype(lhs-rhs._odata[0]) tmp=lhs-rhs._odata[ss];
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);
@ -83,7 +83,7 @@ namespace Grid {
inline auto operator * (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]*rhs)> inline auto operator * (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]*rhs)>
{ {
Lattice<decltype(lhs._odata[0]*rhs)> ret(lhs._grid); Lattice<decltype(lhs._odata[0]*rhs)> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites(); ss++){ for(int ss=0;ss<lhs._grid->oSites(); ss++){
decltype(lhs._odata[0]*rhs) tmp =lhs._odata[ss]*rhs; decltype(lhs._odata[0]*rhs) tmp =lhs._odata[ss]*rhs;
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);
@ -95,7 +95,7 @@ namespace Grid {
inline auto operator + (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]+rhs)> inline auto operator + (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]+rhs)>
{ {
Lattice<decltype(lhs._odata[0]+rhs)> ret(lhs._grid); Lattice<decltype(lhs._odata[0]+rhs)> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
decltype(lhs._odata[0]+rhs) tmp=lhs._odata[ss]+rhs; decltype(lhs._odata[0]+rhs) tmp=lhs._odata[ss]+rhs;
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);
@ -107,7 +107,7 @@ namespace Grid {
inline auto operator - (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]-rhs)> inline auto operator - (const Lattice<left> &lhs,const right &rhs) -> Lattice<decltype(lhs._odata[0]-rhs)>
{ {
Lattice<decltype(lhs._odata[0]-rhs)> ret(lhs._grid); Lattice<decltype(lhs._odata[0]-rhs)> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<rhs._grid->oSites(); ss++){ for(int ss=0;ss<rhs._grid->oSites(); ss++){
decltype(lhs._odata[0]-rhs) tmp=lhs._odata[ss]-rhs; decltype(lhs._odata[0]-rhs) tmp=lhs._odata[ss]-rhs;
vstream(ret._odata[ss],tmp); vstream(ret._odata[ss],tmp);


@ -15,7 +15,7 @@ namespace Grid {
-> Lattice<decltype(peekIndex<Index>(lhs._odata[0]))> -> Lattice<decltype(peekIndex<Index>(lhs._odata[0]))>
{ {
Lattice<decltype(peekIndex<Index>(lhs._odata[0]))> ret(lhs._grid); Lattice<decltype(peekIndex<Index>(lhs._odata[0]))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = peekIndex<Index>(lhs._odata[ss]); ret._odata[ss] = peekIndex<Index>(lhs._odata[ss]);
} }
@ -26,7 +26,7 @@ namespace Grid {
-> Lattice<decltype(peekIndex<Index>(lhs._odata[0],i))> -> Lattice<decltype(peekIndex<Index>(lhs._odata[0],i))>
{ {
Lattice<decltype(peekIndex<Index>(lhs._odata[0],i))> ret(lhs._grid); Lattice<decltype(peekIndex<Index>(lhs._odata[0],i))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = peekIndex<Index>(lhs._odata[ss],i); ret._odata[ss] = peekIndex<Index>(lhs._odata[ss],i);
} }
@ -37,7 +37,7 @@ namespace Grid {
-> Lattice<decltype(peekIndex<Index>(lhs._odata[0],i,j))> -> Lattice<decltype(peekIndex<Index>(lhs._odata[0],i,j))>
{ {
Lattice<decltype(peekIndex<Index>(lhs._odata[0],i,j))> ret(lhs._grid); Lattice<decltype(peekIndex<Index>(lhs._odata[0],i,j))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = peekIndex<Index>(lhs._odata[ss],i,j); ret._odata[ss] = peekIndex<Index>(lhs._odata[ss],i,j);
} }
@ -50,7 +50,7 @@ namespace Grid {
template<int Index,class vobj> inline template<int Index,class vobj> inline
void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0]))> & rhs) void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0]))> & rhs)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss]); pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss]);
} }
@ -58,7 +58,7 @@ namespace Grid {
template<int Index,class vobj> inline template<int Index,class vobj> inline
void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0],0))> & rhs,int i) void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0],0))> & rhs,int i)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss],i); pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss],i);
} }
@ -66,7 +66,7 @@ namespace Grid {
template<int Index,class vobj> inline template<int Index,class vobj> inline
void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0],0,0))> & rhs,int i,int j) void pokeIndex(Lattice<vobj> &lhs,const Lattice<decltype(peekIndex<Index>(lhs._odata[0],0,0))> & rhs,int i,int j)
{ {
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss],i,j); pokeIndex<Index>(lhs._odata[ss],rhs._odata[ss],i,j);
} }


@ -11,7 +11,7 @@ namespace Grid {
template<class vobj> inline Lattice<vobj> adj(const Lattice<vobj> &lhs){ template<class vobj> inline Lattice<vobj> adj(const Lattice<vobj> &lhs){
Lattice<vobj> ret(lhs._grid); Lattice<vobj> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = adj(lhs._odata[ss]); ret._odata[ss] = adj(lhs._odata[ss]);
} }
@ -20,7 +20,7 @@ namespace Grid {
template<class vobj> inline Lattice<vobj> conj(const Lattice<vobj> &lhs){ template<class vobj> inline Lattice<vobj> conj(const Lattice<vobj> &lhs){
Lattice<vobj> ret(lhs._grid); Lattice<vobj> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = conj(lhs._odata[ss]); ret._odata[ss] = conj(lhs._odata[ss]);
} }
@ -30,7 +30,7 @@ namespace Grid {
template<class vobj> inline auto real(const Lattice<vobj> &z) -> Lattice<decltype(real(z._odata[0]))> template<class vobj> inline auto real(const Lattice<vobj> &z) -> Lattice<decltype(real(z._odata[0]))>
{ {
Lattice<decltype(real(z._odata[0]))> ret(z._grid); Lattice<decltype(real(z._odata[0]))> ret(z._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<z._grid->oSites();ss++){ for(int ss=0;ss<z._grid->oSites();ss++){
ret._odata[ss] = real(z._odata[ss]); ret._odata[ss] = real(z._odata[ss]);
} }
@ -40,7 +40,7 @@ namespace Grid {
template<class vobj> inline auto imag(const Lattice<vobj> &z) -> Lattice<decltype(imag(z._odata[0]))> template<class vobj> inline auto imag(const Lattice<vobj> &z) -> Lattice<decltype(imag(z._odata[0]))>
{ {
Lattice<decltype(imag(z._odata[0]))> ret(z._grid); Lattice<decltype(imag(z._odata[0]))> ret(z._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<z._grid->oSites();ss++){ for(int ss=0;ss<z._grid->oSites();ss++){
ret._odata[ss] = imag(z._odata[ss]); ret._odata[ss] = imag(z._odata[ss]);
} }


@ -7,69 +7,85 @@ namespace Grid {
#endif #endif
//////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////////
// Reduction operations // Deterministic Reduction operations
//////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////////////////////////////////////////////////////////////////////
template<class vobj> template<class vobj> inline RealD norm2(const Lattice<vobj> &arg){
inline RealD norm2(const Lattice<vobj> &arg){ ComplexD nrm = innerProduct(arg,arg);
return real(nrm);
typedef typename vobj::scalar_type scalar; }
typedef typename vobj::vector_type vector;
decltype(innerProduct(arg._odata[0],arg._odata[0])) vnrm;
scalar nrm;
//FIXME make this loop parallelisable
vnrm=zero;
for(int ss=0;ss<arg._grid->oSites(); ss++){
vnrm = vnrm + innerProduct(arg._odata[ss],arg._odata[ss]);
}
vector vvnrm =TensorRemove(vnrm) ;
nrm = Reduce(vvnrm);
arg._grid->GlobalSum(nrm);
return real(nrm);
}
template<class vobj> template<class vobj>
inline ComplexD innerProduct(const Lattice<vobj> &left,const Lattice<vobj> &right) inline ComplexD innerProduct(const Lattice<vobj> &left,const Lattice<vobj> &right)
// inline auto innerProduct(const Lattice<vobj> &left,const Lattice<vobj> &right)
//->decltype(innerProduct(left._odata[0],right._odata[0]))
{ {
typedef typename vobj::scalar_type scalar; typedef typename vobj::scalar_type scalar_type;
decltype(innerProduct(left._odata[0],right._odata[0])) vnrm; typedef typename vobj::vector_type vector_type;
vector_type vnrm;
scalar_type nrm;
scalar nrm; GridBase *grid = left._grid;
//FIXME make this loop parallelisable
vnrm=zero; std::vector<vector_type,alignedAllocator<vector_type> > sumarray(grid->SumArraySize());
for(int ss=0;ss<left._grid->oSites(); ss++){ for(int i=0;i<grid->SumArraySize();i++){
vnrm = vnrm + innerProduct(left._odata[ss],right._odata[ss]); sumarray[i]=zero;
} }
nrm = Reduce(vnrm);
PARALLEL_FOR_LOOP
for(int thr=0;thr<grid->SumArraySize();thr++){
int nwork, mywork, myoff;
GridThread::GetWork(left._grid->oSites(),thr,mywork,myoff);
decltype(innerProduct(left._odata[0],right._odata[0])) vnrm=zero; // private to thread; sub summation
for(int ss=myoff;ss<mywork+myoff; ss++){
vnrm = vnrm + innerProduct(left._odata[ss],right._odata[ss]);
}
sumarray[thr]=TensorRemove(vnrm) ;
}
vector_type vvnrm; vvnrm=zero; // sum across threads
for(int i=0;i<grid->SumArraySize();i++){
vvnrm = vvnrm+sumarray[i];
}
nrm = Reduce(vvnrm);// sum across simd
right._grid->GlobalSum(nrm); right._grid->GlobalSum(nrm);
return nrm; return nrm;
} }
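The new innerProduct fixes the reduction order: each thread sums a statically assigned contiguous chunk of sites into its own slot of sumarray, the per-thread partials are combined serially, and only then is the SIMD vector reduced and globally summed. Because the chunk boundaries and the combination order depend only on the thread count, the result is bit-reproducible from run to run for a fixed thread count, which a plain OMP reduction would not guarantee (and, as the TODO notes, OMP cannot reduce generic tensor objects anyway). A standalone sketch of the pattern, with a GetWork-style splitter mirroring the call seen above (the exact arithmetic of Grid's GridThread::GetWork is an assumption here):

#include <cstdio>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#define PARALLEL_FOR_LOOP _Pragma("omp parallel for")
#else
#define PARALLEL_FOR_LOOP
inline int omp_get_max_threads(){ return 1; }
#endif

// Static split: thread 'thr' of 'nthread' gets the contiguous range [myoff, myoff+mywork).
inline void GetWork(int nwork,int thr,int nthread,int &mywork,int &myoff){
  int basework = nwork/nthread;
  int backfill = nthread-(nwork%nthread);
  mywork = (nwork+thr)/nthread;
  myoff  = basework*thr;
  if ( thr >= backfill ) myoff += thr-backfill;
}

double deterministic_sum(const std::vector<double> &data){
  int nthread = omp_get_max_threads();
  std::vector<double> sumarray(nthread,0.0);   // one partial per thread, like grid->SumArraySize()
  PARALLEL_FOR_LOOP
  for(int thr=0;thr<nthread;thr++){            // loop over threads, not over sites
    int mywork,myoff;
    GetWork((int)data.size(),thr,nthread,mywork,myoff);
    double partial=0.0;                        // private sub-summation
    for(int ss=myoff;ss<myoff+mywork;ss++) partial += data[ss];
    sumarray[thr]=partial;
  }
  double total=0.0;                            // combine partials in a fixed order
  for(int thr=0;thr<nthread;thr++) total += sumarray[thr];
  return total;
}

int main(){
  std::vector<double> v(1000);
  for(int i=0;i<1000;i++) v[i]=1e-3*i;
  std::printf("%.17g\n",deterministic_sum(v)); // identical on every run for a fixed thread count
  return 0;
}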
template<class vobj> template<class vobj>
inline typename vobj::scalar_object sum(const Lattice<vobj> &arg){ inline typename vobj::scalar_object sum(const Lattice<vobj> &arg){
GridBase *grid=arg._grid; GridBase *grid=arg._grid;
int Nsimd = grid->Nsimd(); int Nsimd = grid->Nsimd();
typedef typename vobj::scalar_object sobj; std::vector<vobj,alignedAllocator<vobj> > sumarray(grid->SumArraySize());
typedef typename vobj::scalar_type scalar_type; for(int i=0;i<grid->SumArraySize();i++){
sumarray[i]=zero;
vobj vsum;
sobj ssum;
vsum=zero;
ssum=zero;
//FIXME make this loop parallelisable
for(int ss=0;ss<arg._grid->oSites(); ss++){
vsum = vsum + arg._odata[ss];
} }
PARALLEL_FOR_LOOP
for(int thr=0;thr<grid->SumArraySize();thr++){
int nwork, mywork, myoff;
GridThread::GetWork(grid->oSites(),thr,mywork,myoff);
vobj vvsum=zero;
for(int ss=myoff;ss<mywork+myoff; ss++){
vvsum = vvsum + arg._odata[ss];
}
sumarray[thr]=vvsum;
}
vobj vsum=zero; // sum across threads
for(int i=0;i<grid->SumArraySize();i++){
vsum = vsum+sumarray[i];
}
typedef typename vobj::scalar_object sobj;
sobj ssum=zero;
std::vector<sobj> buf(Nsimd); std::vector<sobj> buf(Nsimd);
extract(vsum,buf); extract(vsum,buf);
for(int i=0;i<Nsimd;i++) ssum = ssum + buf[i]; for(int i=0;i<Nsimd;i++) ssum = ssum + buf[i];
arg._grid->GlobalSum(ssum); arg._grid->GlobalSum(ssum);
return ssum; return ssum;
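sum() follows the same two stages and then adds a third local step: the combined vector accumulator is split into its SIMD lanes with extract, the lanes are added in a fixed order, and the resulting scalar is summed across nodes with GlobalSum. A toy lane-reduction analogue of that step, with a four-lane "vector" standing in for vobj:

#include <array>
#include <cstdio>

struct vReal4 { std::array<double,4> lane; };   // stand-in for a SIMD vector type

inline double Reduce(const vReal4 &v){          // sum across lanes, always in lane order
  double s=0.0;
  for(int l=0;l<4;l++) s += v.lane[l];
  return s;
}

int main(){
  vReal4 acc{{1.0,2.0,3.0,4.0}};
  std::printf("%g\n",Reduce(acc));              // 10
  return 0;
}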
@ -77,13 +93,15 @@ namespace Grid {
template<class vobj> inline void sliceSum(const Lattice<vobj> &Data,std::vector<typename vobj::scalar_object> &result,int orthogdim) template<class vobj> inline void sliceSum(const Lattice<vobj> &Data,std::vector<typename vobj::scalar_object> &result,int orthogdim)
{ {
typedef typename vobj::scalar_object sobj; typedef typename vobj::scalar_object sobj;
GridBase *grid = Data._grid; GridBase *grid = Data._grid;
assert(grid!=NULL); assert(grid!=NULL);
// FIXME
std::cout<<"WARNING ! SliceSum is unthreaded "<<grid->SumArraySize()<<" threads "<<std::endl;
const int Nd = grid->_ndimension; const int Nd = grid->_ndimension;
const int Nsimd = grid->Nsimd(); const int Nsimd = grid->Nsimd();
@ -94,10 +112,8 @@ template<class vobj> inline void sliceSum(const Lattice<vobj> &Data,std::vector<
int ld=grid->_ldimensions[orthogdim]; int ld=grid->_ldimensions[orthogdim];
int rd=grid->_rdimensions[orthogdim]; int rd=grid->_rdimensions[orthogdim];
sobj szero; szero=zero;
std::vector<vobj,alignedAllocator<vobj> > lvSum(rd); // will locally sum vectors first std::vector<vobj,alignedAllocator<vobj> > lvSum(rd); // will locally sum vectors first
std::vector<sobj> lsSum(ld,szero); // sum across these down to scalars std::vector<sobj> lsSum(ld,zero); // sum across these down to scalars
std::vector<sobj> extracted(Nsimd); // splitting the SIMD std::vector<sobj> extracted(Nsimd); // splitting the SIMD
result.resize(fd); // And then global sum to return the same vector to every node for IO to file result.resize(fd); // And then global sum to return the same vector to every node for IO to file
@ -108,6 +124,7 @@ template<class vobj> inline void sliceSum(const Lattice<vobj> &Data,std::vector<
std::vector<int> coor(Nd); std::vector<int> coor(Nd);
// sum over reduced dimension planes, breaking out orthog dir // sum over reduced dimension planes, breaking out orthog dir
for(int ss=0;ss<grid->oSites();ss++){ for(int ss=0;ss<grid->oSites();ss++){
GridBase::CoorFromIndex(coor,ss,grid->_rdimensions); GridBase::CoorFromIndex(coor,ss,grid->_rdimensions);
int r = coor[orthogdim]; int r = coor[orthogdim];


@ -15,7 +15,7 @@ namespace Grid {
-> Lattice<decltype(trace(lhs._odata[0]))> -> Lattice<decltype(trace(lhs._odata[0]))>
{ {
Lattice<decltype(trace(lhs._odata[0]))> ret(lhs._grid); Lattice<decltype(trace(lhs._odata[0]))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = trace(lhs._odata[ss]); ret._odata[ss] = trace(lhs._odata[ss]);
} }
@ -30,7 +30,7 @@ namespace Grid {
-> Lattice<decltype(traceIndex<Index>(lhs._odata[0]))> -> Lattice<decltype(traceIndex<Index>(lhs._odata[0]))>
{ {
Lattice<decltype(traceIndex<Index>(lhs._odata[0]))> ret(lhs._grid); Lattice<decltype(traceIndex<Index>(lhs._odata[0]))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = traceIndex<Index>(lhs._odata[ss]); ret._odata[ss] = traceIndex<Index>(lhs._odata[ss]);
} }


@ -23,7 +23,7 @@ inline void subdivides(GridBase *coarse,GridBase *fine)
template<class vobj> inline void pickCheckerboard(int cb,Lattice<vobj> &half,const Lattice<vobj> &full){ template<class vobj> inline void pickCheckerboard(int cb,Lattice<vobj> &half,const Lattice<vobj> &full){
half.checkerboard = cb; half.checkerboard = cb;
int ssh=0; int ssh=0;
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<full._grid->oSites();ss++){ for(int ss=0;ss<full._grid->oSites();ss++){
std::vector<int> coor; std::vector<int> coor;
int cbos; int cbos;
@ -41,7 +41,7 @@ inline void subdivides(GridBase *coarse,GridBase *fine)
template<class vobj> inline void setCheckerboard(Lattice<vobj> &full,const Lattice<vobj> &half){ template<class vobj> inline void setCheckerboard(Lattice<vobj> &full,const Lattice<vobj> &half){
int cb = half.checkerboard; int cb = half.checkerboard;
int ssh=0; int ssh=0;
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<full._grid->oSites();ss++){ for(int ss=0;ss<full._grid->oSites();ss++){
std::vector<int> coor; std::vector<int> coor;
int cbos; int cbos;


@ -13,7 +13,7 @@ namespace Grid {
template<class vobj> template<class vobj>
inline Lattice<vobj> transpose(const Lattice<vobj> &lhs){ inline Lattice<vobj> transpose(const Lattice<vobj> &lhs){
Lattice<vobj> ret(lhs._grid); Lattice<vobj> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = transpose(lhs._odata[ss]); ret._odata[ss] = transpose(lhs._odata[ss]);
} }
@ -28,7 +28,7 @@ namespace Grid {
-> Lattice<decltype(transposeIndex<Index>(lhs._odata[0]))> -> Lattice<decltype(transposeIndex<Index>(lhs._odata[0]))>
{ {
Lattice<decltype(transposeIndex<Index>(lhs._odata[0]))> ret(lhs._grid); Lattice<decltype(transposeIndex<Index>(lhs._odata[0]))> ret(lhs._grid);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<lhs._grid->oSites();ss++){ for(int ss=0;ss<lhs._grid->oSites();ss++){
ret._odata[ss] = transposeIndex<Index>(lhs._odata[ss]); ret._odata[ss] = transposeIndex<Index>(lhs._odata[ss]);
} }


@ -1,5 +1,5 @@
#ifndef GRID_WHERE_H #ifndef GRID_LATTICE_WHERE_H
#define GRID_WHERE_H #define GRID_LATTICE_WHERE_H
namespace Grid { namespace Grid {
// Must implement the predicate gating the // Must implement the predicate gating the
// Must be able to reduce the predicate down to a single vInteger per site. // Must be able to reduce the predicate down to a single vInteger per site.
@ -27,7 +27,7 @@ inline void where(Lattice<vobj> &ret,const Lattice<iobj> &predicate,Lattice<vobj
std::vector<scalar_object> truevals (Nsimd); std::vector<scalar_object> truevals (Nsimd);
std::vector<scalar_object> falsevals(Nsimd); std::vector<scalar_object> falsevals(Nsimd);
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int ss=0;ss<iftrue._grid->oSites(); ss++){ for(int ss=0;ss<iftrue._grid->oSites(); ss++){
extract(iftrue._odata[ss] ,truevals); extract(iftrue._odata[ss] ,truevals);


@ -6,23 +6,20 @@ namespace Grid {
/////////////////////////////////////////// CONJ /////////////////////////////////////////// /////////////////////////////////////////// CONJ ///////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////
#ifdef GRID_WARN_SUBOPTIMAL
#warning "Optimisation alert switch over to two argument form to avoid copy back in perf critical timesI "
#endif
/////////////////////////////////////////////// ///////////////////////////////////////////////
// multiply by I; make recursive. // multiply by I; make recursive.
/////////////////////////////////////////////// ///////////////////////////////////////////////
template<class vtype> inline iScalar<vtype> timesI(const iScalar<vtype>&r) template<class vtype> inline iScalar<vtype> timesI(const iScalar<vtype>&r)
{ {
iScalar<vtype> ret; iScalar<vtype> ret;
ret._internal = timesI(r._internal); timesI(ret._internal,r._internal);
return ret; return ret;
} }
template<class vtype,int N> inline iVector<vtype,N> timesI(const iVector<vtype,N>&r) template<class vtype,int N> inline iVector<vtype,N> timesI(const iVector<vtype,N>&r)
{ {
iVector<vtype,N> ret; iVector<vtype,N> ret;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
ret._internal[i] = timesI(r._internal[i]); timesI(ret._internal[i],r._internal[i]);
} }
return ret; return ret;
} }
@ -31,22 +28,41 @@ template<class vtype,int N> inline iMatrix<vtype,N> timesI(const iMatrix<vtype,N
iMatrix<vtype,N> ret; iMatrix<vtype,N> ret;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
ret._internal[i][j] = timesI(r._internal[i][j]); timesI(ret._internal[i][j],r._internal[i][j]);
}} }}
return ret; return ret;
} }
template<class vtype> inline void timesI(iScalar<vtype> &ret,const iScalar<vtype>&r)
{
timesI(ret._internal,r._internal);
}
template<class vtype,int N> inline void timesI(iVector<vtype,N> &ret,const iVector<vtype,N>&r)
{
for(int i=0;i<N;i++){
timesI(ret._internal[i],r._internal[i]);
}
}
template<class vtype,int N> inline void timesI(iMatrix<vtype,N> &ret,const iMatrix<vtype,N>&r)
{
for(int i=0;i<N;i++){
for(int j=0;j<N;j++){
timesI(ret._internal[i][j],r._internal[i][j]);
}}
}
template<class vtype> inline iScalar<vtype> timesMinusI(const iScalar<vtype>&r) template<class vtype> inline iScalar<vtype> timesMinusI(const iScalar<vtype>&r)
{ {
iScalar<vtype> ret; iScalar<vtype> ret;
ret._internal = timesMinusI(r._internal); timesMinusI(ret._internal,r._internal);
return ret; return ret;
} }
template<class vtype,int N> inline iVector<vtype,N> timesMinusI(const iVector<vtype,N>&r) template<class vtype,int N> inline iVector<vtype,N> timesMinusI(const iVector<vtype,N>&r)
{ {
iVector<vtype,N> ret; iVector<vtype,N> ret;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
ret._internal[i] = timesMinusI(r._internal[i]); timesMinusI(ret._internal[i],r._internal[i]);
} }
return ret; return ret;
} }
@ -55,10 +71,30 @@ template<class vtype,int N> inline iMatrix<vtype,N> timesMinusI(const iMatrix<vt
iMatrix<vtype,N> ret; iMatrix<vtype,N> ret;
for(int i=0;i<N;i++){ for(int i=0;i<N;i++){
for(int j=0;j<N;j++){ for(int j=0;j<N;j++){
ret._internal[i][j] = timesMinusI(r._internal[i][j]); timesMinusI(ret._internal[i][j],r._internal[i][j]);
}} }}
return ret; return ret;
} }
template<class vtype> inline void timesMinusI(iScalar<vtype> &ret,const iScalar<vtype>&r)
{
timesMinusI(ret._internal,r._internal);
}
template<class vtype,int N> inline void timesMinusI(iVector<vtype,N> &ret,const iVector<vtype,N>&r)
{
for(int i=0;i<N;i++){
timesMinusI(ret._internal[i],r._internal[i]);
}
}
template<class vtype,int N> inline void timesMinusI(iMatrix<vtype,N> &ret,const iMatrix<vtype,N>&r)
{
for(int i=0;i<N;i++){
for(int j=0;j<N;j++){
timesMinusI(ret._internal[i][j],r._internal[i][j]);
}}
}
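The point of the added two-argument overloads is that the recursion writes straight into the caller's storage: each level fills ret's members in place instead of building a temporary and copying it back, which is exactly what the GRID_WARN_SUBOPTIMAL warning removed above was complaining about. A toy illustration with plain std::complex standing in for the Grid tensor leaves:

#include <complex>
#include <cstdio>

typedef std::complex<double> ComplexD;
inline void timesI(ComplexD &ret,const ComplexD &r){ ret = r*ComplexD(0.0,1.0); }

template<class vtype,int N> struct iVector { vtype _internal[N]; };

template<class vtype,int N>
inline void timesI(iVector<vtype,N> &ret,const iVector<vtype,N> &r){
  for(int i=0;i<N;i++) timesI(ret._internal[i],r._internal[i]);   // recurse, no copy-back
}
template<class vtype,int N>
inline iVector<vtype,N> timesI(const iVector<vtype,N> &r){        // one-argument wrapper kept for callers
  iVector<vtype,N> ret;
  timesI(ret,r);
  return ret;
}

int main(){
  iVector<ComplexD,2> v; v._internal[0]=ComplexD(1,0); v._internal[1]=ComplexD(0,1);
  iVector<ComplexD,2> w; timesI(w,v);
  std::printf("(%g,%g) (%g,%g)\n",
              w._internal[0].real(),w._internal[0].imag(),
              w._internal[1].real(),w._internal[1].imag());        // (0,1) (-1,0)
  return 0;
}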
/////////////////////////////////////////////// ///////////////////////////////////////////////
// Conj function for scalar, vector, matrix // Conj function for scalar, vector, matrix
/////////////////////////////////////////////// ///////////////////////////////////////////////


@ -1,10 +1,8 @@
#ifndef GRID_QCD_TWOSPIN_H #ifndef GRID_QCD_TWOSPIN_H
#define GRID_QCD_TWOSPIN_H #define GRID_QCD_TWOSPIN_H
namespace Grid{ namespace Grid{
namespace QCD { namespace QCD {
////////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////////////////////////////
// Normalisation alert; the g5 project is 1/2(1+-G5) // Normalisation alert; the g5 project is 1/2(1+-G5)
// the xyzt projects are (1+-Gxyzt) // the xyzt projects are (1+-Gxyzt)
@ -1073,10 +1071,6 @@ namespace QCD {
spProj5m(hspin._internal[i][j],fspin._internal[i][j]); spProj5m(hspin._internal[i][j],fspin._internal[i][j]);
}} }}
} }
} //namespace QCD } //namespace QCD
} // Grid } // Grid
#endif #endif


@ -103,7 +103,7 @@ void WilsonMatrix::Dhop(const LatticeFermion &in, LatticeFermion &out)
vHalfSpinColourVector *chi_p; vHalfSpinColourVector *chi_p;
int offset,local,perm, ptype; int offset,local,perm, ptype;
#pragma omp parallel for PARALLEL_FOR_LOOP
for(int sss=0;sss<grid->oSites();sss++){ for(int sss=0;sss<grid->oSites();sss++){
int ss = sss; int ss = sss;


@ -13,6 +13,9 @@ namespace Grid {
vzero(*this); vzero(*this);
return (*this); return (*this);
} }
vComplexD( Zero & z){
vzero(*this);
}
vComplexD()=default; vComplexD()=default;
vComplexD(ComplexD a){ vComplexD(ComplexD a){
vsplat(*this,a); vsplat(*this,a);
@ -286,8 +289,8 @@ friend inline void vstore(const vComplexD &ret, ComplexD *a){
return ret; return ret;
} }
friend inline vComplexD timesMinusI(const vComplexD &in){ friend inline void timesMinusI(vComplexD &ret,const vComplexD &in){
vComplexD ret; vzero(ret); vzero(ret);
vComplexD tmp; vComplexD tmp;
#if defined (AVX1)|| defined (AVX2) #if defined (AVX1)|| defined (AVX2)
tmp.v =_mm256_addsub_pd(ret.v,in.v); // r,-i tmp.v =_mm256_addsub_pd(ret.v,in.v); // r,-i
@ -304,11 +307,10 @@ friend inline void vstore(const vComplexD &ret, ComplexD *a){
#ifdef QPX #ifdef QPX
assert(0); assert(0);
#endif #endif
return ret;
} }
friend inline vComplexD timesI(const vComplexD &in){ friend inline void timesI(vComplexD &ret, const vComplexD &in){
vComplexD ret; vzero(ret); vzero(ret);
vComplexD tmp; vComplexD tmp;
#if defined (AVX1)|| defined (AVX2) #if defined (AVX1)|| defined (AVX2)
tmp.v =_mm256_shuffle_pd(in.v,in.v,0x5); tmp.v =_mm256_shuffle_pd(in.v,in.v,0x5);
@ -325,9 +327,21 @@ friend inline void vstore(const vComplexD &ret, ComplexD *a){
#ifdef QPX #ifdef QPX
assert(0); assert(0);
#endif #endif
}
friend inline vComplexD timesMinusI(const vComplexD &in){
vComplexD ret;
timesMinusI(ret,in);
return ret; return ret;
} }
friend inline vComplexD timesI(const vComplexD &in){
vComplexD ret;
timesI(ret,in);
return ret;
}
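For reference, the AVX idiom the vComplexD routines above rely on: with each complex stored as [re, im], a shuffle swaps the two doubles inside every 128-bit half, and addsub against zero negates the even (real) lanes; combined, they give multiplication by +/- i without any real multiplies. A standalone check (assumes an AVX-capable compiler and CPU, e.g. build with -mavx):

#include <immintrin.h>
#include <cstdio>

int main(){
  // two packed complex doubles, memory order [re0, im0, re1, im1] = (1+2i, 3+4i)
  __m256d in   = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);
  __m256d zero = _mm256_setzero_pd();

  // i*z : swap re/im inside each complex, then negate the even (real) lanes
  __m256d swapped = _mm256_shuffle_pd(in, in, 0x5);               // [im0, re0, im1, re1]
  __m256d times_i = _mm256_addsub_pd(zero, swapped);              // [-im0, re0, -im1, re1]

  // -i*z : negate the even lanes first, then swap
  __m256d neg_re        = _mm256_addsub_pd(zero, in);             // [-re0, im0, -re1, im1]
  __m256d times_minus_i = _mm256_shuffle_pd(neg_re, neg_re, 0x5); // [im0, -re0, im1, -re1]

  double a[4], b[4];
  _mm256_storeu_pd(a, times_i);
  _mm256_storeu_pd(b, times_minus_i);
  std::printf(" i*(1+2i) = %g%+gi    -i*(1+2i) = %g%+gi\n", a[0], a[1], b[0], b[1]);
  return 0;
}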
// REDUCE FIXME must be a cleaner implementation // REDUCE FIXME must be a cleaner implementation
friend inline ComplexD Reduce(const vComplexD & in) friend inline ComplexD Reduce(const vComplexD & in)
{ {


@ -28,6 +28,9 @@ namespace Grid {
vzero(*this); vzero(*this);
return (*this); return (*this);
} }
vComplexF( Zero & z){
vzero(*this);
}
vComplexF()=default; vComplexF()=default;
vComplexF(ComplexF a){ vComplexF(ComplexF a){
vsplat(*this,a); vsplat(*this,a);
@ -363,8 +366,7 @@ namespace Grid {
#endif #endif
return ret; return ret;
} }
friend inline vComplexF timesMinusI(const vComplexF &in){ friend inline void timesMinusI( vComplexF &ret,const vComplexF &in){
vComplexF ret;
vzero(ret); vzero(ret);
#if defined (AVX1)|| defined (AVX2) #if defined (AVX1)|| defined (AVX2)
cvec tmp =_mm256_addsub_ps(ret.v,in.v); // r,-i cvec tmp =_mm256_addsub_ps(ret.v,in.v); // r,-i
@ -381,10 +383,9 @@ namespace Grid {
#ifdef QPX #ifdef QPX
assert(0); assert(0);
#endif #endif
return ret;
} }
friend inline vComplexF timesI(const vComplexF &in){ friend inline void timesI(vComplexF &ret,const vComplexF &in){
vComplexF ret; vzero(ret); vzero(ret);
#if defined (AVX1)|| defined (AVX2) #if defined (AVX1)|| defined (AVX2)
cvec tmp =_mm256_shuffle_ps(in.v,in.v,_MM_SHUFFLE(2,3,0,1));//i,r cvec tmp =_mm256_shuffle_ps(in.v,in.v,_MM_SHUFFLE(2,3,0,1));//i,r
ret.v =_mm256_addsub_ps(ret.v,tmp); //i,-r ret.v =_mm256_addsub_ps(ret.v,tmp); //i,-r
@ -400,8 +401,18 @@ namespace Grid {
#ifdef QPX #ifdef QPX
assert(0); assert(0);
#endif #endif
}
friend inline vComplexF timesMinusI(const vComplexF &in){
vComplexF ret;
timesMinusI(ret,in);
return ret; return ret;
} }
friend inline vComplexF timesI(const vComplexF &in){
vComplexF ret;
timesI(ret,in);
return ret;
}
// Unary negation // Unary negation
friend inline vComplexF operator -(const vComplexF &r) { friend inline vComplexF operator -(const vComplexF &r) {


@ -17,6 +17,10 @@ namespace Grid {
vRealD(Zero &zero){ vRealD(Zero &zero){
zeroit(*this); zeroit(*this);
} }
vRealD & operator = ( Zero & z){
vzero(*this);
return (*this);
}
friend inline void mult(vRealD * __restrict__ y,const vRealD * __restrict__ l,const vRealD *__restrict__ r) {*y = (*l) * (*r);} friend inline void mult(vRealD * __restrict__ y,const vRealD * __restrict__ l,const vRealD *__restrict__ r) {*y = (*l) * (*r);}
friend inline void sub (vRealD * __restrict__ y,const vRealD * __restrict__ l,const vRealD *__restrict__ r) {*y = (*l) - (*r);} friend inline void sub (vRealD * __restrict__ y,const vRealD * __restrict__ l,const vRealD *__restrict__ r) {*y = (*l) - (*r);}


@ -18,6 +18,10 @@ namespace Grid {
vRealF(Zero &zero){ vRealF(Zero &zero){
zeroit(*this); zeroit(*this);
} }
vRealF & operator = ( Zero & z){
vzero(*this);
return (*this);
}
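The two small hunks above give vRealD and vRealF the same "assign from zero" convenience the complex types already had. A toy version of the idiom, with local stand-ins for Grid's Zero type and its global zero instance:

#include <cstdio>

class Zero {};
static Zero zero;                      // stand-in for Grid's global zero

struct vRealF {
  float v[8];
  friend void vzero(vRealF &x){ for(int i=0;i<8;i++) x.v[i]=0.0f; }
  vRealF & operator=(const Zero &){ vzero(*this); return *this; }
};

int main(){
  vRealF r;
  r = zero;                            // reads like the lattice code: no loop at the call site
  std::printf("%g\n",(double)r.v[0]);  // 0
  return 0;
}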
//////////////////////////////////// ////////////////////////////////////
// Arithmetic operator overloads +,-,* // Arithmetic operator overloads +,-,*
//////////////////////////////////// ////////////////////////////////////


@ -50,9 +50,8 @@ int main (int argc, char ** argv)
// std::cout << " Is triv Scalar<double> " <<std::has_trivial_default_constructor<iScalar<double> >::value << std::endl; // std::cout << " Is triv Scalar<double> " <<std::has_trivial_default_constructor<iScalar<double> >::value << std::endl;
// std::cout << " Is triv Scalar<vComplexD> "<<std::has_trivial_default_constructor<iScalar<vComplexD> >::value << std::endl; // std::cout << " Is triv Scalar<vComplexD> "<<std::has_trivial_default_constructor<iScalar<vComplexD> >::value << std::endl;
std::complex<float> c(1.0);
for(int a=0;a<Ns;a++){ for(int a=0;a<Ns;a++){
ident()(a,a) = c; ident()(a,a) = ComplexF(1.0);
} }
const Gamma::GammaMatrix *g = Gamma::GammaMatrices; const Gamma::GammaMatrix *g = Gamma::GammaMatrices;