tu-c0r2n72 - 0 device=0 binding=--interleave=0,1
tu-c0r2n75 - 0 device=0 binding=--interleave=0,1
tu-c0r2n93 - 0 device=0 binding=--interleave=0,1
tu-c0r2n87 - 0 device=0 binding=--interleave=0,1
tu-c0r2n90 - 0 device=0 binding=--interleave=0,1
tu-c0r2n72 - 2 device=2 binding=--interleave=4,5
tu-c0r2n87 - 1 device=1 binding=--interleave=2,3
tu-c0r2n75 - 1 device=1 binding=--interleave=2,3
tu-c0r2n72 - 1 device=1 binding=--interleave=2,3
tu-c0r2n90 - 1 device=1 binding=--interleave=2,3
tu-c0r2n78 - 0 device=0 binding=--interleave=0,1
tu-c0r2n75 - 2 device=2 binding=--interleave=4,5
tu-c0r2n78 - 1 device=1 binding=--interleave=2,3
tu-c0r2n72 - 3 device=3 binding=--interleave=6,7
tu-c0r2n93 - 2 device=2 binding=--interleave=4,5
tu-c0r2n87 - 3 device=3 binding=--interleave=6,7
tu-c0r2n87 - 2 device=2 binding=--interleave=4,5
tu-c0r2n78 - 2 device=2 binding=--interleave=4,5
tu-c0r2n90 - 2 device=2 binding=--interleave=4,5
tu-c0r2n93 - 3 device=3 binding=--interleave=6,7
tu-c0r2n93 - 1 device=1 binding=--interleave=2,3
tu-c0r2n90 - 3 device=3 binding=--interleave=6,7
tu-c0r2n75 - 3 device=3 binding=--interleave=6,7
tu-c0r2n78 - 3 device=3 binding=--interleave=6,7
tu-c0r2n84 - 0 device=0 binding=--interleave=0,1
tu-c0r2n81 - 0 device=0 binding=--interleave=0,1
tu-c0r2n81 - 2 device=2 binding=--interleave=4,5
tu-c0r2n84 - 1 device=1 binding=--interleave=2,3
tu-c0r2n81 - 1 device=1 binding=--interleave=2,3
tu-c0r2n81 - 3 device=3 binding=--interleave=6,7
tu-c0r2n84 - 3 device=3 binding=--interleave=6,7
tu-c0r2n84 - 2 device=2 binding=--interleave=4,5
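The binding lines above show the pattern used on every node: local rank r is given GPU device r and its memory is interleaved over NUMA nodes 2r and 2r+1. A minimal sketch of that mapping is below; it is illustrative only, assumes OpenMPI's OMPI_COMM_WORLD_LOCAL_RANK is available (the AcceleratorCudaInit messages further down indicate a site-specific wrapper script performs the real binding), and the helper name is hypothetical.

# bind_sketch.py - reproduce the "device=... binding=--interleave=..." lines above.
# Hypothetical helper; assumes 4 GPUs per node and 8 NUMA domains (2 per GPU).
import os
import socket

def binding_for(local_rank: int):
    device = local_rank                                      # one GPU per local rank
    numa = f"--interleave={2 * local_rank},{2 * local_rank + 1}"  # two NUMA nodes per GPU
    return device, numa

if __name__ == "__main__":
    lrank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", "0"))
    device, numa = binding_for(lrank)
    print(f"{socket.gethostname()} - {lrank} device={device} binding={numa}")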
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device Number    : 0
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device identifier: NVIDIA A100-SXM4-40GB
AcceleratorCudaInit[0]:   totalGlobalMem: 42505273344 
AcceleratorCudaInit[0]:   managedMemory: 1 
AcceleratorCudaInit[0]:   isMultiGpuBoard: 0 
AcceleratorCudaInit[0]:   warpSize: 32 
AcceleratorCudaInit[0]:   pciBusID: 3 
AcceleratorCudaInit[0]:   pciDeviceID: 0 
AcceleratorCudaInit[0]: maxGridSize (2147483647,65535,65535)
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device Number    : 0
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device identifier: NVIDIA A100-SXM4-40GB
AcceleratorCudaInit[0]:   totalGlobalMem: 42505273344 
AcceleratorCudaInit[0]:   managedMemory: 1 
AcceleratorCudaInit[0]:   isMultiGpuBoard: 0 
AcceleratorCudaInit[0]:   warpSize: 32 
AcceleratorCudaInit[0]:   pciBusID: 3 
AcceleratorCudaInit[0]:   pciDeviceID: 0 
AcceleratorCudaInit[0]: maxGridSize (2147483647,65535,65535)
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
OPENMPI detected
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
local rank 1 device 0 bus id: 0000:44:00.0
AcceleratorCudaInit: ================================================
local rank 0 device 0 bus id: 0000:03:00.0
AcceleratorCudaInit: ================================================
local rank 0 device 0 bus id: 0000:03:00.0
AcceleratorCudaInit: ================================================
AcceleratorCudaInit: ================================================
AcceleratorCudaInit: ================================================
AcceleratorCudaInit: ================================================
AcceleratorCudaInit: ================================================
AcceleratorCudaInit: ================================================
local rank 2 device 0 bus id: 0000:84:00.0
local rank 3 device 0 bus id: 0000:C4:00.0
SharedMemoryMpi:  World communicator of size 32
SharedMemoryMpi:  Node  communicator of size 4
SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 2147483648 bytes at 0x14e4a0000000 for comms buffers 
Setting up IPC

__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|_ |  |  |  |  |  |  |  |  |  |  |  | _|__
__|_                                    _|__
__|_   GGGG    RRRR    III    DDDD      _|__
__|_  G        R   R    I     D   D     _|__
__|_  G        R   R    I     D    D    _|__
__|_  G  GG    RRRR     I     D    D    _|__
__|_  G   G    R  R     I     D   D     _|__
__|_   GGGG    R   R   III    DDDD      _|__
__|_                                    _|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
  |  |  |  |  |  |  |  |  |  |  |  |  |  |  


Copyright (C) 2015 Peter Boyle, Azusa Yamaguchi, Guido Cossu, Antonin Portelli and other authors

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
Current Grid git commit hash=188d2c7a4dc77807b545f5f2813cdb589b9e44ca: (HEAD -> develop, gh/develop, gh/HEAD) uncommitted changes

Grid : Message : ================================================ 
Grid : Message : MPI is initialised and logging filters activated 
Grid : Message : ================================================ 
Grid : Message : Requested 2147483648 byte stencil comms buffers 
Grid : Message : MemoryManager Cache 34004218675 bytes 
Grid : Message : MemoryManager::Init() setting up
Grid : Message : MemoryManager::Init() cache pool for recent allocations: SMALL 8 LARGE 2
Grid : Message : MemoryManager::Init() Non unified: Caching accelerator data in dedicated memory
Grid : Message : MemoryManager::Init() Using cudaMalloc
Grid : Message : 1.513628 s : Grid Layout
Grid : Message : 1.513632 s : 	Global lattice size  : 64 64 64 128 
Grid : Message : 1.513640 s : 	OpenMP threads       : 4
Grid : Message : 1.513643 s : 	MPI tasks            : 2 2 2 4 
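The layout above is consistent with the communicator sizes reported earlier: a 2x2x2x4 MPI decomposition gives 32 ranks (the world communicator), 4 ranks per node (the node communicator), hence 8 nodes, and a 32x32x32x32 local volume per rank. A quick check of that arithmetic, for illustration only:

# Check the MPI decomposition of the 64 64 64 128 global lattice reported above.
global_lattice = [64, 64, 64, 128]
mpi_tasks      = [2, 2, 2, 4]

ranks = 1
for t in mpi_tasks:
    ranks *= t
local = [g // t for g, t in zip(global_lattice, mpi_tasks)]

print(ranks)        # 32 -> matches "World communicator of size 32"
print(local)        # [32, 32, 32, 32] local lattice per rank
print(ranks // 4)   # 8 nodes at 4 ranks per node ("Node communicator of size 4")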
Grid : Message : 1.553672 s : Making s innermost grids
Grid : Message : 1.605621 s : Initialising 4d RNG
Grid : Message : 1.694354 s : Initialising parallel RNG with unique string 'The 4D RNG'
Grid : Message : 1.694384 s : Seed SHA256: 49db4542db694e3b1a74bf2592a8c1b83bfebbe18401693c2609a4c3af1
Grid : Message : 2.500004 s : Initialising 5d RNG
Grid : Message : 3.891904 s : Initialising parallel RNG with unique string 'The 5D RNG'
Grid : Message : 3.891940 s : Seed SHA256: b6316f2fac44ce14111f93e0296389330b077bfd0a7b359f781c58589f8a
Grid : Message : 20.586328 s : Initialised RNGs
Grid : Message : 25.445845 s : Drawing gauge field
Grid : Message : 26.999200 s : Random gauge initialised 
Grid : Message : 26.266460 s : Setting up Cshift based reference 
Grid : Message : 54.944581 s : *****************************************************************
Grid : Message : 54.944604 s : * Kernel options --dslash-generic, --dslash-unroll, --dslash-asm
Grid : Message : 54.944606 s : *****************************************************************
Grid : Message : 54.944607 s : *****************************************************************
Grid : Message : 54.944608 s : * Benchmarking DomainWallFermionR::Dhop                  
Grid : Message : 54.944609 s : * Vectorising space-time by 8
Grid : Message : 54.944610 s : * VComplexF size is 64 B
Grid : Message : 54.944611 s : * SINGLE precision 
Grid : Message : 54.944613 s : * Using Overlapped Comms/Compute
Grid : Message : 54.944614 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 54.944617 s : *****************************************************************
Grid : Message : 57.230120 s : Called warmup
Grid : Message : 337.690717 s : Called Dw 30000 times in 2.80667e+08 us
Grid : Message : 337.690768 s : mflop/s =   7.57484e+07
Grid : Message : 337.690771 s : mflop/s per rank =  2.36714e+06
Grid : Message : 337.690776 s : mflop/s per node =  9.46855e+06
Grid : Message : 337.690779 s : RF  GiB/s (base 2) =   153919
Grid : Message : 337.690781 s : mem GiB/s (base 2) =   96199.4
Grid : Message : 337.694292 s : norm diff   1.07359e-13
Grid : Message : 337.739287 s : #### Dhop calls report 
Grid : Message : 337.739296 s : WilsonFermion5D Number of DhopEO Calls   : 60002
Grid : Message : 337.739303 s : WilsonFermion5D TotalTime   /Calls        : 4681.75 us
Grid : Message : 337.739307 s : WilsonFermion5D CommTime    /Calls        : 3190.3 us
Grid : Message : 337.739310 s : WilsonFermion5D FaceTime    /Calls        : 476.226 us
Grid : Message : 337.739313 s : WilsonFermion5D ComputeTime1/Calls        : 4.79697 us
Grid : Message : 337.739315 s : WilsonFermion5D ComputeTime2/Calls        : 1028.68 us
Grid : Message : 337.739352 s : Average mflops/s per call                : 6.30091e+10
Grid : Message : 337.739356 s : Average mflops/s per call per rank       : 1.96903e+09
Grid : Message : 337.739358 s : Average mflops/s per call per node       : 7.87614e+09
Grid : Message : 337.739360 s : Average mflops/s per call (full)         : 7.70604e+07
Grid : Message : 337.739362 s : Average mflops/s per call per rank (full): 2.40814e+06
Grid : Message : 337.739366 s : Average mflops/s per call per node (full): 9.63255e+06
Grid : Message : 337.739368 s : WilsonFermion5D Stencil
Grid : Message : 337.739371 s : WilsonFermion5D StencilEven
Grid : Message : 337.739372 s : WilsonFermion5D StencilOdd
Grid : Message : 337.739375 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 337.739377 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 337.739380 s : WilsonFermion5D StencilOdd  Reporti()
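The per-rank and per-node rates in the Dw report above follow directly from the aggregate rate and the 32-rank / 8-node layout; a short check using the values printed above:

# Derive the per-rank and per-node rates quoted in the Dw report above.
mflops_total   = 7.57484e7        # "mflop/s" line
ranks          = 32               # world communicator size
ranks_per_node = 4                # node communicator size
nodes          = ranks // ranks_per_node   # 8

print(mflops_total / ranks)       # ~2.36714e6 -> "mflop/s per rank"
print(mflops_total / nodes)       # ~9.46855e6 -> "mflop/s per node"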
Grid : Message : 393.331040 s : Compare to naive wilson implementation Dag to verify correctness
Grid : Message : 393.331340 s : Called DwDag
Grid : Message : 393.331350 s : norm dag result 12.0421
Grid : Message : 393.530000 s : norm dag ref    12.0421
Grid : Message : 393.688220 s : norm dag diff   7.28475e-14
Grid : Message : 393.117132 s : Calling Deo and Doe and //assert Deo+Doe == Dunprec
Grid : Message : 393.510943 s : src_e0.499997
Grid : Message : 393.922905 s : src_o0.500003
Grid : Message : 394.197820 s : *********************************************************
Grid : Message : 394.197880 s : * Benchmarking DomainWallFermionF::DhopEO                
Grid : Message : 394.197890 s : * Vectorising space-time by 8
Grid : Message : 394.197900 s : * SINGLE precision 
Grid : Message : 394.197910 s : * Using Overlapped Comms/Compute
Grid : Message : 394.197920 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 394.197950 s : *********************************************************
Grid : Message : 531.730998 s : Deo mflop/s =   7.72224e+07
Grid : Message : 531.731032 s : Deo mflop/s per rank   2.4132e+06
Grid : Message : 531.731034 s : Deo mflop/s per node   9.6528e+06
Grid : Message : 531.731037 s : #### Dhop calls report 
Grid : Message : 531.731039 s : WilsonFermion5D Number of DhopEO Calls   : 30001
Grid : Message : 531.731041 s : WilsonFermion5D TotalTime   /Calls        : 4590.1 us
Grid : Message : 531.731043 s : WilsonFermion5D CommTime    /Calls        : 3028.81 us
Grid : Message : 531.731045 s : WilsonFermion5D FaceTime    /Calls        : 582.278 us
Grid : Message : 531.731047 s : WilsonFermion5D ComputeTime1/Calls        : 5.96767 us
Grid : Message : 531.731049 s : WilsonFermion5D ComputeTime2/Calls        : 1004.83 us
Grid : Message : 531.731070 s : Average mflops/s per call                : 5.27366e+10
Grid : Message : 531.731073 s : Average mflops/s per call per rank       : 1.64802e+09
Grid : Message : 531.731075 s : Average mflops/s per call per node       : 6.59208e+09
Grid : Message : 531.731077 s : Average mflops/s per call (full)         : 7.85991e+07
Grid : Message : 531.731084 s : Average mflops/s per call per rank (full): 2.45622e+06
Grid : Message : 531.731086 s : Average mflops/s per call per node (full): 9.82488e+06
Grid : Message : 531.731089 s : WilsonFermion5D Stencil
Grid : Message : 531.731090 s : WilsonFermion5D StencilEven
Grid : Message : 531.731092 s : WilsonFermion5D StencilOdd
Grid : Message : 531.731093 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 531.731095 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 531.731097 s : WilsonFermion5D StencilOdd  Reporti()
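The DhopEO breakdown above shows that, even with overlapped comms/compute, most of each call is spent in communication. Illustrative arithmetic on the per-call times printed above:

# Fraction of each DhopEO call spent in the phases reported above (times in us).
total   = 4590.1      # TotalTime   /Calls
comm    = 3028.81     # CommTime    /Calls
face    = 582.278     # FaceTime    /Calls
compute = 1004.83     # ComputeTime2/Calls

for name, t in [("comm", comm), ("face", face), ("compute", compute)]:
    print(name, round(t / total, 2))   # comm ~0.66, face ~0.13, compute ~0.22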
Grid : Message : 531.803820 s : r_e6.02113
Grid : Message : 531.811275 s : r_o6.02101
Grid : Message : 531.817646 s : res12.0421
Grid : Message : 532.496357 s : norm diff   0
Grid : Message : 533.344288 s : norm diff even  0
Grid : Message : 533.746146 s : norm diff odd   0
