mirror of https://github.com/paboyle/Grid.git
synced 2025-11-04 05:54:32 +00:00

Summit up to date
		@@ -1,179 +0,0 @@
OPENMPI detected
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device Number    : 0
AcceleratorCudaInit[0]: ========================
AcceleratorCudaInit[0]: Device identifier: Tesla V100-SXM2-16GB
AcceleratorCudaInit[0]:   totalGlobalMem: 16911433728
AcceleratorCudaInit[0]:   managedMemory: 1
AcceleratorCudaInit[0]:   isMultiGpuBoard: 0
AcceleratorCudaInit[0]:   warpSize: 32
AcceleratorCudaInit[0]:   pciBusID: 4
AcceleratorCudaInit[0]:   pciDeviceID: 0
AcceleratorCudaInit[0]: maxGridSize (2147483647,65535,65535)
AcceleratorCudaInit: rank 0 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 0 device 0 bus id: 0004:04:00.0
AcceleratorCudaInit: ================================================
SharedMemoryMpi:  World communicator of size 24
SharedMemoryMpi:  Node  communicator of size 6
0SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 1073741824bytes at 0x200060000000 for comms buffers
Setting up IPC

__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|_ |  |  |  |  |  |  |  |  |  |  |  | _|__
__|_                                    _|__
__|_   GGGG    RRRR    III    DDDD      _|__
__|_  G        R   R    I     D   D     _|__
__|_  G        R   R    I     D    D    _|__
__|_  G  GG    RRRR     I     D    D    _|__
__|_  G   G    R  R     I     D   D     _|__
__|_   GGGG    R   R   III    DDDD      _|__
__|_                                    _|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
  |  |  |  |  |  |  |  |  |  |  |  |  |  |

Copyright (C) 2015 Peter Boyle, Azusa Yamaguchi, Guido Cossu, Antonin Portelli and other authors

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
Current Grid git commit hash=7cb1ff7395a5833ded6526c43891bd07a0436290: (HEAD -> develop, origin/develop, origin/HEAD) clean

Grid : Message : ================================================ 
Grid : Message : MPI is initialised and logging filters activated
Grid : Message : ================================================
Grid : Message : Requested 1073741824 byte stencil comms buffers
AcceleratorCudaInit: rank 1 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 1 device 1 bus id: 0004:05:00.0
AcceleratorCudaInit: rank 2 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 2 device 2 bus id: 0004:06:00.0
AcceleratorCudaInit: rank 5 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 5 device 5 bus id: 0035:05:00.0
AcceleratorCudaInit: rank 4 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 4 device 4 bus id: 0035:04:00.0
AcceleratorCudaInit: rank 3 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 3 device 3 bus id: 0035:03:00.0
Grid : Message : MemoryManager Cache 13529146982 bytes
Grid : Message : MemoryManager::Init() setting up
Grid : Message : MemoryManager::Init() cache pool for recent allocations: SMALL 8 LARGE 2
Grid : Message : MemoryManager::Init() Non unified: Caching accelerator data in dedicated memory
Grid : Message : MemoryManager::Init() Using cudaMalloc
Grid : Message : 2.137929 s : Grid is setup to use 6 threads
Grid : Message : 2.137941 s : Number of iterations to average: 250
Grid : Message : 2.137950 s : ====================================================================================================
Grid : Message : 2.137958 s : = Benchmarking sequential halo exchange from host memory
Grid : Message : 2.137966 s : ====================================================================================================
Grid : Message : 2.137974 s :  L  	 Ls  	    bytes		MB/s uni	MB/s bidi
AcceleratorCudaInit: rank 22 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 10 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 15 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 21 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 20 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 7 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 9 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 11 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 8 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 6 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 19 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 23 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 18 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 12 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 16 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 13 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 14 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 17 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
Grid : Message : 2.604949 s :    8	8	     393216       89973.9  		179947.8
Grid : Message : 2.668249 s :    8	8	     393216       18650.3  		37300.5
Grid : Message : 2.732288 s :    8	8	     393216       18428.5  		36857.1
Grid : Message : 2.753565 s :    8	8	     393216       55497.2  		110994.4
Grid : Message : 2.808960 s :   12	8	    1327104       100181.5  		200363.0
Grid : Message : 3.226900 s :   12	8	    1327104       20600.5  		41201.0
Grid : Message : 3.167459 s :   12	8	    1327104       24104.6  		48209.2
Grid : Message : 3.227660 s :   12	8	    1327104       66156.7  		132313.5
Grid : Message : 3.413570 s :   16	8	    3145728       56174.4  		112348.8
Grid : Message : 3.802697 s :   16	8	    3145728       24255.9  		48511.7
Grid : Message : 4.190498 s :   16	8	    3145728       24336.7  		48673.4
Grid : Message : 4.385171 s :   16	8	    3145728       48484.1  		96968.2
Grid : Message : 4.805284 s :   20	8	    6144000       46380.5  		92761.1
Grid : Message : 5.562975 s :   20	8	    6144000       24328.5  		48656.9
Grid : Message : 6.322562 s :   20	8	    6144000       24266.7  		48533.4
Grid : Message : 6.773598 s :   20	8	    6144000       40868.5  		81736.9
Grid : Message : 7.600999 s :   24	8	   10616832       40198.3  		80396.6
Grid : Message : 8.912917 s :   24	8	   10616832       24279.5  		48559.1
Grid : Message : 10.220961 s :   24	8	   10616832       24350.2  		48700.4
Grid : Message : 11.728250 s :   24	8	   10616832       37390.9  		74781.8
Grid : Message : 12.497258 s :   28	8	   16859136       36792.2  		73584.5
Grid : Message : 14.585387 s :   28	8	   16859136       24222.2  		48444.3
Grid : Message : 16.664783 s :   28	8	   16859136       24323.4  		48646.8
Grid : Message : 17.955238 s :   28	8	   16859136       39194.7  		78389.4
Grid : Message : 20.136479 s :   32	8	   25165824       35718.3  		71436.5
Grid : Message : 23.241958 s :   32	8	   25165824       24311.4  		48622.9
Grid : Message : 26.344810 s :   32	8	   25165824       24331.9  		48663.7
Grid : Message : 28.384420 s :   32	8	   25165824       37016.3  		74032.7
Grid : Message : 28.388879 s : ====================================================================================================
Grid : Message : 28.388894 s : = Benchmarking sequential halo exchange from GPU memory
Grid : Message : 28.388909 s : ====================================================================================================
Grid : Message : 28.388924 s :  L  	 Ls  	    bytes		MB/s uni	MB/s bidi
Grid : Message : 28.553993 s :    8	8	     393216       8272.4  		16544.7
Grid : Message : 28.679592 s :    8	8	     393216       9395.4  		18790.8
Grid : Message : 28.811112 s :    8	8	     393216       8971.0  		17942.0
Grid : Message : 28.843770 s :    8	8	     393216       36145.6  		72291.2
Grid : Message : 28.981754 s :   12	8	    1327104       49591.6  		99183.2
Grid : Message : 29.299764 s :   12	8	    1327104       12520.8  		25041.7
Grid : Message : 29.620288 s :   12	8	    1327104       12422.2  		24844.4
Grid : Message : 29.657645 s :   12	8	    1327104       106637.5  		213275.1
Grid : Message : 29.952933 s :   16	8	    3145728       43939.2  		87878.5
Grid : Message : 30.585411 s :   16	8	    3145728       14922.1  		29844.2
Grid : Message : 31.219781 s :   16	8	    3145728       14877.2  		29754.4
Grid : Message : 31.285017 s :   16	8	    3145728       144724.3  		289448.7
Grid : Message : 31.706443 s :   20	8	    6144000       54676.2  		109352.4
Grid : Message : 32.739205 s :   20	8	    6144000       17848.0  		35696.1
Grid : Message : 33.771852 s :   20	8	    6144000       17849.9  		35699.7
Grid : Message : 33.871981 s :   20	8	    6144000       184141.4  		368282.8
Grid : Message : 34.536808 s :   24	8	   10616832       55784.3  		111568.6
Grid : Message : 36.275648 s :   24	8	   10616832       18317.6  		36635.3
Grid : Message : 37.997181 s :   24	8	   10616832       18501.7  		37003.4
Grid : Message : 38.140442 s :   24	8	   10616832       222383.9  		444767.9
Grid : Message : 39.177222 s :   28	8	   16859136       56609.7  		113219.4
Grid : Message : 41.874755 s :   28	8	   16859136       18749.9  		37499.8
Grid : Message : 44.529381 s :   28	8	   16859136       19052.9  		38105.8
Grid : Message : 44.742192 s :   28	8	   16859136       237717.1  		475434.2
Grid : Message : 46.184000 s :   32	8	   25165824       57091.2  		114182.4
Grid : Message : 50.734740 s :   32	8	   25165824       19411.0  		38821.9
Grid : Message : 53.931228 s :   32	8	   25165824       19570.6  		39141.2
Grid : Message : 54.238467 s :   32	8	   25165824       245765.6  		491531.2
Grid : Message : 54.268664 s : ====================================================================================================
Grid : Message : 54.268680 s : = All done; Bye Bye
Grid : Message : 54.268691 s : ====================================================================================================
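The byte counts in the two tables above follow directly from the lattice geometry, and the bidi column is exactly twice the uni column. A minimal sketch, assuming each transfer moves one face of L^3 x Ls sites with a single-precision spinor (24 reals, 96 bytes) per site; `halo_bytes` is an illustrative name, not a Grid function:

```shell
# Sketch only: reproduce the "bytes" column of the halo-exchange tables.
# Assumption: payload per transfer = L^3 * Ls sites * 24 floats * 4 bytes.
halo_bytes() { echo $(( $1 * $1 * $1 * $2 * 24 * 4 )); }

halo_bytes 8 8     # first row of each table: 393216
halo_bytes 32 8    # last row of each table: 25165824
```

With that payload fixed, MB/s uni is bytes divided by elapsed seconds, and MB/s bidi counts the same payload moving both directions.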
@@ -3,7 +3,7 @@
	      --enable-gen-simd-width=32 \
	      --enable-unified=no \
	      --enable-shm=no \
	      --disable-gparity \
	      --enable-gparity \
	      --disable-setdevice \
	      --disable-fermion-reps \
	      --enable-accelerator=cuda \

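The hunk above shows only a fragment of the configure invocation. A minimal sketch of a complete command line consistent with those flags; the `../configure` path and the `--enable-comms`/`--enable-simd` choices are assumptions, not taken from this diff:

```shell
# Sketch only: a full configure call matching the flags in the hunk above.
# --enable-comms=mpi and --enable-simd=GPU are assumed, not from the diff.
../configure \
      --enable-comms=mpi \
      --enable-simd=GPU \
      --enable-gen-simd-width=32 \
      --enable-unified=no \
      --enable-shm=no \
      --enable-gparity \
      --disable-setdevice \
      --disable-fermion-reps \
      --enable-accelerator=cuda
```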
@@ -10,19 +10,16 @@ AcceleratorCudaInit[0]:   warpSize: 32
AcceleratorCudaInit[0]:   pciBusID: 4
AcceleratorCudaInit[0]:   pciDeviceID: 0
AcceleratorCudaInit[0]: maxGridSize (2147483647,65535,65535)
AcceleratorCudaInit: rank 0 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: using default device
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding
AcceleratorCudaInit: Configure options --enable-setdevice=no
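Option (b) in the log above can be sketched as a small launcher script. Everything here is illustrative: `OMPI_COMM_WORLD_LOCAL_RANK` is OpenMPI's local-rank variable (jsrun and SLURM export different names), and the UCX device naming varies by system:

```shell
#!/bin/bash
# Sketch only: per-rank bindings for a build with --enable-setdevice=no.
# The HCA naming and one-GPU-per-rank mapping are system-dependent assumptions.
lrank=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
export CUDA_VISIBLE_DEVICES=$lrank          # one visible GPU per local rank
export UCX_NET_DEVICES=mlx5_${lrank}:1      # assumed HCA naming; adjust per system
if [ $# -gt 0 ]; then exec "$@"; fi         # e.g. ./wrap.sh ./Benchmark_comms ...
```

Launched as `mpirun ./wrap.sh ./binary args`, each rank then sees exactly one device, which is why the second log reports `device 0` for every local rank.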
local rank 0 device 0 bus id: 0004:04:00.0
local rank 0 device 0 bus id: 0004:04:00.0
AcceleratorCudaInit: ================================================
SharedMemoryMpi:  World communicator of size 24
SharedMemoryMpi:  Node  communicator of size 6
0SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 2147483648bytes at 0x200080000000 for comms buffers
AcceleratorCudaInit: rank 3 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 3 device 3 bus id: 0035:03:00.0
AcceleratorCudaInit: rank 5 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 5 device 5 bus id: 0035:05:00.0
SharedMemoryMpi:  Node  communicator of size 1
0SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 1073741824bytes at 0x200080000000 - 2000bfffffff for comms buffers
Setting up IPC

__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
@@ -36,6 +33,11 @@ __|_  G  GG    RRRR     I     D    D    _|__
__|_  G   G    R  R     I     D   D     _|__
__|_   GGGG    R   R   III    DDDD      _|__
__|_                                    _|__
local rank 5 device 0 bus id: 0035:05:00.0
local rank 1 device 0 bus id: 0004:05:00.0
local rank 2 device 0 bus id: 0004:06:00.0
local rank 3 device 0 bus id: 0035:03:00.0
local rank 4 device 0 bus id: 0035:04:00.0
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
  |  |  |  |  |  |  |  |  |  |  |  |  |  |
@@ -45,15 +47,6 @@ Copyright (C) 2015 Peter Boyle, Azusa Yamaguchi, Guido Cossu, Antonin Portelli a

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
AcceleratorCudaInit: rank 4 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 4 device 4 bus id: 0035:04:00.0
AcceleratorCudaInit: rank 1 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 1 device 1 bus id: 0004:05:00.0
AcceleratorCudaInit: rank 2 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
local rank 2 device 2 bus id: 0004:06:00.0
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

@@ -61,146 +54,63 @@ This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
Current Grid git commit hash=7cb1ff7395a5833ded6526c43891bd07a0436290: (HEAD -> develop, origin/develop, origin/HEAD) clean
Current Grid git commit hash=1713de35c0dc339564661dd7df8a72583f889e91: (HEAD -> feature/dirichlet) uncommited changes

Grid : Message : ================================================
Grid : Message : MPI is initialised and logging filters activated
Grid : Message : ================================================
Grid : Message : Requested 2147483648 byte stencil comms buffers
Grid : Message : MemoryManager Cache 8388608000 bytes
Grid : Message : Requested 1073741824 byte stencil comms buffers
Grid : Message : MemoryManager Cache 4194304000 bytes
Grid : Message : MemoryManager::Init() setting up
Grid : Message : MemoryManager::Init() cache pool for recent allocations: SMALL 8 LARGE 2
Grid : Message : MemoryManager::Init() Non unified: Caching accelerator data in dedicated memory
Grid : Message : MemoryManager::Init() Using cudaMalloc
Grid : Message : 1.731905 s : Grid Layout
Grid : Message : 1.731915 s : 	Global lattice size  : 48 48 48 72
Grid : Message : 1.731928 s : 	OpenMP threads       : 6
Grid : Message : 1.731938 s : 	MPI tasks            : 2 2 2 3
AcceleratorCudaInit: rank 9 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 23 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 22 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 21 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 18 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 6 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 7 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 10 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 8 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 11 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 20 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 19 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 13 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 12 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 14 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 16 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 15 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes
AcceleratorCudaInit: rank 17 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes
Grid : Message : 2.683494 s : Making s innermost grids
Grid : Message : 2.780034 s : Initialising 4d RNG
Grid : Message : 2.833099 s : Intialising parallel RNG with unique string 'The 4D RNG'
Grid : Message : 2.833121 s : Seed SHA256: 49db4542db694e3b1a74bf2592a8c1b83bfebbe18401693c2609a4c3af1
Grid : Message : 2.916841 s : Initialising 5d RNG
Grid : Message : 3.762880 s : Intialising parallel RNG with unique string 'The 5D RNG'
Grid : Message : 3.762902 s : Seed SHA256: b6316f2fac44ce14111f93e0296389330b077bfd0a7b359f781c58589f8a
Grid : Message : 5.264345 s : Initialised RNGs
Grid : Message : 6.489904 s : Drawing gauge field
Grid : Message : 6.729262 s : Random gauge initialised
Grid : Message : 7.781273 s : Setting up Cshift based reference
Grid : Message : 8.725313 s : *****************************************************************
Grid : Message : 8.725332 s : * Kernel options --dslash-generic, --dslash-unroll, --dslash-asm
Grid : Message : 8.725342 s : *****************************************************************
Grid : Message : 8.725352 s : *****************************************************************
Grid : Message : 8.725362 s : * Benchmarking DomainWallFermionR::Dhop
Grid : Message : 8.725372 s : * Vectorising space-time by 4
Grid : Message : 8.725383 s : * VComplexF size is 32 B
Grid : Message : 8.725395 s : * SINGLE precision
Grid : Message : 8.725405 s : * Using Overlapped Comms/Compute
Grid : Message : 8.725415 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 8.725425 s : *****************************************************************
Grid : Message : 9.465229 s : Called warmup
Grid : Message : 58.646066 s : Called Dw 3000 times in 4.91764e+07 us
Grid : Message : 58.646121 s : mflop/s =   1.02592e+07
Grid : Message : 58.646134 s : mflop/s per rank =  427468
Grid : Message : 58.646145 s : mflop/s per node =  2.56481e+06
Grid : Message : 58.646156 s : RF  GiB/s (base 2) =   20846.5
Grid : Message : 58.646166 s : mem GiB/s (base 2) =   13029.1
Grid : Message : 58.648008 s : norm diff   1.04778e-13
Grid : Message : 58.734885 s : #### Dhop calls report
Grid : Message : 58.734897 s : WilsonFermion5D Number of DhopEO Calls   : 6002
Grid : Message : 58.734909 s : WilsonFermion5D TotalTime   /Calls        : 8217.71 us
Grid : Message : 58.734922 s : WilsonFermion5D CommTime    /Calls        : 7109.5 us
Grid : Message : 58.734933 s : WilsonFermion5D FaceTime    /Calls        : 446.623 us
Grid : Message : 58.734943 s : WilsonFermion5D ComputeTime1/Calls        : 18.0558 us
Grid : Message : 58.734953 s : WilsonFermion5D ComputeTime2/Calls        : 731.097 us
Grid : Message : 58.734979 s : Average mflops/s per call                : 4.8157e+09
Grid : Message : 58.734989 s : Average mflops/s per call per rank       : 2.00654e+08
Grid : Message : 58.734999 s : Average mflops/s per call per node       : 1.20393e+09
Grid : Message : 58.735008 s : Average mflops/s per call (full)         : 1.04183e+07
Grid : Message : 58.735017 s : Average mflops/s per call per rank (full): 434094
Grid : Message : 58.735026 s : Average mflops/s per call per node (full): 2.60456e+06
Grid : Message : 58.735035 s : WilsonFermion5D Stencil
Grid : Message : 58.735043 s : WilsonFermion5D StencilEven
Grid : Message : 58.735051 s : WilsonFermion5D StencilOdd
Grid : Message : 58.735059 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 58.735067 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 58.735075 s : WilsonFermion5D StencilOdd  Reporti()
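The headline `mflop/s = 1.02592e+07` in the Dw timing above can be checked by hand. A sketch, assuming Grid's conventional 1320 flops per site for the Wilson hop and Ls=16 (the fifth-dimension extent is not printed in this excerpt); the 48.48.48.72 lattice and the call count are from the log:

```shell
# Sketch only: back out "mflop/s = 1.02592e+07" from "Called Dw 3000 times
# in 4.91764e+07 us". 1320 flops/site and Ls=16 are assumptions.
awk 'BEGIN {
  volume = 48*48*48*72               # global 4d sites, from "Grid Layout"
  flops_per_call = 1320 * volume * 16
  printf "%.5e\n", flops_per_call * 3000 / 4.91764e7   # flops/us == Mflop/s
}'
```

Dividing by the 24 ranks (4 nodes of 6 GPUs) similarly recovers the per-rank and per-node lines.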
Grid : Message : 64.934380 s : Compare to naive wilson implementation Dag to verify correctness
Grid : Message : 64.934740 s : Called DwDag
Grid : Message : 64.934870 s : norm dag result 12.0422
Grid : Message : 64.120756 s : norm dag ref    12.0422
Grid : Message : 64.149389 s : norm dag diff   7.6644e-14
Grid : Message : 64.317786 s : Calling Deo and Doe and //assert Deo+Doe == Dunprec
Grid : Message : 64.465331 s : src_e0.499995
Grid : Message : 64.524653 s : src_o0.500005
Grid : Message : 64.558706 s : *********************************************************
Grid : Message : 64.558717 s : * Benchmarking DomainWallFermionF::DhopEO
Grid : Message : 64.558727 s : * Vectorising space-time by 4
Grid : Message : 64.558737 s : * SINGLE precision
Grid : Message : 64.558745 s : * Using Overlapped Comms/Compute
Grid : Message : 64.558753 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 64.558761 s : *********************************************************
Grid : Message : 92.702145 s : Deo mflop/s =   8.97692e+06
Grid : Message : 92.702185 s : Deo mflop/s per rank   374038
Grid : Message : 92.702198 s : Deo mflop/s per node   2.24423e+06
Grid : Message : 92.702209 s : #### Dhop calls report
Grid : Message : 92.702223 s : WilsonFermion5D Number of DhopEO Calls   : 3001
Grid : Message : 92.702240 s : WilsonFermion5D TotalTime   /Calls        : 9377.88 us
Grid : Message : 92.702257 s : WilsonFermion5D CommTime    /Calls        : 8221.84 us
Grid : Message : 92.702277 s : WilsonFermion5D FaceTime    /Calls        : 543.548 us
Grid : Message : 92.702301 s : WilsonFermion5D ComputeTime1/Calls        : 20.936 us
Grid : Message : 92.702322 s : WilsonFermion5D ComputeTime2/Calls        : 732.33 us
Grid : Message : 92.702376 s : Average mflops/s per call                : 4.13001e+09
Grid : Message : 92.702387 s : Average mflops/s per call per rank       : 1.72084e+08
Grid : Message : 92.702397 s : Average mflops/s per call per node       : 1.0325e+09
Grid : Message : 92.702407 s : Average mflops/s per call (full)         : 9.12937e+06
Grid : Message : 92.702416 s : Average mflops/s per call per rank (full): 380391
Grid : Message : 92.702426 s : Average mflops/s per call per node (full): 2.28234e+06
Grid : Message : 92.702435 s : WilsonFermion5D Stencil
Grid : Message : 92.702443 s : WilsonFermion5D StencilEven
Grid : Message : 92.702451 s : WilsonFermion5D StencilOdd
Grid : Message : 92.702459 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 92.702467 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 92.702475 s : WilsonFermion5D StencilOdd  Reporti()
Grid : Message : 92.772983 s : r_e6.02121
Grid : Message : 92.786384 s : r_o6.02102
Grid : Message : 92.799622 s : res12.0422
Grid : Message : 93.860500 s : norm diff   0
Grid : Message : 93.162026 s : norm diff even  0
Grid : Message : 93.197529 s : norm diff odd   0

Grid : Message : 0.179000 s : ++++++++++++++++++++++++++++++++++++++++++++++++
Grid : Message : 0.196000 s :  Testing with full communication
Grid : Message : 0.211000 s : ++++++++++++++++++++++++++++++++++++++++++++++++
Grid : Message : 0.225000 s : Grid Layout
Grid : Message : 0.233000 s : 	Global lattice size  : 48 48 48 72
Grid : Message : 0.246000 s : 	OpenMP threads       : 6
Grid : Message : 0.255000 s : 	MPI tasks            : 2 2 2 3
Grid : Message : 0.182200 s : Initialising 4d RNG
Grid : Message : 0.233863 s : Intialising parallel RNG with unique string 'The 4D RNG'
Grid : Message : 0.233886 s : Seed SHA256: 49db4542db694e3b1a74bf2592a8c1b83bfebbe18401693c2609a4c3af1
Grid : Message : 0.245805 s : Initialising 5d RNG
Grid : Message : 1.710720 s : Intialising parallel RNG with unique string 'The 5D RNG'
Grid : Message : 1.710950 s : Seed SHA256: b6316f2fac44ce14111f93e0296389330b077bfd0a7b359f781c58589f8a
Grid : Message : 2.220272 s : Drawing gauge field
Grid : Message : 2.418119 s : Random gauge initialised
Grid : Message : 2.418142 s : Applying BCs for Dirichlet Block5 [0 0 0 0 0]
Grid : Message : 2.418156 s : Applying BCs for Dirichlet Block4 [0 0 0 0]
Grid : Message : 2.489588 s : Setting up Cshift based reference
Grid : Message : 13.921239 s : *****************************************************************
Grid : Message : 13.921261 s : * Kernel options --dslash-generic, --dslash-unroll, --dslash-asm
Grid : Message : 13.921270 s : *****************************************************************
Grid : Message : 13.921279 s : *****************************************************************
Grid : Message : 13.921288 s : * Benchmarking DomainWallFermionR::Dhop
Grid : Message : 13.921296 s : * Vectorising space-time by 4
Grid : Message : 13.921305 s : * VComplexF size is 32 B
Grid : Message : 13.921314 s : * SINGLE precision
Grid : Message : 13.921321 s : * Using Overlapped Comms/Compute
Grid : Message : 13.921328 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 13.921335 s : *****************************************************************
Grid : Message : 14.821339 s : Called warmup
Grid : Message : 23.975467 s : Called Dw 300 times in 9.15155e+06 us
Grid : Message : 23.975528 s : mflop/s =   5.51286e+06
Grid : Message : 23.975543 s : mflop/s per rank =  229702
Grid : Message : 23.975557 s : mflop/s per node =  229702
Grid : Message : 23.989684 s : norm diff   5.09279e-313  Line 291
Grid : Message : 39.450493 s : ----------------------------------------------------------------
Grid : Message : 39.450517 s : Compare to naive wilson implementation Dag to verify correctness
 | 
			
		||||
Grid : Message : 39.450526 s : ----------------------------------------------------------------
 | 
			
		||||
Grid : Message : 39.450534 s : Called DwDag
 | 
			
		||||
Grid : Message : 39.450542 s : norm dag result nan
 | 
			
		||||
Grid : Message : 39.451564 s : norm dag ref    nan
 | 
			
		||||
Grid : Message : 39.455714 s : norm dag diff   nan  Line 354
 | 
			
		||||
 
 | 
			
		||||
@@ -10,14 +10,21 @@ AcceleratorCudaInit[0]:   warpSize: 32 
AcceleratorCudaInit[0]:   pciBusID: 4 
AcceleratorCudaInit[0]:   pciDeviceID: 0 
AcceleratorCudaInit[0]: maxGridSize (2147483647,65535,65535)
AcceleratorCudaInit: rank 0 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: using default device 
AcceleratorCudaInit: assume user either uses
AcceleratorCudaInit: a) IBM jsrun, or 
AcceleratorCudaInit: b) invokes through a wrapping script to set CUDA_VISIBLE_DEVICES, UCX_NET_DEVICES, and numa binding 
AcceleratorCudaInit: Configure options --enable-setdevice=no 
local rank 0 device 0 bus id: 0004:04:00.0
AcceleratorCudaInit: ================================================
SharedMemoryMpi:  World communicator of size 24
SharedMemoryMpi:  Node  communicator of size 6
0SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 2147483648bytes at 0x200080000000 for comms buffers 
SharedMemoryMpi:  Node  communicator of size 1
local rank 3 device 0 bus id: 0004:04:00.0
local rank 2 device 0 bus id: 0004:04:00.0
local rank 1 device 0 bus id: 0004:04:00.0
0SharedMemoryMpi:  SharedMemoryMPI.cc acceleratorAllocDevice 1073741824bytes at 0x200080000000 - 2000bfffffff for comms buffers 
Setting up IPC
local rank 5 device 0 bus id: 0004:04:00.0

__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
__|__|__|__|__|__|__|__|__|__|__|__|__|__|__
@@ -39,168 +46,46 @@ Copyright (C) 2015 Peter Boyle, Azusa Yamaguchi, Guido Cossu, Antonin Portelli a

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
local rank 4 device 0 bus id: 0004:04:00.0
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
AcceleratorCudaInit: rank 2 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
local rank 2 device 2 bus id: 0004:06:00.0
AcceleratorCudaInit: rank 1 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
local rank 1 device 1 bus id: 0004:05:00.0
AcceleratorCudaInit: rank 4 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
local rank 4 device 4 bus id: 0035:04:00.0
AcceleratorCudaInit: rank 3 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
local rank 3 device 3 bus id: 0035:03:00.0
AcceleratorCudaInit: rank 5 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
local rank 5 device 5 bus id: 0035:05:00.0
GNU General Public License for more details.
Current Grid git commit hash=7cb1ff7395a5833ded6526c43891bd07a0436290: (HEAD -> develop, origin/develop, origin/HEAD) clean
Current Grid git commit hash=1713de35c0dc339564661dd7df8a72583f889e91: (HEAD -> feature/dirichlet) uncommited changes

Grid : Message : ================================================ 
Grid : Message : MPI is initialised and logging filters activated 
Grid : Message : ================================================ 
Grid : Message : Requested 2147483648 byte stencil comms buffers 
Grid : Message : MemoryManager Cache 8388608000 bytes 
Grid : Message : Requested 1073741824 byte stencil comms buffers 
Grid : Message : MemoryManager::Init() setting up
Grid : Message : MemoryManager::Init() cache pool for recent allocations: SMALL 8 LARGE 2
Grid : Message : MemoryManager::Init() Non unified: Caching accelerator data in dedicated memory
Grid : Message : MemoryManager::Init() Using cudaMalloc
Grid : Message : 1.544984 s : Grid Layout
Grid : Message : 1.544992 s : 	Global lattice size  : 64 64 64 96 
Grid : Message : 1.545003 s : 	OpenMP threads       : 6
Grid : Message : 1.545011 s : 	MPI tasks            : 2 2 2 3 
AcceleratorCudaInit: rank 8 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 6 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 11 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 16 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 17 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 13 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 12 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 21 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 23 setting device to node rank 5
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 22 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 19 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 18 setting device to node rank 0
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 7 setting device to node rank 1
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 10 setting device to node rank 4
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 9 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 14 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 15 setting device to node rank 3
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
AcceleratorCudaInit: rank 20 setting device to node rank 2
AcceleratorCudaInit: Configure options --enable-setdevice=yes 
Grid : Message : 2.994920 s : Making s innermost grids
Grid : Message : 2.232502 s : Initialising 4d RNG
Grid : Message : 2.397047 s : Intialising parallel RNG with unique string 'The 4D RNG'
Grid : Message : 2.397069 s : Seed SHA256: 49db4542db694e3b1a74bf2592a8c1b83bfebbe18401693c2609a4c3af1
Grid : Message : 2.653140 s : Initialising 5d RNG
Grid : Message : 5.285347 s : Intialising parallel RNG with unique string 'The 5D RNG'
Grid : Message : 5.285369 s : Seed SHA256: b6316f2fac44ce14111f93e0296389330b077bfd0a7b359f781c58589f8a
Grid : Message : 9.994738 s : Initialised RNGs
Grid : Message : 13.153426 s : Drawing gauge field
Grid : Message : 13.825697 s : Random gauge initialised 
Grid : Message : 18.537657 s : Setting up Cshift based reference 
Grid : Message : 22.296755 s : *****************************************************************
Grid : Message : 22.296781 s : * Kernel options --dslash-generic, --dslash-unroll, --dslash-asm
Grid : Message : 22.296791 s : *****************************************************************
Grid : Message : 22.296800 s : *****************************************************************
Grid : Message : 22.296809 s : * Benchmarking DomainWallFermionR::Dhop                  
Grid : Message : 22.296818 s : * Vectorising space-time by 4
Grid : Message : 22.296828 s : * VComplexF size is 32 B
Grid : Message : 22.296838 s : * SINGLE precision 
Grid : Message : 22.296847 s : * Using Overlapped Comms/Compute
Grid : Message : 22.296855 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 22.296863 s : *****************************************************************
Grid : Message : 24.746452 s : Called warmup
Grid : Message : 137.525756 s : Called Dw 3000 times in 1.12779e+08 us
Grid : Message : 137.525818 s : mflop/s =   1.41383e+07
Grid : Message : 137.525831 s : mflop/s per rank =  589097
Grid : Message : 137.525843 s : mflop/s per node =  3.53458e+06
Grid : Message : 137.525854 s : RF  GiB/s (base 2) =   28728.7
Grid : Message : 137.525864 s : mem GiB/s (base 2) =   17955.5
Grid : Message : 137.693645 s : norm diff   1.04885e-13
Grid : Message : 137.965585 s : #### Dhop calls report 
Grid : Message : 137.965598 s : WilsonFermion5D Number of DhopEO Calls   : 6002
Grid : Message : 137.965612 s : WilsonFermion5D TotalTime   /Calls        : 18899.7 us
Grid : Message : 137.965624 s : WilsonFermion5D CommTime    /Calls        : 16041.4 us
Grid : Message : 137.965634 s : WilsonFermion5D FaceTime    /Calls        : 859.705 us
Grid : Message : 137.965644 s : WilsonFermion5D ComputeTime1/Calls        : 70.5881 us
Grid : Message : 137.965654 s : WilsonFermion5D ComputeTime2/Calls        : 2094.8 us
Grid : Message : 137.965682 s : Average mflops/s per call                : 3.87638e+09
Grid : Message : 137.965692 s : Average mflops/s per call per rank       : 1.61516e+08
Grid : Message : 137.965702 s : Average mflops/s per call per node       : 9.69095e+08
Grid : Message : 137.965712 s : Average mflops/s per call (full)         : 1.43168e+07
Grid : Message : 137.965721 s : Average mflops/s per call per rank (full): 596533
Grid : Message : 137.965730 s : Average mflops/s per call per node (full): 3.5792e+06
Grid : Message : 137.965740 s : WilsonFermion5D Stencil
Grid : Message : 137.965748 s : WilsonFermion5D StencilEven
Grid : Message : 137.965756 s : WilsonFermion5D StencilOdd
Grid : Message : 137.965764 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 137.965772 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 137.965780 s : WilsonFermion5D StencilOdd  Reporti()
Grid : Message : 156.554605 s : Compare to naive wilson implementation Dag to verify correctness
Grid : Message : 156.554632 s : Called DwDag
Grid : Message : 156.554642 s : norm dag result 12.0421
Grid : Message : 156.639265 s : norm dag ref    12.0421
Grid : Message : 156.888281 s : norm dag diff   7.62057e-14
Grid : Message : 157.609797 s : Calling Deo and Doe and //assert Deo+Doe == Dunprec
Grid : Message : 158.208630 s : src_e0.499996
Grid : Message : 158.162447 s : src_o0.500004
Grid : Message : 158.267780 s : *********************************************************
Grid : Message : 158.267791 s : * Benchmarking DomainWallFermionF::DhopEO                
Grid : Message : 158.267801 s : * Vectorising space-time by 4
Grid : Message : 158.267811 s : * SINGLE precision 
Grid : Message : 158.267820 s : * Using Overlapped Comms/Compute
Grid : Message : 158.267828 s : * Using GENERIC Nc WilsonKernels
Grid : Message : 158.267836 s : *********************************************************
Grid : Message : 216.487829 s : Deo mflop/s =   1.37283e+07
Grid : Message : 216.487869 s : Deo mflop/s per rank   572011
Grid : Message : 216.487881 s : Deo mflop/s per node   3.43206e+06
Grid : Message : 216.487893 s : #### Dhop calls report 
Grid : Message : 216.487903 s : WilsonFermion5D Number of DhopEO Calls   : 3001
Grid : Message : 216.487913 s : WilsonFermion5D TotalTime   /Calls        : 19399.6 us
Grid : Message : 216.487923 s : WilsonFermion5D CommTime    /Calls        : 16475.4 us
Grid : Message : 216.487933 s : WilsonFermion5D FaceTime    /Calls        : 972.393 us
Grid : Message : 216.487943 s : WilsonFermion5D ComputeTime1/Calls        : 49.8474 us
Grid : Message : 216.487953 s : WilsonFermion5D ComputeTime2/Calls        : 2089.93 us
Grid : Message : 216.488001 s : Average mflops/s per call                : 5.39682e+09
Grid : Message : 216.488011 s : Average mflops/s per call per rank       : 2.24867e+08
Grid : Message : 216.488020 s : Average mflops/s per call per node       : 1.3492e+09
Grid : Message : 216.488030 s : Average mflops/s per call (full)         : 1.39479e+07
Grid : Message : 216.488039 s : Average mflops/s per call per rank (full): 581162
Grid : Message : 216.488048 s : Average mflops/s per call per node (full): 3.48697e+06
Grid : Message : 216.488057 s : WilsonFermion5D Stencil
Grid : Message : 216.488065 s : WilsonFermion5D StencilEven
Grid : Message : 216.488073 s : WilsonFermion5D StencilOdd
Grid : Message : 216.488081 s : WilsonFermion5D Stencil     Reporti()
Grid : Message : 216.488089 s : WilsonFermion5D StencilEven Reporti()
Grid : Message : 216.488097 s : WilsonFermion5D StencilOdd  Reporti()
Grid : Message : 217.384495 s : r_e6.02113
Grid : Message : 217.426121 s : r_o6.02096
Grid : Message : 217.472636 s : res12.0421
Grid : Message : 218.200068 s : norm diff   0
Grid : Message : 218.645673 s : norm diff even  0
Grid : Message : 218.816561 s : norm diff odd   0
Grid : Message : MemoryManager::Init() Unified memory space
Grid : Message : MemoryManager::Init() Using cudaMallocManaged

Grid : Message : 0.139000 s : ++++++++++++++++++++++++++++++++++++++++++++++++
Grid : Message : 0.151000 s :  Testing with full communication 
Grid : Message : 0.158000 s : ++++++++++++++++++++++++++++++++++++++++++++++++
Grid : Message : 0.165000 s : Grid Layout
Grid : Message : 0.171000 s : 	Global lattice size  : 64 64 64 96 
Grid : Message : 0.181000 s : 	OpenMP threads       : 6
Grid : Message : 0.189000 s : 	MPI tasks            : 2 2 2 3 
Grid : Message : 0.177717 s : Initialising 4d RNG
Grid : Message : 0.342461 s : Intialising parallel RNG with unique string 'The 4D RNG'
Grid : Message : 0.342483 s : Seed SHA256: 49db4542db694e3b1a74bf2592a8c1b83bfebbe18401693c2609a4c3af1
Grid : Message : 0.370454 s : Initialising 5d RNG
Grid : Message : 3.174160 s : Intialising parallel RNG with unique string 'The 5D RNG'
Grid : Message : 3.174420 s : Seed SHA256: b6316f2fac44ce14111f93e0296389330b077bfd0a7b359f781c58589f8a
Grid : Message : 22.119339 s : Drawing gauge field
Grid : Message : 38.113060 s : Random gauge initialised 
Grid : Message : 38.113320 s : Applying BCs for Dirichlet Block5 [0 0 0 0 0]
Grid : Message : 38.113470 s : Applying BCs for Dirichlet Block4 [0 0 0 0]
Grid : Message : 43.906786 s : Setting up Cshift based reference 

@@ -7,16 +7,15 @@
export OMP_NUM_THREADS=6
export PAMI_IBV_ADAPTER_AFFINITY=1
export PAMI_ENABLE_STRIPING=1
export PAMI_DISABLE_IPC=1
export OPT="--comms-concurrent --comms-overlap "
#export GRID_ALLOC_NCACHE_LARGE=1
export APP="./benchmarks/Benchmark_comms_host_device  --mpi 2.2.2.3 "
jsrun --nrs 4 -a6 -g6 -c42 -dpacked -b packed:7 --latency_priority gpu-cpu --smpiargs=-gpu $APP > comms.4node

APP="./benchmarks/Benchmark_dwf_fp32 --grid 48.48.48.72 --mpi 2.2.2.3 --shm 2048 --shm-force-mpi 1 --device-mem 8000 --shm-force-mpi 1 $OPT "
jsrun --nrs 4 -a6 -g6 -c42 -dpacked -b packed:7 --latency_priority gpu-cpu --smpiargs=-gpu $APP > dwf.24.4node

APP="./benchmarks/Benchmark_dwf_fp32 --grid 64.64.64.96 --mpi 2.2.2.3 --shm 2048 --shm-force-mpi 1 --device-mem 8000 --shm-force-mpi 1 $OPT "
jsrun --nrs 4 -a6 -g6 -c42 -dpacked -b packed:7 --latency_priority gpu-cpu --smpiargs=-gpu $APP > dwf.32.4node
APP="./wrap.sh ./benchmarks/Benchmark_dwf_fp32 --grid 48.48.48.72 --mpi 2.2.2.3 --shm 1024 --device-mem 4000 --shm-force-mpi 1 $OPT "
jsrun --nrs 24 -a1 -g1 -c6 -dpacked -b packed:6 --latency_priority gpu-cpu --smpiargs="-gpu" $APP > dwf.24.4node

APP="./wrap.sh ./benchmarks/Benchmark_comms_host_device --grid 48.48.48.72 --mpi 2.2.2.3 --shm 1024 --device-mem 4000 --shm-force-mpi 1 $OPT "
jsrun  --smpiargs="-gpu"  --nrs 4 -a6 -g6 -c42 -dpacked -b packed:6  $APP > comms.24.4node
