Scattering overlapping regions of an array with MPI_Scatterv

I have a 1D-array representation of a 2D array; here is a 6×6 example:

    [000000 001230 045670 089010 023450 000000]
      =>
    ------
    |0123|
    |4567|
    |8901|
    |2345|
    ------

The typical size is 514 × 514 elements (512 plus 2 halo cells). I have to distribute the data among four processors:

    Rank 0:    Rank 1:    Rank 2:    Rank 3:
    [----]     [----]     [|456]     [567|]
    [|012]     [123|]     [|890]     [901|]
    [|456]     [567|]     [|234]     [345|]
    [|890]     [901|]     [----]     [----]

That is, the rightmost columns of the data going to rank 0 must also appear in the leftmost columns of the data going to rank 1, and likewise for every other pair of neighbouring ranks.

I know how to create a datatype for a 4×4 block, but I don't know how to send the last elements of one block again as the beginning of the next block to a different rank.

How can I distribute the data with this overlap?
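For reference, the 4×4 block type I mean comes from MPI_Type_create_subarray. The sketch below is only an illustration with the 6×6 example sizes hard-coded (the helper name make_block_type is made up); the block for the rank at Cartesian coordinates (i, j) would then be anchored at row 2*i, column 2*j of the global array, which is exactly where the overlap comes from:

    #include <mpi.h>

    /* Illustration only: a 4x4 block out of the 6x6 global array (sizes
       hard-coded for the example above; the real case would be a 258x258
       block out of the 514x514 array).                                    */
    void make_block_type(MPI_Datatype *block)
    {
        int sizes[2]    = {6, 6};   /* whole array, including the halo ring */
        int subsizes[2] = {4, 4};   /* one rank's block                     */
        int starts[2]   = {0, 0};   /* the per-rank offset is supplied
                                       elsewhere, e.g. by the address
                                       passed to the send                   */

        MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C,
                                 MPI_CHAR, block);
        MPI_Type_commit(block);
    }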

=== EDIT ===

After using Jonathon's approach …

I'm trying to do the same thing with a (2D) char array, but when the pieces are collected from the processes/ranks I get "garbage". I've changed the types and everything else I could think of, but I can't figure out where the problem is.

    void distributeBySend_c(unsigned char **global, const int globalrows, const int globalcols,
                            const int localrows, const int localcols,
                            const int rank, const int size,
                            MPI_Comm cartcomm, const int dims[2], const int coords[2])
    {
        MPI_Request reqs[dims[0]*dims[1]];
        const int tag = 1;

        if (rank == 0) {
            /* datatype describing one localrows x localcols block of the global array */
            MPI_Datatype block;
            int starts[2]   = {0,0};
            int subsizes[2] = {localrows, localcols};
            int sizes[2]    = {globalrows, globalcols};
            MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, MPI_CHAR, &block);
            MPI_Type_commit(&block);

            /* send one (overlapping) block to every rank in the Cartesian grid */
            int reqno=0;
            for (int i=0; i<dims[0]; i++) {
                int startrow = i*datasize;
                int destcoords[2];
                destcoords[0] = i;
                for (int j=0; j<dims[1]; j++) {
                    int startcol = j*datasize;
                    destcoords[1] = j;

                    int dest;
                    MPI_Cart_rank(cartcomm, destcoords, &dest);
                    MPI_Isend(&(global[startrow][startcol]), 1, block, dest, tag,
                              cartcomm, &reqs[reqno++]);
                }
            }
        }

        /* every rank (including 0) receives its own block */
        unsigned char **local = alloc2dImage(localrows, localcols);
        MPI_Recv(&(local[0][0]), localrows*localcols, MPI_CHAR, 0, tag,
                 cartcomm, MPI_STATUS_IGNORE);

        if (rank == 0)
            MPI_Waitall(dims[0]*dims[1], reqs, MPI_STATUS_IGNORE);

        eachprintarr_c(local, localrows, localcols, rank, size);
    }
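Note that the Recv above, with `&(local[0][0])` and a count of `localrows*localcols`, only works if `alloc2dImage` allocates the whole image as one contiguous block with row pointers into it. A minimal sketch of such an allocator (the body here is an assumption, mirroring the `alloc2d` helper from the answer below, not necessarily the real `alloc2dImage`):

    #include <stdlib.h>

    /* Contiguous 2D allocation: one block for all n*m pixels, plus an array
       of row pointers into it. (Sketch only; assumed body for alloc2dImage.) */
    unsigned char **alloc2dImage(const int n, const int m)
    {
        unsigned char *data  = malloc(n * m * sizeof(unsigned char));
        unsigned char **rows = malloc(n * sizeof(unsigned char *));
        for (int i = 0; i < n; i++)
            rows[i] = &(data[i * m]);
        return rows;
    }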

The output I get is:

    ---
    Rank 0: ? ? ? ?
    Rank 0:   ' V
    Rank 0:   ' V
    Rank 0:   ' V
    ---
    Rank 1: ? ? ? ?
    Rank 1:       %
    Rank 1:       %
    Rank 1:       %
    ---
    Rank 2: ? + + +
    Rank 2:
    Rank 2:
    Rank 2:
    ---
    Rank 3: + + + ?
    Rank 3:
    Rank 3:
    Rank 3:

Unfortunately, you can't do this with MPI_Scatterv, because the data being sent overlaps.

Your best bet is either to do it by hand with sends and receives, which is easy but doesn't scale well, or to scatter only the "interior" data and have the processors do an ordinary guard-cell/halo exchange to get the overlapping data they need (a sketch of that exchange step is given after the sample output below). Coded up both ways:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "mpi.h"

    char **alloc2d(const int n, const int m);
    void free2d(char **p);
    void printarr(const char **const arr, const int n, const int m, const char *pref);
    void eachprintarr(const char **const arr, const int n, const int m, const int myrank, const int size);

    const int datasize = 2;
    const int halosize = 1;

    /* Option 1: rank 0 Isends each (overlapping) block directly to its owner. */
    void distributeBySend(const char **const global, const int globalrows, const int globalcols,
                          const int localrows, const int localcols,
                          const int rank, const int size,
                          MPI_Comm cartcomm, const int dims[2], const int coords[2]) {
        MPI_Request reqs[dims[0]*dims[1]];
        const int tag = 1;

        if (rank == 0) {
            MPI_Datatype block;
            int starts[2]   = {0,0};
            int subsizes[2] = {localrows, localcols};
            int sizes[2]    = {globalrows, globalcols};
            MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, MPI_CHAR, &block);
            MPI_Type_commit(&block);

            int reqno=0;
            for (int i=0; i<dims[0]; i++) {
                int startrow = i*datasize;
                int destcoords[2];
                destcoords[0] = i;
                for (int j=0; j<dims[1]; j++) {
                    int startcol = j*datasize;
                    destcoords[1] = j;

                    int dest;
                    MPI_Cart_rank(cartcomm, destcoords, &dest);
                    MPI_Isend(&(global[startrow][startcol]), 1, block, dest, tag,
                              cartcomm, &reqs[reqno++]);
                }
            }
        }

        char **local = alloc2d(localrows, localcols);
        MPI_Recv(&(local[0][0]), localrows*localcols, MPI_CHAR, 0, tag,
                 cartcomm, MPI_STATUS_IGNORE);

        if (rank == 0)
            MPI_Waitall(dims[0]*dims[1], reqs, MPI_STATUSES_IGNORE);

        eachprintarr((const char **const)local, localrows, localcols, rank, size);
    }

    /* Option 2, scatterAndExchange(): scatter only the non-overlapping interior
       blocks with MPI_Scatterv, then fill the one-cell overlap with a halo
       exchange between neighbouring ranks (definition not reproduced here). */

    int main(int argc, char **argv) {
        /* ... MPI_Init, creation of the 2x2 Cartesian communicator (cartcomm,
           dims, coords), and the global/local sizes built from datasize and
           halosize ... */

        char **global = NULL;
        if (rank == 0) {
            global = alloc2d(globalrows, globalcols);
            /* '.' everywhere, letters 'a','b','c',... in the interior */
            char val = 'a';
            for (int i=0; i<globalrows; i++)
                for (int j=0; j<globalcols; j++)
                    global[i][j] = '.';
            for (int i=halosize; i<globalrows-halosize; i++)
                for (int j=halosize; j<globalcols-halosize; j++) {
                    global[i][j] = val;
                    val++;
                    if (val > 'z') val = 'a';
                }
            printf("Global array: ---\n");
            printarr((const char ** const)global, globalrows, globalcols, "");
        }

        if (argv[1] && !strcmp(argv[1],"sendrecv")) {
            if (rank == 0) printf("---\nDistributing with Send/Recv:---\n");
            distributeBySend((const char **const) global, globalrows, globalcols,
                             localrows, localcols, rank, size, cartcomm, dims, coords);
        } else {
            if (rank == 0) printf("---\nDistributing with Scatter/exchange:---\n");
            scatterAndExchange((const char **const)global, globalrows, globalcols,
                               localrows, localcols, rank, size, cartcomm, dims, coords);
        }

        MPI_Finalize();
        return 0;
    }

    char **alloc2d(const int n, const int m) {
        char *data  = malloc( n*m * sizeof(int) );
        char **ptrs = malloc( n*sizeof(int *) );
        for (int i=0; i<n; i++)
            ptrs[i] = &(data[i*m]);
        return ptrs;
    }

    /* ... free2d(), printarr(), and eachprintarr() ... */

Running this gives:

    $ mpirun -np 4 ./scatter sendrecv
    Global array: ---
    ......
    .abcd.
    .efgh.
    .ijkl.
    .mnop.
    ......
    ---
    Distributing with Send/Recv:---
    ---
    Rank 0: ....
    Rank 0: .abc
    Rank 0: .efg
    Rank 0: .ijk
    ---
    Rank 1: ....
    Rank 1: bcd.
    Rank 1: fgh.
    Rank 1: jkl.
    ---
    Rank 2: .efg
    Rank 2: .ijk
    Rank 2: .mno
    Rank 2: ....
    ---
    Rank 3: fgh.
    Rank 3: jkl.
    Rank 3: nop.
    Rank 3: ....

    $ mpirun -np 4 ./scatter scatter
    Global array: ---
    ......
    .abcd.
    .efgh.
    .ijkl.
    .mnop.
    ......
    ---
    Distributing with Scatter/exchange:---
    ---
    Rank 0: ....
    Rank 0: .abc
    Rank 0: .efg
    Rank 0: .ijk
    ---
    Rank 1: ....
    Rank 1: bcd.
    Rank 1: fgh.
    Rank 1: jkl.
    ---
    Rank 2: .efg
    Rank 2: .ijk
    Rank 2: .mno
    Rank 2: ....
    ---
    Rank 3: fgh.
    Rank 3: jkl.
    Rank 3: nop.
    Rank 3: ....
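To make the second option a bit more concrete: after scattering only the non-overlapping interior blocks (which MPI_Scatterv handles without trouble), each rank fills its overlap region by trading boundary columns and rows with its neighbours, exactly like a normal guard-cell update. The sketch below shows just the left/right column exchange, assuming the local block is allocated contiguously as above; the names (`exchange_lr`, `local`, the tag) are illustrative and not taken from the listing, and the up/down exchange works the same way with a contiguous row instead of the vector type.

    #include "mpi.h"

    /* Illustrative halo exchange of one column with the left/right neighbours
       in a 2D Cartesian communicator (names here are hypothetical).          */
    void exchange_lr(char **local, int localrows, int localcols, MPI_Comm cartcomm)
    {
        const int tag = 2;
        int left, right;
        MPI_Datatype coltype;

        /* neighbours along the column direction (dimension 1);
           off-grid neighbours come back as MPI_PROC_NULL        */
        MPI_Cart_shift(cartcomm, 1, 1, &left, &right);

        /* one column of the local block: localrows elements, stride localcols */
        MPI_Type_vector(localrows, 1, localcols, MPI_CHAR, &coltype);
        MPI_Type_commit(&coltype);

        /* first interior column goes to the left neighbour;
           our right halo column arrives from the right neighbour */
        MPI_Sendrecv(&(local[0][1]),           1, coltype, left,  tag,
                     &(local[0][localcols-1]), 1, coltype, right, tag,
                     cartcomm, MPI_STATUS_IGNORE);

        /* last interior column goes to the right neighbour;
           our left halo column arrives from the left neighbour */
        MPI_Sendrecv(&(local[0][localcols-2]), 1, coltype, right, tag,
                     &(local[0][0]),           1, coltype, left,  tag,
                     cartcomm, MPI_STATUS_IGNORE);

        MPI_Type_free(&coltype);
    }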