I have some code where I am trying to communicate data across processors. I have a nested list called neighbors, in which each element represents a subsection of the data and contains the integer ranks of the MPI processes assigned to it. For example, dividing the data into 3 subsections, each with 1 MPI process, gives neighbors=[[0],[1],[2]].
The subsections of the problem usually work independently, but every so often I need to communicate pieces of the data between the subsections. So each processor loops through its neighboring subsections of data and the processors assigned to them and makes a nonblocking send (isend) and a blocking receive (recv). The code seems to be working, except when I try to append the received data to one large list.
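For reference, the send side looks roughly like this (a simplified sketch rather than my exact code; bank here is a placeholder for the NumPy array being sent, and the skip-self check is just illustrative):

from mpi4py import MPI

# simplified send loop: push this rank's piece of data to every processor of every neighbor
for name in range(len(neighbors)):
    for dest in neighbors[name]:
        if dest != mcdc["mpi_rank"]:
            # nonblocking send; the returned request is not waited on here
            req = MPI.COMM_WORLD.isend(bank, dest=dest)
            print('rank ', mcdc["mpi_rank"], 'sent message to ', dest)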
Here is the blocking receive section of the code:
from mpi4py import MPI

bankr = []
# for each neighbor (subsection of data)
for name in range(len(neighbors)):
    # for each processor assigned to that neighbor
    for source in neighbors[name]:
        # blocking probe for a message from that processor
        if MPI.COMM_WORLD.probe(source=source):
            received = MPI.COMM_WORLD.recv(source=source)
            print('rank ', mcdc["mpi_rank"], "received shape ", received.shape)
            # bankr.extend(received)
When the last line bankr.extend(received) is left out, everything seems to run as it should. For example, if I split the computation into 3 subsections, each with one processor assigned, the neighbors variable looks like neighbors=[[0],[1],[2]] and I get the output:
rank 0 sent message to 1
rank 0 received shape (8,)
rank 1 sent message to 2
rank 1 sent message to 0
rank 1 received shape (49,)
rank 1 received shape (53,)
rank 2 sent message to 1
rank 2 received shape (9,)
This is what I would expect. But if I uncomment the last line, bankr.extend(received), where I'm simply trying to append all of the received data to one large list, the code freezes after printing:
rank 0 sent message to 1
rank 0 received shape (8,)
Is there something obvious I'm doing wrong?