Atomicity of MPI_Accumulate [Open MPI]

I have been trying to set up a way to send data between processes from a vector (call it syncer). This vector contains messages in the format [destination_process, source_process, data, offset], where offset is the index into the destination process's array at which the data should be placed.
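
For concreteness, one update gets queued roughly like this (a simplified sketch; the literal values 2, 42 and 7 are just placeholders, and it assumes <mpi.h>, <vector> and using namespace std are in place, as in the rest of the code):

    // one queued update, following the [destination_process, source_process, data, offset] layout
    vector<vector<int>> syncer ;
    int my_rank ;
    MPI_Comm_rank (MPI_COMM_WORLD, &my_rank) ;
    // "add 42 at index 7 of the array owned by rank 2"
    syncer.push_back ({2, my_rank, 42, 7}) ;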

Each process keeps the updates it wants to send in its own syncer.

I call a method synchronize(vector<vector<int>> &syncer) at the end of a pulse. The code looks like this:

    MPI_Win_fence (0, arr) ;
    log_message ("rma cycle has started") ;

    for (auto &sync_this : syncer) {

        int from_process = sync_this[0] ;
        int to_process = sync_this[1] ;
        int local_offset = sync_this[2] ;
        int value = sync_this[3] ;
        log_message ("from_process = " + to_string (from_process) + " to_process = " + to_string (to_process) + " offset = " + to_string (local_offset) + " value = " + to_string (value)) ;

        // accumulate `value` into element `local_offset` of the window on rank `to_process`
        int result = MPI_Accumulate (&value, 1, MPI_INT, to_process, local_offset, 1, MPI_INT, MPI_SUM, arr) ;
        log_message ("result of accumulation " + to_string (value) + " == " + to_string (result)) ;
    }

    MPI_Win_fence (0, arr) ;
    MPI_Barrier (MPI_COMM_WORLD) ;
}
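
For completeness, arr is an MPI_Win exposing a local int array on every rank, created roughly along these lines (simplified; ARRAY_SIZE stands in for the real size, and the displacement unit is one int so offsets are in elements):

    vector<int> local_data (ARRAY_SIZE, 0) ;
    MPI_Win arr ;
    MPI_Win_create (local_data.data (), local_data.size () * sizeof (int),
                    sizeof (int),          // displacement unit = one int
                    MPI_INFO_NULL, MPI_COMM_WORLD, &arr) ;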

The updates do land at the appropriate indices, but the accumulated values are very large. The variable value itself holds the correct values.

I have read in the documentation and in several places that MPI_Accumulate is the closest thing to an atomic RMA operation, so I expected this to work.
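
For reference, as far as I can tell from the man page, the prototype I am working against is:

    int MPI_Accumulate (const void *origin_addr, int origin_count, MPI_Datatype origin_datatype,
                        int target_rank, MPI_Aint target_disp, int target_count,
                        MPI_Datatype target_datatype, MPI_Op op, MPI_Win win) ;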

If I put a fence inside the for loop, we run into a deadlock, because the syncer vectors have different lengths across processes, so not all processes that own the MPI_Win participate in that fence the same number of times.
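
To illustrate, the variant that deadlocks looks roughly like this (a sketch; MPI_Win_fence is collective over the window's group, and ranks with shorter or empty syncer vectors call it fewer times):

    for (auto &sync_this : syncer) {
        int to_process = sync_this[1] ;
        int local_offset = sync_this[2] ;
        int value = sync_this[3] ;
        MPI_Accumulate (&value, 1, MPI_INT, to_process, local_offset, 1, MPI_INT, MPI_SUM, arr) ;
        MPI_Win_fence (0, arr) ;   // collective: mismatched call counts across ranks -> deadlock
    }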

What is the common practice in this scenario? How is MPI_Accumulate supposed to behave, and if MPI_Accumulate won't cut it, what is an alternative?

Thank you very much for your time.

Also, if you have resources for studying and learning MPI and distributed systems, please share them; MPI does not seem like a hot topic these days.
