Parallel collapse + reduction not working


I am trying to parallelize two nested loops, and the collapse clause fails.

Hey there, I am trying to parallelize these two nested loops in order to calculate two integrals (int_coulomb and int_overlap). My problem appears when I apply the collapse clause: the parallel code no longer reproduces the result of the serial run.

Furthermore, I added the possibility of launching the Fortran program (compiled with OpenMP) with a selected number of threads. Even if I run ./program input.inp -omp 1, the code still gives the wrong result.
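
In case it matters, the -omp value is applied through the OpenMP runtime, roughly like this (a simplified sketch, not my exact parsing code; the argument position and variable names here are assumptions):

     program set_threads
        use omp_lib
        implicit none
        character(len=32) :: arg
        integer :: n_threads, ierr
!       Assumption for illustration: the count is the argument right
!       after "-omp", i.e. the 3rd argument in "./program input.inp -omp N"
        call get_command_argument(3, arg)
        read(arg, *, iostat=ierr) n_threads
        if (ierr == 0 .and. n_threads > 0) call omp_set_num_threads(n_threads)
        print *, 'max threads now:', omp_get_max_threads()
     end program set_threads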

On the other hand, when I remove the collapse clause, the code is slower than the serial version but gives the correct result.

This is my code:

!$omp parallel do private(i,j,r,dist,invdst,sf,sf0,screen_pot) &
!$omp collapse(2) & 
!$omp reduction(+:int_coulomb,int_overlap) 
!    aceptor
     do i = 1, aceptor%n_points_reduced
        do j = 1, donor%n_points_reduced

           r(1) = (aceptor%xyz(1,i)-donor%xyz(1,j))
           r(2) = (aceptor%xyz(2,i)-donor%xyz(2,j))
           r(3) = (aceptor%xyz(3,i)-donor%xyz(3,j))
!
           dist = dsqrt(DOT_PRODUCT(r,r))
!
!          Skip when grid points are coincident to avoid instabilities
           if (dist.le.1.0e-14) then
              int_coulomb = int_coulomb + zero
              int_overlap = int_overlap + zero
           else
              invdst = one/dist
!
!             Screening function 
              sf  = dist / QMscrnFact

              sf0        = erf(sf)
              screen_pot = sf0
!   
!             Integrate charges as rho_aceptor * rho_donor * (1/dist) * screening
!               --> the density has already been weighted by the cube volume
              int_coulomb = int_coulomb +&
                            aceptor%rho_reduced(i) * donor%rho_reduced(j) * invdst * screen_pot
!   
!             Aceptor-donor overlap 
              int_overlap = int_overlap +&
                            aceptor%rho_reduced(i) * donor%rho_reduced(j)
!   
           endif      
!
        enddo
     enddo
!$omp end parallel do
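
For anyone who wants to check whether the clause combination itself misbehaves with a given compiler and flags, here is a minimal self-contained sketch (made-up data and sizes, just for illustration) exercising collapse(2) together with a two-scalar reduction:

     program test_collapse_reduction
        implicit none
        integer, parameter :: n = 1000, m = 800
        integer :: i, j
        double precision :: s1, s2

        s1 = 0.d0
        s2 = 0.d0
!$omp parallel do private(i,j) collapse(2) reduction(+:s1,s2)
        do i = 1, n
           do j = 1, m
              s1 = s1 + dble(i)*dble(j)   ! stands in for the Coulomb term
              s2 = s2 + 1.d0              ! stands in for the overlap term
           enddo
        enddo
!$omp end parallel do

!       Closed-form serial references: sum(i)*sum(j) and n*m
        print *, 's1 =', s1, ' expected:', (dble(n)*(n+1)/2.d0)*(dble(m)*(m+1)/2.d0)
        print *, 's2 =', s2, ' expected:', dble(n)*dble(m)
     end program test_collapse_reduction

If this small program already disagrees with its expected values when compiled the same way as my code, the problem would be in the toolchain rather than in the loop body above.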

Any ideas?
