Using Numba to run a loop on the GPU instead of the CPU


I have a list with multiple identities, and each identity consists of multiple images. Retrieving the positive images from the JSON list works fine. After that I build the negative pairs by taking the cross product of the images from different identities and appending each pair to a negatives array. When I compute this cross product, my system hangs completely, even though I have 16 GB of RAM and a GPU.

Code

import itertools

import pandas as pd

# pair every image of one identity with every image of every other identity
for i in range(0, len(idendities) - 1):
    for j in range(i + 1, len(idendities)):
        cross_product = itertools.product(samples_list[i], samples_list[j])
        cross_product = list(cross_product)

        for cross_sample in cross_product:
            negative = []
            negative.append(cross_sample[0])
            negative.append(cross_sample[1])
            negatives.append(negative)

negatives = pd.DataFrame(negatives, columns=["file_x", "file_y"])
negatives["decision"] = "No"

# keep as many negative pairs as there are positive pairs
negatives = negatives.sample(positives.shape[0])

1 Answer

Phoenix:

Numba currently officially supports CUDA-enabled (NVIDIA) GPUs; AMD support is still experimental. The official documentation and instructions for using Numba on GPUs are here.

Depending on what exactly you want to do, there are various details outlined in the page linked above, but if you want to make your code run on the GPU in a simple way you can do something like this:

from numba import cuda

# device=True marks foo as a GPU device function, callable only from other GPU code
@cuda.jit(device=True)
def foo(a: int, b: int, c: int):
    return a * b + c  # this can be whatever you want that is Numba compatible
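
A function decorated with device=True can only be called from other GPU code, not directly from the host, so it is usually wrapped in a @cuda.jit kernel that is launched over an array. Below is a minimal sketch of that pattern; the kernel name foo_kernel, the array sizes, and the launch configuration are illustrative assumptions, not part of the original answer:

import numpy as np
from numba import cuda

@cuda.jit(device=True)
def foo(a, b, c):
    return a * b + c

@cuda.jit
def foo_kernel(a, b, c, out):
    # each GPU thread computes one element of the output
    i = cuda.grid(1)
    if i < out.size:
        out[i] = foo(a[i], b[i], c[i])

n = 1024
a = np.arange(n, dtype=np.int32)
b = np.arange(n, dtype=np.int32)
c = np.arange(n, dtype=np.int32)
out = np.zeros(n, dtype=np.int32)

# launch enough blocks of 128 threads to cover all n elements
threads_per_block = 128
blocks = (n + threads_per_block - 1) // threads_per_block
foo_kernel[blocks, threads_per_block](a, b, c, out)

When plain NumPy arrays are passed like this, Numba transfers them to the device for the launch and back afterwards, so out should end up holding a * b + c for every element; for repeated launches you would normally move the data to the device once with cuda.to_device to avoid those copies.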