Can I predict FFTW's computation cycles based on 2D array size?

I'm dealing with large [n, m] datasets and computing their transforms with FFTW's fftw_plan_dft_2d. The dimensions vary from measurement to measurement, and while some datasets transform in milliseconds, others take minutes.

Since my datasets are large, dropping a row or a column to make them more FFTW-friendly doesn't affect me significantly.
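
For reference, the FFTW manual says the sizes it handles fastest are of the form 2^a 3^b 5^c 7^d 11^e 13^f with e + f either 0 or 1. Hand-rolling a test along those lines is easy enough; the sketch below (the helper names are my own, not FFTW's) trims a dimension down to the nearest such size:

```c
#include <stdio.h>

/* Returns 1 if n factors as 2^a * 3^b * 5^c * 7^d * 11^e * 13^f with
 * e + f <= 1 -- the family of sizes the FFTW manual calls fastest. */
static int is_fftw_friendly(int n)
{
    static const int primes[] = { 2, 3, 5, 7, 11, 13 };
    int big = 0;                        /* factors of 11 or 13 used so far */
    for (int i = 0; i < 6; i++) {
        while (n % primes[i] == 0) {
            n /= primes[i];
            if (primes[i] >= 11 && ++big > 1)
                return 0;               /* more than one factor of 11/13 */
        }
    }
    return n == 1;                      /* any leftover prime is too large */
}

/* Largest friendly size <= n, i.e. how far to trim a dimension. */
static int trim_to_friendly(int n)
{
    while (n > 1 && !is_fftw_friendly(n))
        n--;
    return n;
}

int main(void)
{
    printf("%d -> %d\n", 1000003, trim_to_friendly(1000003)); /* awkward size */
    printf("%d -> %d\n", 1024, trim_to_friendly(1024));       /* already 2^10 */
    return 0;
}
```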

Is there a way to calculate a measure of "FFTW-friendliness" via FFTW function calls, rather than hand-rolling a factor test like the one above? Perhaps some way of determining the number and sizes of the prime factors of each dimension and deriving a kind of "cost" value from that information?
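
Ideally I'm imagining something like the sketch below: build a throwaway plan with FFTW_ESTIMATE (which should be cheap, since no trial transforms are run) and read off a predicted cost. As far as I can tell from the manual, fftw_flops() reports a plan's predicted floating-point operation counts, and FFTW 3.3+ also lists fftw_cost() / fftw_estimate_cost() as the planner's internal cost measures, which may be the closer fit; whether any of these track wall-clock time well enough for my purpose is exactly what I'm unsure about.

```c
#include <stdio.h>
#include <fftw3.h>

/* Probe the predicted cost of an n-by-m complex DFT without running it:
 * plan with FFTW_ESTIMATE (no measurement runs) and query fftw_flops(). */
static double probe_cost(int n, int m)
{
    fftw_complex *buf = fftw_malloc(sizeof(fftw_complex) * (size_t)n * m);
    fftw_plan p = fftw_plan_dft_2d(n, m, buf, buf,
                                   FFTW_FORWARD, FFTW_ESTIMATE);

    double add, mul, fma;
    fftw_flops(p, &add, &mul, &fma);    /* predicted operation counts */

    fftw_destroy_plan(p);
    fftw_free(buf);
    return add + mul + 2.0 * fma;       /* crude scalar "cost" */
}

int main(void)
{
    printf("1024 x 1024: %.3g flops\n", probe_cost(1024, 1024));
    printf("1021 x 1021: %.3g flops\n", probe_cost(1021, 1021)); /* prime dims */
    return 0;
}
```

(Compiled with -lfftw3 -lm.) Is this a sound approach, or is there a more direct measure exposed by the library?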
