I am running the following for loop over the gwr.basic function from the GWmodel package in R. What I need to do is collect the mean of the estimated parameter for each given bandwidth.
The code looks like this:
library(GWmodel)
data("DubVoter")  # loads the Dub.voter dataset

LARentMean <- list()
for (i in 20:21) {
  gwr.res <- gwr.basic(
    GenEl2004 ~ DiffAdd + LARent + SC1 + Unempl + LowEduc +
      Age18_24 + Age25_44 + Age45_64,
    data = Dub.voter, bw = i, kernel = "bisquare",
    adaptive = TRUE, F123.test = TRUE
  )
  a <- mean(gwr.res$SDF$LARent)
  LARentMean[[i]] <- a
}
outcome <- unlist(LARentMean)  # drops the NULL entries for the unused indices 1:19
> outcome
[1] -0.1117668 -0.1099969
However, it is terribly slow to return the results, and I need a much wider range, such as 20:200. Is there a way to speed the process up? If not, how can I use a stepped range, say 20 to 200 in steps of 5, to reduce the number of operations?
I am a Python user new to R. I have read on SO that R is well known for being slow at for loops and that there are more efficient alternatives. More clarity on this point would be welcome.
I got the same impression as @musically_ut. The for loop and the traditional for-vs.-apply debate are unlikely to help you here. Try parallelization if you have more than one core. There are several packages for this, such as parallel or snowfall. Which package is ultimately the best and fastest depends on your machine and operating system.

Best does not always equal fastest here: code that works cross-platform can be worth more than a bit of extra performance, and transparency and ease of use can outweigh maximum speed. That being said, I like the standard solution a lot and would recommend parallel, which ships with R and works on Windows, OS X, and Linux.

EDIT: Here is a fully reproducible example along the lines of the OP's code.
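It parallelizes the loop with parSapply() from parallel. One worker per core (detectCores()) and the stepped bandwidth grid seq(20, 200, by = 5) are illustrative choices; adjust them to your machine and problem.

library(GWmodel)
library(parallel)

# Stepped bandwidth range: 20 to 200 in steps of 5
bws <- seq(20, 200, by = 5)

# Fit the model for one bandwidth and return the mean LARent estimate
get_LARent_mean <- function(bw) {
  gwr.res <- gwr.basic(
    GenEl2004 ~ DiffAdd + LARent + SC1 + Unempl + LowEduc +
      Age18_24 + Age25_44 + Age45_64,
    data = Dub.voter, bw = bw, kernel = "bisquare",
    adaptive = TRUE, F123.test = TRUE
  )
  mean(gwr.res$SDF$LARent)
}

# One worker per core; each worker loads GWmodel and the data itself
cl <- makeCluster(detectCores())
invisible(clusterEvalQ(cl, {
  library(GWmodel)
  data("DubVoter")
}))

outcome <- parSapply(cl, bws, get_LARent_mean)
stopCluster(cl)

outcome

Note that makeCluster() creates a PSOCK cluster by default, which works on all three platforms; the workers are separate R processes, which is why each one has to load the package and the data on its own.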