How to process data larger than memory in R locally?


I have 83 GB of data stored as CSVs in AWS S3. Usually I would process it with a Spark implementation in R (sparklyr) on AWS EMR, granting the job enough memory.
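For context, my usual workflow looks roughly like this (a minimal sketch; the bucket path and column names are placeholders, and the connection details depend on the EMR setup):

```r
library(sparklyr)
library(dplyr)

# Connect to Spark on the EMR cluster (EMR typically runs Spark on YARN)
sc <- spark_connect(master = "yarn")

# Register the S3 CSVs as a Spark table; the data stays in the cluster,
# nothing is pulled into R's memory at this point
events <- spark_read_csv(sc, name = "events",
                         path = "s3a://my-bucket/data/*.csv")

# dplyr verbs are translated to Spark SQL and executed on the cluster;
# only the (small) aggregated result is collected into R
result <- events %>%
  group_by(some_column) %>%
  summarise(n = n()) %>%
  collect()

spark_disconnect(sc)
```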

However, I am not able to access my AWS EMR account (for the purpose of this question, assume I will not be able to in the near future). What are some good alternatives in R for doing the same processing locally without exhausting memory, even if it takes longer?
