I have 83 GB of data stored as CSVs in AWS S3. Normally I would process this with Spark from R (sparklyr) on an AWS EMR cluster, provisioning enough memory for the job.
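For context, my usual sparklyr workflow looks roughly like this (the bucket path and column name are placeholders, and this assumes Spark is running under YARN on the EMR cluster):

```r
library(sparklyr)
library(dplyr)

# Connect to the Spark cluster on EMR (YARN resource manager)
sc <- spark_connect(master = "yarn")

# Lazily register the CSVs straight from S3; memory = FALSE avoids caching
# the full dataset. The path is a placeholder.
df <- spark_read_csv(sc, name = "mydata",
                     path = "s3://my-bucket/path/*.csv",
                     memory = FALSE)

# Example aggregation: executed by Spark, only the result is pulled into R
df %>% count(some_column) %>% collect()

spark_disconnect(sc)
```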
However, I am currently unable to access my AWS EMR account (for the purpose of this question, assume I will not be able to in the near future). What are some good alternatives in R for doing the same processing without running out of memory, even if the job takes much longer?