repartition() creates partitions in memory and is used as a read operation. partitionBy() creates partitions on disk and is used as a write operation.
- How can we confirm there are multiple files (partitions) in memory while using repartition()?
- If repartition() only creates partitions in memory, why does `articles.repartition(1).write.saveAsTable("articles_table", format='orc', mode='overwrite')` create only one file? And how is this different from partitionBy()?
`partitionBy` indeed has an effect on how your files will look on disk, and indeed it is used when writing a file (it is a method of the `DataFrameWriter` class). That, however, does not mean that `repartition` has no effect at all on what will be written to disk. Let's take the following example:
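The example code block appears to have been lost in this copy; a minimal sketch of what it likely contained follows. The column name `colA`, the values, and the output paths are assumptions chosen to match the file layouts described below.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Small DataFrame with a low-cardinality column `colA` (4 distinct values).
df = spark.createDataFrame(
    [(1, "a"), (1, "b"), (2, "c"), (2, "d"), (3, "e"), (4, "f")],
    ["colA", "colB"],
)

# Method 1: partitionBy -- the writer creates one subdirectory per distinct value of colA.
df.write.partitionBy("colA").parquet("/tmp/partitionBy_example")

# Method 2: repartition(4) -- 4 in-memory partitions, each written out as its own file.
df.repartition(4).write.parquet("/tmp/repartition_example")
```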
Here, we create a simple DataFrame and write it out using 2 methods:
1) By using `partitionBy`

The output file structure on disk looks like this:
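The original listing is missing here; a plausible reconstruction is shown below (part-file names are illustrative, and the exact number of files per subdirectory depends on how the data was partitioned in memory):

```text
partitionBy_example/
├── colA=1/
│   ├── part-00000-...snappy.parquet
│   └── part-00003-...snappy.parquet
├── colA=2/
│   ├── part-00001-...snappy.parquet
│   └── part-00004-...snappy.parquet
├── colA=3/
│   └── part-00002-...snappy.parquet
└── colA=4/
    └── part-00005-...snappy.parquet
```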
We see that this created 6 "subfiles", in 4 "subdirectories". Information about the data values (like
`colA=1`) is actually stored on disk. This enables you to make big improvements in subsequent queries that need to read this file. Imagine that you would need to read all the values where `colA=1`: that would be a trivial task, since you can simply ignore the other subdirectories.
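As an illustration (reusing the hypothetical path from the sketch above), such a query lets Spark prune away every subdirectory except `colA=1`:

```python
# Only the colA=1 subdirectory needs to be scanned; the other
# subdirectories are skipped thanks to partition pruning.
df_one = spark.read.parquet("/tmp/partitionBy_example").where("colA = 1")
df_one.show()
```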
2) By using `repartition(4)`

The output file structure on disk looks like this:
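Again, the original listing is missing; it presumably looked roughly like this (file names are illustrative):

```text
repartition_example/
├── part-00000-...snappy.parquet
├── part-00001-...snappy.parquet
├── part-00002-...snappy.parquet
└── part-00003-...snappy.parquet
```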
We see that 4 "subfiles" were created and NO "subdirectories" were made. These "subfiles" actually represent your partitions inside Spark. Since you're typically working with very big data in Spark, all your data has to be partitioned in some way.
Each partition is processed by 1 task, which can be taken up by 1 core of your cluster. Once a core has picked up this task and done all the necessary processing, it writes the output to disk as one of these "subfiles". When it has finished writing that "subfile", the core is ready to take on another partition.
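This also gives you a way to confirm how many partitions exist in memory: you can ask the DataFrame directly, before writing anything. A sketch, reusing the hypothetical `df` from above:

```python
# Number of in-memory partitions; each non-empty partition becomes one output file.
print(df.rdd.getNumPartitions())                 # depends on how df was created
print(df.repartition(4).rdd.getNumPartitions())  # 4
print(df.repartition(1).rdd.getNumPartitions())  # 1 -> a single output file, as in the question
```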
When to use `partitionBy` and `repartition`

This is a bit opinionated and surely not exhaustive, but it might give you some insight into what to use. `partitionBy` and `repartition` can be used for different goals.

Use `partitionBy` if:
- subsequent queries will typically only need to read the rows for certain values of the partitioning column, and that column has a reasonably low cardinality

Use `repartition` if:
- `partitionBy` on any column would have a way too high cardinality (imagine time series data on sensors)
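For instance (a sketch with made-up column names), partitioning sensor readings by their timestamp would create one subdirectory per distinct timestamp, so controlling the number of partitions directly is a better fit:

```python
# Hypothetical sensor data: `event_time` has far too many distinct values to be
# useful as a partitionBy column, so we just control the number of output
# files with repartition instead.
readings = spark.createDataFrame(
    [("sensor-1", "2021-01-01 00:00:01", 0.7),
     ("sensor-2", "2021-01-01 00:00:02", 1.3)],
    ["sensor_id", "event_time", "value"],
)
readings.repartition(8).write.parquet("/tmp/sensor_readings")
```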