Based on my understanding, when YARN allocates containers for a Spark application's resource requests, it automatically rounds the container size up to a multiple of 'yarn.scheduler.minimum-allocation-mb'.
For example:
yarn.scheduler.minimum-allocation-mb: 2 GB
spark.executor.memory: 4 GB
spark.yarn.executor.memoryOverhead: 384 MB
The overall Spark executor memory asked of YARN is [4 GB + 384 MB] = 4480 MB (~4.375 GB).
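To make the arithmetic explicit, here is a minimal sketch in Scala using the example values above (the object name is just illustrative, not anything from Spark's code base):

```scala
// Minimal sketch of the per-executor container ask, using the assumed
// example values above (illustrative arithmetic, not Spark source code).
object ExecutorAskSketch {
  def main(args: Array[String]): Unit = {
    val executorMemoryMb = 4 * 1024 // spark.executor.memory = 4 GB
    val memoryOverheadMb = 384      // spark.yarn.executor.memoryOverhead = 384 MB
    val containerAskMb   = executorMemoryMb + memoryOverheadMb
    println(s"Container ask = $containerAskMb MB") // prints: Container ask = 4480 MB
  }
}
```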
Spark places a request to YARN for a 4480 MB container; however, YARN allocates containers in multiples of 2 GB (yarn.scheduler.minimum-allocation-mb), so in this case it returns a container of 6 GB (rounding 4480 MB up to 6144 MB). Hence the Spark executor JVM is launched inside a 6 GB YARN container.
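As I understand it, the rounding is simply a ceiling to the next multiple of yarn.scheduler.minimum-allocation-mb. A rough sketch of that behaviour (my assumption of the logic, not YARN's actual implementation):

```scala
// Rough sketch of rounding a memory ask up to the next multiple of
// yarn.scheduler.minimum-allocation-mb (assumed behaviour, not YARN source code).
object YarnRoundUpSketch {
  def roundUp(askMb: Long, minAllocationMb: Long): Long =
    ((askMb + minAllocationMb - 1) / minAllocationMb) * minAllocationMb

  def main(args: Array[String]): Unit = {
    val grantedMb = roundUp(askMb = 4480L, minAllocationMb = 2048L)
    println(s"Granted container = $grantedMb MB") // prints: Granted container = 6144 MB
  }
}
```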
So the original executor memory ask was 4480 MB (~4.375 GB), whereas the memory YARN allocated is 6 GB (6144 MB).
Increment in container size = 6144 MB - 4480 MB = 1664 MB (~1.6 GB)
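The same numbers in code form, again just illustrative arithmetic with the assumed example values:

```scala
// Difference between what Spark asked for and what YARN granted,
// continuing the example numbers (assumed values).
object ContainerIncrementSketch {
  def main(args: Array[String]): Unit = {
    val askedMb   = 4480L // 4096 MB executor memory + 384 MB overhead
    val grantedMb = 6144L // rounded up to a multiple of 2048 MB
    println(s"Extra memory in container = ${grantedMb - askedMb} MB") // 1664 MB (~1.6 GB)
  }
}
```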
My question is: as I understand it, roughly 1.6 GB is added to each Spark executor's overall memory, which comprises executor memory and overhead memory. Which part of the overall executor memory is increased by this 1.6 GB: spark.yarn.executor.memoryOverhead or spark.executor.memory? And how does Spark use the extra memory it receives due to the YARN round-up?