[out-of-memory] Spark java.lang.OutOfMemoryError: Java heap space

I suffered from this issue a lot when using dynamic resource allocation. I had assumed it would allocate my cluster's resources to best fit the application.

But the truth is that dynamic resource allocation doesn't set the driver memory; it leaves it at its default value of 1G.

I resolved this issue by setting spark.driver.memory to a value that suits my driver machine's memory (for 32 GB of RAM I set it to 18G).

You can set it with the spark-submit command as follows:

spark-submit --conf spark.driver.memory=18g
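
For context, a fuller invocation might look like the sketch below; the class name and jar path are placeholders for your own application:

spark-submit \
  --conf spark.driver.memory=18g \
  --class com.example.MyApp \
  path/to/my-app.jar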

Very important note: this property will not be taken into consideration if you set it from code, according to the Spark documentation on Dynamically Loading Spark Properties:

Spark properties mainly can be divided into two kinds: one is related to deploy, like “spark.driver.memory”, “spark.executor.instances”, this kind of properties may not be affected when setting programmatically through SparkConf in runtime, or the behavior is depending on which cluster manager and deploy mode you choose, so it would be suggested to set through configuration file or spark-submit command line options; another is mainly related to Spark runtime control, like “spark.task.maxFailures”, this kind of properties can be set in either way.
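
If you prefer the configuration-file route the documentation mentions, the same setting can go into conf/spark-defaults.conf; the value here simply mirrors the spark-submit example above:

spark.driver.memory    18g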