[apache-spark] Is it possible to get the current spark context settings in PySpark?

Suppose I want to increase the driver memory at runtime using a SparkSession:

s2 = SparkSession.builder.config("spark.driver.memory", "29g").getOrCreate()

Now I want to view the updated settings:

s2.conf.get("spark.driver.memory")
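For reference, here is a minimal self-contained sketch of setting a value and reading it back (the "29g" value is just the example from above). One caveat worth knowing: spark.driver.memory only takes effect when the driver JVM is launched, so if a SparkContext already exists, getOrCreate() reuses it and the new value is silently ignored.

from pyspark.sql import SparkSession

# Build (or reuse) a session. Note: spark.driver.memory is only
# honored when a new driver JVM starts; if a context already
# exists, getOrCreate() returns it and this setting has no effect.
s2 = SparkSession.builder.config("spark.driver.memory", "29g").getOrCreate()

# Read a single setting back; the second argument is a default
# returned when the key has not been set.
print(s2.conf.get("spark.driver.memory", "not set"))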

To get all the settings, you can use spark.sparkContext._conf.getAll().
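Here is a short sketch of dumping every setting, assuming a live session named spark. _conf is the underlying SparkConf (private, but widely used for this); the public equivalent is spark.sparkContext.getConf().getAll().

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# getAll() returns a list of (key, value) tuples for every
# explicitly set configuration entry.
for key, value in sorted(spark.sparkContext._conf.getAll()):
    print(key, "=", value)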


Hope this helps
