Reader's Digest: (Spark 2.x)
For example, if you have 3 directories holding CSV files:
dir1, dir2, dir3
You then define paths as a comma-delimited string of paths, as follows:
paths = "dir1/*,dir2/*,dir3/*"
Then use the following function and pass it this paths variable:
def get_df_from_csv_paths(paths):
    # Spark 2.x: load() accepts a list of paths, so the comma-delimited
    # string is split into a list and all paths are read in one pass.
    # custom_schema is assumed to be defined beforehand (see the sketch below).
    df = spark.read.format("csv") \
        .option("header", "false") \
        .schema(custom_schema) \
        .option("delimiter", "\t") \
        .option("mode", "DROPMALFORMED") \
        .load(paths.split(','))
    return df
By then running:
df = get_df_from_csv_paths(paths)
You will obtain in df a single Spark DataFrame containing the data from all the CSVs found in these 3 directories.
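Note that the function above expects custom_schema to be defined beforehand. A minimal sketch of such a schema (the field names and types here are purely hypothetical; replace them with the columns of your CSVs):

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Hypothetical columns for illustration only; adjust to your actual CSV layout.
custom_schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("value", StringType(), True),
])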
===========================================================================
Full Version:
In case you want to ingest multiple CSVs from multiple directories, you simply need to pass a list of paths (here, built by splitting a comma-delimited string) and use wildcards.
For example, if your data_path looks like this:
's3://bucket_name/subbucket_name/2016-09-*/184/*,
s3://bucket_name/subbucket_name/2016-10-*/184/*,
s3://bucket_name/subbucket_name/2016-11-*/184/*,
s3://bucket_name/subbucket_name/2016-12-*/184/*, ... '
you can use the above function to ingest all the CSVs in all these directories and subdirectories at once.
This would ingest all directories in s3://bucket_name/subbucket_name/ that match the wildcard patterns specified. For example, the first pattern would look in
bucket_name/subbucket_name/
for all directories with names starting with
2016-09-
and, for each of those, take only the subdirectory named
184
and, within that subdirectory, look for all CSV files.
This would be done for each of the patterns in the comma-delimited list; a sketch of building such a data_path string follows below.
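As a sketch of how such a data_path string could be assembled programmatically (bucket_name, subbucket_name and the 184 subdirectory are just the placeholder names from the example above):

# Build the comma-delimited string of wildcard paths for the months in the example.
months = ["2016-09", "2016-10", "2016-11", "2016-12"]
data_path = ",".join(
    "s3://bucket_name/subbucket_name/{}-*/184/*".format(m) for m in months
)

df = get_df_from_csv_paths(data_path)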
This works far better than unioning the DataFrames one by one.
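For contrast, a union-based sketch (assuming the same spark session and custom_schema) would read each path separately and fold the resulting DataFrames together, which is more verbose and harder to maintain:

from functools import reduce

# Union-based alternative: one read per path, then union the DataFrames pairwise.
dfs = [
    spark.read.format("csv")
        .option("header", "false")
        .schema(custom_schema)
        .option("delimiter", "\t")
        .load(p)
    for p in data_path.split(",")
]
df = reduce(lambda a, b: a.union(b), dfs)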