Both proposed solutions work, but I found problems with each:
As Carter Shanklin said, with this command we will obtain a CSV file with the query results in the specified path:
insert overwrite local directory '/home/carter/staging' row format delimited fields terminated by ',' select * from hugetable;
The problem with this solution is that the CSV obtained won't have headers, and Hive writes the output to a file that is not named as a CSV (so we have to rename it).
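For illustration, here is a minimal sketch of that rename-and-add-header step, with `printf` standing in for Hive's output (the `/tmp/staging` path, the sample rows, and the `id,name` header are hypothetical; the `000000_0` file name mimics what `INSERT OVERWRITE LOCAL DIRECTORY` produces):

```shell
# Simulate the directory layout Hive leaves behind (hypothetical data).
mkdir -p /tmp/staging
printf '1,alice\n2,bob\n' > /tmp/staging/000000_0

# Build a proper .csv: write a header row, append the headerless part file,
# then drop the original part file.
printf 'id,name\n' > /tmp/staging/hugetable.csv
cat /tmp/staging/000000_0 >> /tmp/staging/hugetable.csv
rm /tmp/staging/000000_0
```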
As user1922900 said, with the following command we will obtain a CSV file with the results of the query in the specified file, with headers:
hive -e 'select * from some_table' | sed 's/[\t]/,/g' > /home/yourfile.csv
With this solution we get a CSV file with the result rows of our query, but with log messages mixed in between those rows as well. As a solution to this problem I tried this, but without results.
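The log noise usually comes from Hive's status messages, which go to stderr while the data rows go to stdout, so combining silent mode (`hive -S`) with a stderr redirect tends to clean it up. A sketch of the idea, with a shell function simulating the hive invocation (the sample rows and log line are hypothetical):

```shell
# Stand-in for `hive -S -e 'select * from some_table'`: data rows on stdout,
# log chatter on stderr (hypothetical sample data).
simulate_hive() {
  echo 'Logging initialized using configuration in hive-log4j.properties' >&2
  printf '1\talice\n2\tbob\n'
}

# Discard stderr so only data rows reach the CSV; convert tabs to commas.
simulate_hive 2>/dev/null | sed 's/[\t]/,/g' > /tmp/yourfile.csv
```

In the real invocation this would read `hive -S -e 'select * from some_table' 2>/dev/null | sed 's/[\t]/,/g' > /home/yourfile.csv`.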
So, to solve all these issues I created a script that executes a list of queries, creates a folder (with a timestamp) where it stores the results, renames the obtained files, removes the unnecessary files, and also adds the respective headers.
#!/bin/bash

# Queries to export; arrays require bash, not plain sh.
QUERIES=("select * from table1" "select * from table2")

# Base directory where Hive writes its per-query output.
BASEDIR="/data/2/DOMAIN_USERS/SANUK/users/$USER"

# Results folder named with a timestamp so repeated runs don't collide.
timestamp=$(date +%Y%m%d%H%M%S)
directoryname="ScriptResults$timestamp"
mkdir "$directoryname"

counter=1
for query in "${QUERIES[@]}"
do
    tablename="query$counter"

    # Dump the query results as comma-separated files (no header) into a scratch dir.
    hive -S -e "INSERT OVERWRITE LOCAL DIRECTORY '$BASEDIR/$tablename' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' $query ;"

    # Grab just the header row, converting tabs to commas.
    hive -S -e "set hive.cli.print.header=true; $query limit 1" | head -1 | sed 's/[\t]/,/g' > "$BASEDIR/$tablename/header.csv"

    # Append the data below the header and move the finished CSV into the results folder.
    cat "$BASEDIR/$tablename/000000_0" >> "$BASEDIR/$tablename/header.csv"
    mv "$BASEDIR/$tablename/header.csv" "$directoryname/$tablename.csv"

    # Clean up the scratch dir and move on to the next query.
    rm -rf "$BASEDIR/$tablename"
    counter=$((counter+1))
done