There is an excellent answer from Orwellophile that includes a pure-bash option (the rawurlencode function), which I've used on my website (a shell-based CGI script that emits a large number of URLs in response to search requests). The only drawback was high CPU load during peak times.
I've found a modified solution that leverages bash's "global replace" feature. With this solution, URL encoding is about 4X faster. The solution identifies the characters that need escaping and uses the global-replace operator (${var//source/replacement}) to perform all substitutions for each character at once. The speed-up comes from replacing the explicit per-character loop with bash's internal loops.
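For reference, here is the operator in isolation: a single expansion rewrites every match, with the iteration happening inside bash rather than in script code:

s='one two three'
echo "${s// /+}"    # one+two+three: the '//' form replaces all occurrences in one pass
echo "${s/ /+}"     # one+two three: the single-slash form replaces only the first match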
Performance: measured on a Core i3-8100 @ 3.60 GHz. Test case: 1,000 encodings of a Stack Overflow URL similar to this question's own: "https://stackoverflow.com/questions/296536/how-to-urlencode-data-for-curl-command".
url_encode()
{
    # URL-encode $1; the result is returned in the global REPLY variable.
    local key="${1}"
    # Strip every safe character (RFC 3986 unreserved, plus space, which is
    # handled separately below); what remains are the characters to escape.
    local unsafe=${key//[-_.~a-zA-Z0-9 ]/}
    local ch ch0 ch1
    # Encode '%' before anything else, otherwise the '%XX' sequences inserted
    # by the loop below would themselves be encoded when '%' came up for processing.
    key=${key//%/%25}
    unsafe=${unsafe//%}
    while [ -n "$unsafe" ]; do
        ch=${unsafe:0:1}
        ch0="\\$ch"                     # backslash-escape so glob chars (*, ?, [) match literally
        printf -v ch1 '%%%02x' "'$ch"   # a leading-quote argument gives printf the ordinal of ch
        key=${key//$ch0/"$ch1"}         # one global replace rewrites every occurrence of ch
        unsafe=${unsafe//$ch0}          # drop all copies of ch so it is processed exactly once
    done
    key=${key// /+}                     # encode spaces as '+'
    REPLY="$key"
    # printf '%s' "$REPLY"              # uncomment to print the result as well
    return 0
}
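A quick sanity check of the output (the expected value in the comment assumes the lowercase hex escapes that the %02x format produces):

url_encode 'name=John Smith&rate=50%'
echo "$REPLY"    # name%3dJohn+Smith%26rate%3d50%25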
As a minor extra, it encodes the space as '+', which gives a slightly more compact URL (this is the application/x-www-form-urlencoded convention used in query strings).
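Note that '+' only means "space" in query strings; for a path segment you would want strict RFC 3986 percent-encoding. A hypothetical one-line variant of the final substitution achieves that:

key=${key// /%20}    # variant: percent-encode spaces instead of using '+'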
Benchmark:
function t {
    local -i i
    for (( i = 1; i <= $1; i++ )); do url_encode "$2"; done
    echo "K=$REPLY"
}
t 1000 "https://stackoverflow.com/questions/296536/how-to-urlencode-data-for-curl-command"
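To reproduce the timing comparison, wrap the driver in bash's time keyword, once as-is and once with the loop body calling Orwellophile's rawurlencode from the answer cited above:

url='https://stackoverflow.com/questions/296536/how-to-urlencode-data-for-curl-command'
time t 1000 "$url"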