Is there some sort of character limit imposed in bash (or other shells) on how long an input can be? If so, what is that limit?
That is: is it possible to write a command in bash that is too long for the command line to execute? If there is no required limit, is there a suggested one?
Tags: bash, shell, unix, command-line-arguments
There is an input buffer limit of something like 1024 characters: a plain read will simply hang mid-paste or mid-input once the buffer fills. To work around this, use the -e option.
http://linuxcommand.org/lc3_man_pages/readh.html
-e     use Readline to obtain the line in an interactive shell
Change your read to read -e and the annoying input hang goes away.
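A minimal sketch of the fix (the prompt text and variable name here are only for illustration):

#!/bin/bash
# With -e, bash hands input off to Readline, which copes with
# long pasted lines instead of hanging mid-paste.
read -e -p "Paste a long line: " line
echo "Read ${#line} characters"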
Ok, Denizens. So I have accepted the command-line length limits as gospel for quite some time. So, what to do with one's assumptions? Naturally: check them.
I have a Fedora 22 machine at my disposal (meaning: Linux with bash4). I have created a directory with 500,000 inodes (files) in it, each with a name 18 characters long. With a one-character separator per name, the glob-expanded command line comes to 9,500,000 characters. Created thus:
# Create 500,000 empty files named abigfilename000001 .. abigfilename500000
seq 1 500000 | while read -r digit; do
    touch "$(printf "abigfilename%06d" "$digit")"
done
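A quick sanity check that all 500,000 files landed (assuming the directory was empty beforehand; ls is invoked with no arguments here, so no long argument list is involved):

$ ls | wc -l
500000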
And we note:
$ getconf ARG_MAX
2097152
Note, however, that even though the expansion is roughly 9.5 MB (several times the 2 MiB ARG_MAX above), I can do this:
$ echo * > /dev/null
But this fails:
$ /bin/echo * > /dev/null
bash: /bin/echo: Argument list too long
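The difference is which echo runs; bash's type builtin makes it explicit:

$ type echo
echo is a shell builtin
$ type /bin/echo
/bin/echo is /bin/echo

The first form never leaves the shell process, while the second requires an exec, which is exactly where the limit bites.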
I can run a for loop:
$ for f in *; do :; done
which works because for, like echo, is built into the shell itself.
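That observation points at one escape hatch: have the loop hand filenames to the external command one at a time. A minimal sketch against the 500,000-file directory above:

$ for f in *; do /bin/echo "$f"; done > /dev/null

Each exec now carries a single filename, comfortably below ARG_MAX, at the price of 500,000 separate exec calls.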
Careful reading of the documentation for ARG_MAX states: "Maximum length of argument to the exec functions." This means: without calling exec, there is no ARG_MAX limitation. So it would explain why shell builtins are not restricted by ARG_MAX.
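One way to confirm that it is the exec itself that fails (assuming strace is available; the exact output formatting varies by version):

$ strace -f -e trace=execve bash -c '/bin/echo * > /dev/null' 2>&1 | grep E2BIG

The failing call shows up as execve("/bin/echo", ...) = -1 E2BIG (Argument list too long), i.e. the kernel, not the shell, is enforcing the limit.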
And indeed, I can ls my directory if my argument list is 109,948 files long, or about 2,089,000 characters (give or take). Once I add one more file with an 18-character name, though, I get an Argument list too long error. So ARG_MAX is working as advertised: the exec is failing with more than ARG_MAX characters on the argument list, including, it should be noted, the environment data.
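The practical upshot, when an external command really must see every file: generate the list with a builtin (so the generating side never execs) and let xargs split it into exec-sized batches. A minimal sketch, using NUL separators so arbitrary filenames survive:

$ printf '%s\0' * | xargs -0 /bin/echo > /dev/null

xargs sizes each /bin/echo invocation to fit under the system's ARG_MAX, so no single exec ever sees too long an argument list.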
Source: Stackoverflow.com