[bash] Execute command on all files in a directory

Could somebody please provide the code to do the following: assume there is a directory of files, all of which need to be run through a program. The program outputs the results to standard out. I need a script that will go into a directory, execute the command on each file, and concatenate the output into one big output file.

For instance, to run the command on 1 file:

$ cmd [option] [filename] > results.out



I'm doing this on my Raspberry Pi from the command line by running:

for i in *;do omxplayer "$i";done

One quick and dirty way which sometimes gets the job done is:

find directory/ | xargs command

For example, to find the number of lines in all files in the current directory, you can do:

find . | xargs wc -l
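
Note that this breaks on filenames containing spaces or quotes; if your find and xargs support NUL delimiters (the GNU and BSD implementations do), a safer sketch of the same idea is:

find . -type f -print0 | xargs -0 wc -l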

The accepted/high-voted answers are great, but they are lacking a few nitty-gritty details. This post covers how to better handle the cases where the shell path-name expansion (glob) fails, where filenames contain embedded newlines or leading dashes, and how to move the command output redirection out of the for-loop when writing the results to a file.

When running the shell glob expansion using *, there is a possibility for the expansion to fail if there are no files present in the directory, in which case the un-expanded glob string is passed to the command, with potentially undesirable results. The bash shell provides an extended shell option for this, nullglob, which makes a failed glob expand to nothing. So inside the directory containing your files, the loop basically becomes:

 shopt -s nullglob

 for file in ./*; do
     cmdToRun [option] -- "$file"
 done

With nullglob set, the loop body is simply never entered when the expression ./* doesn't match any files (e.g. if the directory is empty), so the command can never run on the literal string ./*.
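
A quick way to see the difference (assuming no file matches no-such-*):

$ shopt -u nullglob; echo ./no-such-*    # unmatched glob is passed through literally
./no-such-*
$ shopt -s nullglob; echo ./no-such-*    # unmatched glob expands to nothing; echo prints a blank line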

or in a POSIX compliant way (nullglob is bash specific)

 for file in ./*; do
     [ -f "$file" ] || continue
     cmdToRun [option] -- "$file"
 done

Here, when the glob fails to expand, the loop runs once with the literal un-expanded string ./* in $file. The condition [ -f "$file" ] then checks whether that string names an existing regular file in the directory, which it doesn't, so continue skips straight to the next iteration and the command is never run on a bogus name.

Also note the usage of -- just before passing the file name argument. This is needed because, as noted previously, filenames can begin with a dash, and a command will parse such a name as one of its own options, even when it is properly quoted, and behave as if that flag had been supplied.

The -- signals the end of command-line options, meaning the command shouldn't parse any strings beyond this point as command flags, but only as filenames.
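
A minimal demonstration with GNU cat (the filename -n here is just for illustration):

$ touch ./-n
$ cat -n          # parsed as cat's number-lines flag, so cat waits on stdin
$ cat -- -n       # parsed as the filename -n; prints its (empty) contents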


Double-quoting the filenames properly handles the cases when the names contain glob characters or white-space. But *nix filenames can also contain newlines. So we delimit filenames with the only byte that cannot appear in a path at all: the null byte (\0). Since bash internally uses C-style strings, in which the null byte marks the end of a string, it is the right candidate for a delimiter.

Using the shell's printf builtin to emit each filename followed by this NUL byte, and the -d option of the read command to split on it, we can do the below:

( shopt -s nullglob; printf '%s\0' ./* ) | while IFS= read -rd '' file; do
    cmdToRun [option] -- "$file"
done

The shopt and the printf are wrapped in (..), which means they run in a sub-shell (child shell), to avoid the nullglob option remaining set in the parent shell once the command exits. The -d '' option of the read command is not POSIX compliant, so a bash shell is needed here. Using the find command, this can be done as:

while IFS= read -r -d '' file; do
    cmdToRun [option] -- "$file"
done < <(find . -maxdepth 1 -type f -print0)

For find implementations that don't support -print0 (the GNU and FreeBSD implementations do), it can be emulated using printf:

find . -maxdepth 1 -type f -exec printf '%s\0' {} \; | xargs -0 cmdToRun [option] --

Another important fix is to move the redirection out of the for-loop to reduce file I/O. With the redirection inside the loop, the shell has to make system calls twice per iteration of the for-loop, once to open and once to close the file descriptor associated with the file. This becomes a performance bottleneck over large numbers of iterations, so the recommended approach is to move the redirection outside the loop.

Extending the above code with these fixes, you could do:

( shopt -s nullglob; printf '%s\0' ./* ) | while IFS= read -rd '' file; do
    cmdToRun [option] -- "$file"
done > results.out

which opens the target file only once, writes each iteration's command output to it as the loop runs, and closes it when the loop ends. The equivalent find version of the same would be:

while IFS= read -r -d '' file; do
    cmdToRun [option] -- "$file"
done < <(find . -maxdepth 1 -type f -print0) > results.out

You can use xargs:

ls | xargs -L 1 -d '\n' your-desired-command
  • -L 1 passes one item at a time.

  • -d '\n' splits the output of ls on newlines.
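
Since parsing the output of ls misbehaves on unusual filenames, here is a NUL-delimited sketch of the same idea (GNU xargs assumed; -n 1 passes one item at a time, like -L 1):

printf '%s\0' * | xargs -0 -n 1 your-desired-command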


Maxdepth

I found it works nicely with Jim Lewis's answer; just add a bit like this:

$ export DIR=/path/dir && cd $DIR && chmod -R +x *
$ find . -maxdepth 1 -type f -name '*.sh' -exec {} \; > results.out

Sort Order

If you want to execute in sort order, modify it like this:

$ export DIR=/path/dir && cd $DIR && chmod -R +x *
$ find . -maxdepth 2 -type f -name '*.sh' | sort | bash > results.out

Just for an example, this will execute with following order:

bash: 1: ./assets/main.sh
bash: 2: ./builder/clean.sh
bash: 3: ./builder/concept/compose.sh
bash: 4: ./builder/concept/market.sh
bash: 5: ./builder/concept/services.sh
bash: 6: ./builder/curl.sh
bash: 7: ./builder/identity.sh
bash: 8: ./concept/compose.sh
bash: 9: ./concept/market.sh
bash: 10: ./concept/services.sh
bash: 11: ./product/compose.sh
bash: 12: ./product/market.sh
bash: 13: ./product/services.sh
bash: 14: ./xferlog.sh

Unlimited Depth

If you want to execute in unlimited depth by certain condition, you can use this:

export DIR=/path/dir && cd $DIR && chmod -R +x *
find . -type f -name '*.sh' | sort | bash > results.out

then put this at the top of each file in the child directories:

#!/bin/bash
[[ "$(dirname `pwd`)" == $DIR ]] && echo "Executing `realpath $0`.." || return

and somewhere in the body of parent file:

if <a condition is matched>
then
    #execute child files
    export DIR=`pwd`
fi

Based on @Jim Lewis's approach:

Here is a quick solution using find and also sorting files by their modification date:

$ find  directory/ -maxdepth 1 -type f -print0 | \
  xargs -r0 stat -c "%y %n" | \
  sort | cut -d' ' -f4- | \
  xargs -d "\n" -I{} cmd -op1 {} 

For sorting see:

http://www.commandlinefu.com/commands/view/5720/find-files-and-list-them-sorted-by-modification-time
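
With GNU find, the -printf action can replace the stat call entirely; a sketch of the same pipeline (%T@ prints the modification time as seconds since the epoch, and filenames are assumed not to contain newlines):

$ find directory/ -maxdepth 1 -type f -printf '%T@ %p\n' | \
  sort -n | cut -d' ' -f2- | \
  xargs -d "\n" -I{} cmd -op1 {}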


I needed to copy all .md files from one directory into another, so here is what I did.

for i in **/*.md;do mkdir -p ../docs/"$i" && rm -r ../docs/"$i" && cp "$i" "../docs/$i" && echo "$i -> ../docs/$i"; done

Which is pretty hard to read, so let's break it down.

First, cd into the directory with your files.

for i in **/*.md; for each file matching your pattern (note that the ** glob requires bash's globstar option, enabled with shopt -s globstar)

mkdir -p ../docs/"$i" make that directory in a docs folder outside of the folder containing your files, which creates an extra folder with the same name as the file.

rm -r ../docs/"$i" remove the extra folder that is created as a result of mkdir -p

cp "$i" "../docs/$i" Copy the actual file

echo "$i -> ../docs/$i" Echo what you did

; done Live happily ever after
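
A slightly cleaner sketch of the same idea, creating only the parent directory so the mkdir/rm dance isn't needed (assumes bash with globstar enabled):

shopt -s globstar
for i in **/*.md; do
    mkdir -p "../docs/$(dirname "$i")"    # create just the destination directory
    cp "$i" "../docs/$i" && echo "$i -> ../docs/$i"
done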


I think the simple solution is:

for f in /dir/*; do sh "$f"; done > ./result.txt

(Note that sh /dir/* on its own would not do this: it would run only the first matching file as a script, passing the remaining names to it as arguments.)

How about this:

find /some/directory -maxdepth 1 -type f -exec cmd option {} \; > results.out
  • -maxdepth 1 argument prevents find from recursively descending into any subdirectories. (If you want such nested directories to get processed, you can omit this.)
  • -type f specifies that only plain files will be processed.
  • -exec cmd option {} tells it to run cmd with the specified option for each file found, with the filename substituted for {}
  • \; denotes the end of the command.
  • Finally, the output from all the individual cmd executions is redirected to results.out

However, if you care about the order in which the files are processed, you might be better off writing a loop: find yields files in whatever order the directory lists them, which is effectively unsorted and may not be what you want.
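
For example, since the shell sorts glob expansions by name, a plain loop already processes files in a predictable order; a minimal sketch with the same cmd placeholder:

for f in /some/directory/*; do
    [ -f "$f" ] && cmd option "$f"    # skip subdirectories; run cmd on each plain file
done > results.out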