[bash] Quick-and-dirty way to ensure only one instance of a shell script is running at a time

What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?

Tags: bash, shell, process, lockfile

The answer is


If flock's limitations, which have already been described elsewhere on this thread, aren't an issue for you, then this should work:

#!/bin/bash

{
    # exit if we are unable to obtain a lock; this would happen if 
    # the script is already running elsewhere
    # note: -x (exclusive) is the default
    flock -n 100 || exit

    # put commands to run here
    sleep 100
} 100>/tmp/myjob.lock 

Late to the party: using the idea from @Majal, this is my script to start only one instance of the emacsclient GUI. With it, I can set a shortcut key to open or jump back to the same emacsclient. I have another script to call emacsclient in terminals when I need it. emacsclient is used here just to show a working example; one can choose something else. This approach is quick and good enough for my tiny scripts. Tell me where it is dirty :)

#!/bin/bash

# if [ $(pgrep -c $(basename $0)) -lt 2 ]; then # this works but requires script name to be unique
if [ $(pidof -x "$0"|wc -w ) -lt 3 ]; then
    echo -e "Starting $(basename $0)"
    emacsclient --alternate-editor="" -c "$@"
else
    echo -e "$0 is running already"
fi

This one-line answer comes from a related Ask Ubuntu Q&A:

[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :
#     This is useful boilerplate code for shell scripts.  Put it at the top  of
#     the  shell script you want to lock and it'll automatically lock itself on
#     the first run.  If the env var $FLOCKER is not set to  the  shell  script
#     that  is being run, then execute flock and grab an exclusive non-blocking
#     lock (using the script itself as the lock file) before re-execing  itself
#     with  the right arguments.  It also sets the FLOCKER env var to the right
#     value so it doesn't run again.

I have a simple solution based on the file name

#!/bin/bash

MY_FILENAME=`basename "$BASH_SOURCE"`

MY_PROCESS_COUNT=$(ps a -o pid,cmd | grep "$MY_FILENAME" | grep -v grep | grep -v $$ | wc -l)

if [ "$MY_PROCESS_COUNT" -ne 0 ]; then
  echo "found another process"
  exit 0
fi

# Follows the code to get the job done.

Another option is to use shell's noclobber option by running set -C. Then > will fail if the file already exists.

In brief:

set -C
lockfile="/tmp/locktest.lock"
if echo "$$" > "$lockfile"; then
    echo "Successfully acquired lock"
    # do work
    rm "$lockfile"    # XXX or via trap - see below
else
    echo "Cannot acquire lock - already locked by $(cat "$lockfile")"
fi

This causes the shell to call:

open(pathname, O_CREAT|O_EXCL)

which atomically creates the file or fails if the file already exists.


According to a comment on BashFAQ 045, this may fail in ksh88, but it works in all my shells:

$ strace -e trace=creat,open -f /bin/bash /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE, 0666) = 3

$ strace -e trace=creat,open -f /bin/zsh /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_NOCTTY|O_LARGEFILE, 0666) = 3

$ strace -e trace=creat,open -f /bin/pdksh /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC|O_LARGEFILE, 0666) = 3

$ strace -e trace=creat,open -f /bin/dash /home/mikel/bin/testopen 2>&1 | grep -F testopen.lock
open("/tmp/testopen.lock", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE, 0666) = 3

Interesting that pdksh adds the O_TRUNC flag, but obviously it's redundant:
either you're creating an empty file, or you're not doing anything.


How you do the rm depends on how you want unclean exits to be handled.

Delete on clean exit

New runs fail until the issue that caused the unclean exit is resolved and the lockfile is manually removed.

# acquire lock
# do work (code here may call exit, etc.)
rm "$lockfile"

Delete on any exit

New runs succeed provided the script is not already running.

trap 'rm "$lockfile"' EXIT

Using a lock held by the process itself is much stronger and also takes care of ungraceful exits. lock_file is kept open as long as the process is running. It will be closed (by the shell) once the process exits (even if it gets killed). I found this to be very efficient:

lock_file=/tmp/`basename $0`.lock

if fuser $lock_file > /dev/null 2>&1; then
    echo "WARNING: Other instance of $(basename $0) running."
    exit 1
fi
exec 3> $lock_file 

PID and lockfiles are definitely the most reliable. When you attempt to run the program, it can check for the lockfile and, if it exists, use ps to see if the process is still running. If it's not, the script can start, updating the PID in the lockfile to its own.
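
A minimal sketch of that approach (the lock path is illustrative, and note that the check-then-write is not atomic, as other answers point out):

#!/bin/bash
LOCKFILE=/var/run/myscript.pid   # illustrative location

# if a lockfile exists and the recorded PID is still alive, bail out
if [ -f "$LOCKFILE" ] && ps -p "$(cat "$LOCKFILE")" > /dev/null 2>&1; then
    echo "Already running as PID $(cat "$LOCKFILE")" >&2
    exit 1
fi

# no lock, or the old process is gone: record our own PID and carry on
echo $$ > "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT   # optional tidy-up on exit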


An example with flock(1) but without a subshell. The flock()ed file /tmp/foo is never removed, but that doesn't matter, since it just gets flock()ed and un-flock()ed.

#!/bin/bash

exec 9<> /tmp/foo
flock -n 9
RET=$?
if [[ $RET -ne 0 ]] ; then
    echo "lock failed, exiting"
    exit
fi

#Now we are inside the "critical section"
echo "inside lock"
sleep 5
exec 9>&- #close fd 9, and release lock

#The part below is outside the critical section (the lock)
echo "lock released"
sleep 5

When targeting a Debian machine I find the lockfile-progs package to be a good solution. procmail also comes with a lockfile tool. However sometimes I am stuck with neither of these.

Here's my solution which uses mkdir for atomic-ness and a PID file to detect stale locks. This code is currently in production on a Cygwin setup and works well.

To use it, simply call exclusive_lock_require when you need to get exclusive access to something. An optional lock name parameter lets you share locks between different scripts. There are also two lower-level functions (exclusive_lock_try and exclusive_lock_retry) should you need something more complex. A usage sketch follows the function definitions.

function exclusive_lock_try() # [lockname]
{

    local LOCK_NAME="${1:-`basename $0`}"

    LOCK_DIR="/tmp/.${LOCK_NAME}.lock"
    local LOCK_PID_FILE="${LOCK_DIR}/${LOCK_NAME}.pid"

    if [ -e "$LOCK_DIR" ]
    then
        local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
        if [ ! -z "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2> /dev/null
        then
            # locked by non-dead process
            echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
            return 1
        else
            # orphaned lock, take it over
            ( echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null && local LOCK_PID="$$"
        fi
    fi
    if [ "`trap -p EXIT`" != "" ]
    then
        # already have an EXIT trap
        echo "Cannot get lock, already have an EXIT trap"
        return 1
    fi
    if [ "$LOCK_PID" != "$$" ] &&
        ! ( umask 077 && mkdir "$LOCK_DIR" && umask 177 && echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null
    then
        local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
        # unable to acquire lock, new process got in first
        echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
        return 1
    fi
    trap "/bin/rm -rf \"$LOCK_DIR\"; exit;" EXIT

    return 0 # got lock

}

function exclusive_lock_retry() # [lockname] [retries] [delay]
{

    local LOCK_NAME="$1"
    local MAX_TRIES="${2:-5}"
    local DELAY="${3:-2}"

    local TRIES=0
    local LOCK_RETVAL

    while [ "$TRIES" -lt "$MAX_TRIES" ]
    do

        if [ "$TRIES" -gt 0 ]
        then
            sleep "$DELAY"
        fi
        local TRIES=$(( $TRIES + 1 ))

        if [ "$TRIES" -lt "$MAX_TRIES" ]
        then
            exclusive_lock_try "$LOCK_NAME" > /dev/null
        else
            exclusive_lock_try "$LOCK_NAME"
        fi
        LOCK_RETVAL="${PIPESTATUS[0]}"

        if [ "$LOCK_RETVAL" -eq 0 ]
        then
            return 0
        fi

    done

    return "$LOCK_RETVAL"

}

function exclusive_lock_require() # [lockname] [retries] [delay]
{
    if ! exclusive_lock_retry "$@"
    then
        exit 1
    fi
}
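
A minimal usage sketch, assuming the functions above have been sourced into the script (the file path, lock name, retry count and delay are placeholders):

#!/bin/bash
. /path/to/exclusive_lock_functions.sh   # hypothetical file holding the functions above

# exit unless the "nightly-report" lock can be acquired within 10 tries, 3 seconds apart
exclusive_lock_require "nightly-report" 10 3

# ... work that must not run concurrently ...
# the EXIT trap installed by exclusive_lock_try removes the lock directory automatically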

For shell scripts, I tend to go with the mkdir over flock as it makes the locks more portable.

Either way, using set -e isn't enough. That only exits the script if any command fails. Your locks will still be left behind.

For proper lock cleanup, you really should set your traps to something like this pseudo code (lifted, simplified and untested, but from actively used scripts):

#=======================================================================
# Predefined Global Variables
#=======================================================================

TMP_DIR=/tmp/myapp
[[ ! -d $TMP_DIR ]] \
    && mkdir -p $TMP_DIR \
    && chmod 700 $TMP_DIR

LOCK_DIR=$TMP_DIR/lock

#=======================================================================
# Functions
#=======================================================================

function mklock {
    __lockdir="$LOCK_DIR/$(date +%s.%N).$$" # Private Global. Use Epoch.Nano.PID

    # If it can create $LOCK_DIR then no other instance is running
    if mkdir $LOCK_DIR
    then
        mkdir $__lockdir  # create this instance's specific lock in queue
        LOCK_EXISTS=true  # Global
    else
        echo "FATAL: Lock already exists. Another copy is running or manually lock clean up required."
        exit 1001  # Or work out some sleep_while_execution_lock elsewhere
    fi
}

function rmlock {
    [[ ! -d $__lockdir ]] \
        && echo "WARNING: Lock is missing. $__lockdir does not exist" \
        || rmdir $__lockdir
}

#-----------------------------------------------------------------------
# Private Signal Traps Functions {{{2
#
# DANGER: SIGKILL cannot be trapped. So, try not to `kill -9 PID` or 
#         there will be *NO CLEAN UP*. You'll have to manually remove 
#         any locks in place.
#-----------------------------------------------------------------------
function __sig_exit {

    # Place your clean up logic here 

    # Remove the LOCK
    [[ -n $LOCK_EXISTS ]] && rmlock
}

function __sig_int {
    echo "WARNING: SIGINT caught"    
    exit 1002
}

function __sig_quit {
    echo "SIGQUIT caught"
    exit 1003
}

function __sig_term {
    echo "WARNING: SIGTERM caught"    
    exit 1015
}

#=======================================================================
# Main
#=======================================================================

# Set TRAPs
trap __sig_exit EXIT    # SIGEXIT
trap __sig_int INT      # SIGINT
trap __sig_quit QUIT    # SIGQUIT
trap __sig_term TERM    # SIGTERM

mklock

# CODE

exit # No need for cleanup code here being in the __sig_exit trap function

Here's what will happen. All traps will produce an exit so the function __sig_exit will always happen (barring a SIGKILL) which cleans up your locks.

Note: my exit values are not low values. Why? Various batch processing systems make or have expectations of the numbers 0 through 31. Setting them to something else, I can have my scripts and batch streams react accordingly to the previous batch job or script.


You need an atomic operation, like flock, else this will eventually fail.

But what to do if flock is not available? Well, there is mkdir. That's an atomic operation too. Only one process will succeed with mkdir; all others will fail.

So the code is:

if mkdir /var/lock/.myscript.exclusivelock
then
  # do stuff
  :
  rmdir /var/lock/.myscript.exclusivelock
fi

You need to take care of stale locks, else after a crash your script will never run again.


There's a wrapper around the flock(2) system call called, unimaginatively, flock(1). This makes it relatively easy to reliably obtain exclusive locks without worrying about cleanup etc. There are examples on the man page as to how to use it in a shell script.
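
For instance, flock(1) can wrap a whole command; a rough sketch (the lock path and script name are placeholders):

# run the script only if the lock can be taken; -n fails immediately instead of waiting
flock -n /tmp/myscript.lock /path/to/myscript.sh || echo "already running" >&2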


You can use GNU Parallel for this as it works as a mutex when called as sem. So, in concrete terms, you can use:

sem --id SCRIPTSINGLETON yourScript

If you want a timeout too, use:

sem --id SCRIPTSINGLETON --semaphoretimeout -10 yourScript

A timeout of <0 means exit without running the script if the semaphore is not released within the timeout; a timeout of >0 means run the script anyway.

Note that you should give it a name (with --id) else it defaults to the controlling terminal.

GNU Parallel is a very simple install on most Linux/OSX/Unix platforms - it is just a Perl script.


Here is a more elegant, fail-safe, quick & dirty method, combining the answers provided above.

Usage

  1. include sh_lock_functions.sh
  2. init using sh_lock_init
  3. lock using sh_acquire_lock
  4. check lock using sh_check_lock
  5. unlock using sh_remove_lock

Script File

sh_lock_functions.sh

#!/bin/bash

function sh_lock_init {
    sh_lock_scriptName=$(basename $0)
    sh_lock_dir="/tmp/${sh_lock_scriptName}.lock" #lock directory
    sh_lock_file="${sh_lock_dir}/lockPid.txt" #lock file
}

function sh_acquire_lock {
    if mkdir $sh_lock_dir 2>/dev/null; then #check for lock
        echo "$sh_lock_scriptName lock acquired successfully.">&2
        touch $sh_lock_file
        echo $$ > $sh_lock_file # set current pid in lockFile
        return 0
    else
        touch $sh_lock_file
        read sh_lock_lastPID < $sh_lock_file
        if [ ! -z "$sh_lock_lastPID" -a -d /proc/$sh_lock_lastPID ]; then # if lastPID is not null and a process with that pid exists
            echo "$sh_lock_scriptName is already running.">&2
            return 1
        else
            echo "$sh_lock_scriptName stopped during execution, reacquiring lock.">&2
            echo $$ > $sh_lock_file # set current pid in lockFile
            return 2
        fi
    fi
    return 0
}

function sh_check_lock {
    [[ ! -f $sh_lock_file ]] && echo "$sh_lock_scriptName lock file removed.">&2 && return 1
    read sh_lock_lastPID < $sh_lock_file
    [[ $sh_lock_lastPID -ne $$ ]] && echo "$sh_lock_scriptName lock file pid has changed.">&2  && return 2
    echo "$sh_lock_scriptName lock still in place.">&2
    return 0
}

function sh_remove_lock {
    rm -r $sh_lock_dir
}

Usage example

sh_lock_usage_example.sh

#!/bin/bash
. /path/to/sh_lock_functions.sh # load sh lock functions

sh_lock_init || exit $?

sh_acquire_lock
lockStatus=$?
[[ $lockStatus -eq 1 ]] && exit $lockStatus
[[ $lockStatus -eq 2 ]] && echo "lock is set, do some resume from crash procedures";

#monitoring example
cnt=0
while sh_check_lock # loop while lock is in place
do
    echo "$sh_scriptName running (pid $$)"
    sleep 1
    let cnt++
    [[ $cnt -gt 5 ]] && break
done

#remove lock when process finished
sh_remove_lock || exit $?

exit 0

Features

  • Uses a combination of file, directory and process id locking to make sure that the process is not already running
  • You can detect if the script stopped before lock removal (e.g. process kill, shutdown, error, etc.)
  • You can check the lock file, and use it to trigger a process shutdown when the lock is missing
  • Verbose, outputs error messages for easier debugging


All approaches that test the existence of "lock files" are flawed.

Why? Because there is no way to check whether a file exists and create it in a single atomic action. Because of this, there is a race condition that WILL make your attempts at mutual exclusion break.

Instead, you need to use mkdir. mkdir creates a directory if it doesn't exist yet, and if it does, it sets an exit code. More importantly, it does all this in a single atomic action making it perfect for this scenario.

if ! mkdir /tmp/myscript.lock 2>/dev/null; then
    echo "Myscript is already running." >&2
    exit 1
fi

For all details, see the excellent BashFAQ: http://mywiki.wooledge.org/BashFAQ/045

If you want to take care of stale locks, fuser(1) comes in handy. The only downside here is that the operation takes about a second, so it isn't instant.

Here's a function I wrote once that solves the problem using fuser:

#       mutex file
#
# Open a mutual exclusion lock on the file, unless another process already owns one.
#
# If the file is already locked by another process, the operation fails.
# This function defines a lock on a file as having a file descriptor open to the file.
# This function uses FD 9 to open a lock on the file.  To release the lock, close FD 9:
# exec 9>&-
#
mutex() {
    local file=$1 pid pids 

    exec 9>>"$file"
    { pids=$(fuser -f "$file"); } 2>&- 9>&- 
    for pid in $pids; do
        [[ $pid = $$ ]] && continue

        exec 9>&- 
        return 1 # Locked by a pid.
    done 
}

You can use it in a script like so:

mutex /var/run/myscript.lock || { echo "Already running." >&2; exit 1; }

If you don't care about portability (these solutions should work on pretty much any UNIX box), Linux' fuser(1) offers some additional options and there is also flock(1).


Quick and dirty?

#!/bin/sh

if [ -f sometempfile ]; then
  echo "Already running... will now terminate."
  exit
else
  touch sometempfile
fi

..do what you want here..

rm sometempfile

Use flock(1) to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script.

#!/bin/bash

(
  # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
  flock -x -w 10 200 || exit 1

  # Do stuff

) 200>/var/lock/.myscript.exclusivelock

This ensures that code between ( and ) is run only by one process at a time and that the process doesn’t wait too long for a lock.

Caveat: this particular command is a part of util-linux. If you run an operating system other than Linux, it may or may not be available.


I find that bmdhack's solution is the most practical, at least for my use case. The flock and lockfile approaches rely on removing the lockfile with rm when the script terminates, which can't always be guaranteed (e.g., kill -9).

I would change one minor thing about bmdhack's solution: It makes a point of removing the lock file, without stating that this is unnecessary for the safe working of this semaphore. His use of kill -0 ensures that an old lockfile for a dead process will simply be ignored/over-written.

My simplified solution is therefore to simply add the following to the top of your singleton:

## Test the lock
LOCKFILE=/tmp/singleton.lock 
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
    echo "Script already running. bye!"
    exit 
fi

## Set the lock 
echo $$ > ${LOCKFILE}

Of course, this script still has the flaw that processes that are likely to start at the same time have a race hazard, as the lock test and set operations are not a single atomic action. But the proposed solution for this by lhunath to use mkdir has the flaw that a killed script may leave behind the directory, thus preventing other instances from running.


Answered a million times already, but another way, without the need for external dependencies:

LOCK_FILE="/var/lock/$(basename "$0").pid"
trap "rm -f ${LOCK_FILE}; exit" INT TERM EXIT
if [[ -f $LOCK_FILE && -d /proc/`cat $LOCK_FILE` ]]; then
   # Process already exists
   exit 1
fi
echo $$ > $LOCK_FILE

Each run writes the current PID ($$) into the lockfile, and on startup the script checks whether a process with the recorded PID is running.


Really quick and really dirty? This one-liner on the top of your script will work:

[[ $(pgrep -c "`basename \"$0\"`") -gt 1 ]] && exit

Of course, just make sure that your script name is unique. :)


I use a simple approach that handles stale lock files.

Note that some of the above solutions that store the pid ignore the fact that the pid can wrap around. So just checking whether there is a valid process with the stored pid is not enough, especially for long-running scripts.

I use noclobber to make sure only one script can open and write to the lock file at one time. Further, I store enough information to uniquely identify a process in the lockfile. I define the set of data to uniquely identify a process to be pid,ppid,lstart.

When a new script starts up, if it fails to create the lock file, it then verifies that the process that created the lock file is still around. If not, we assume the original process died an ungraceful death, and left a stale lock file. The new script then takes ownership of the lock file, and all is well with the world again.

Should work with multiple shells across multiple platforms. Fast, portable and simple.

#!/usr/bin/env sh
# Author: rouble

LOCKFILE=/var/tmp/lockfile #customize this line

trap release INT TERM EXIT

# Creates a lockfile. Sets global variable $ACQUIRED to true on success.
# 
# Returns 0 if it is successfully able to create lockfile.
acquire () {
    set -C #Shell noclobber option. If file exists, > will fail.
    UUID=`ps -eo pid,ppid,lstart $$ | tail -1`
    if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
        ACQUIRED="TRUE"
        return 0
    else
        if [ -e $LOCKFILE ]; then 
            # We may be dealing with a stale lock file.
            # Bring out the magnifying glass. 
            CURRENT_UUID_FROM_LOCKFILE=`cat $LOCKFILE`
            CURRENT_PID_FROM_LOCKFILE=`cat $LOCKFILE | cut -f 1 -d " "`
            CURRENT_UUID_FROM_PS=`ps -eo pid,ppid,lstart $CURRENT_PID_FROM_LOCKFILE | tail -1`
            if [ "$CURRENT_UUID_FROM_LOCKFILE" == "$CURRENT_UUID_FROM_PS" ]; then 
                echo "Script already running with following identification: $CURRENT_UUID_FROM_LOCKFILE" >&2
                return 1
            else
                # The process that created this lock file died an ungraceful death. 
                # Take ownership of the lock file.
                echo "The process $CURRENT_UUID_FROM_LOCKFILE is no longer around. Taking ownership of $LOCKFILE"
                release "FORCE"
                if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
                    ACQUIRED="TRUE"
                    return 0
                else
                    echo "Cannot write to $LOCKFILE. Error." >&2
                    return 1
                fi
            fi
        else
            echo "Do you have write permissons to $LOCKFILE ?" >&2
            return 1
        fi
    fi
}

# Removes the lock file only if this script created it ($ACQUIRED is set), 
# OR, if we are removing a stale lock file (first parameter is "FORCE") 
release () {
    #Destroy lock file. Take no prisoners.
    if [ "$ACQUIRED" ] || [ "$1" == "FORCE" ]; then
        rm -f $LOCKFILE
    fi
}

# Test code
# int main( int argc, const char* argv[] )
echo "Acquring lock."
acquire
if [ $? -eq 0 ]; then 
    echo "Acquired lock."
    read -p "Press [Enter] key to release lock..."
    release
    echo "Released lock."
else
    echo "Unable to acquire lock."
fi

Add this line at the beginning of your script

[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :

It's boilerplate code from man flock.

If you want more logging, use this one

[ "${FLOCKER}" != "$0" ] && { echo "Trying to start build from queue... "; exec bash -c "FLOCKER='$0' flock -E $E_LOCKED -en '$0' '$0' '$@' || if [ \"\$?\" -eq $E_LOCKED ]; then echo 'Locked.'; fi"; } || echo "Lock is free. Completing."

This sets and checks locks using the flock utility. The code detects whether this is the first run by checking the FLOCKER variable: if it is not set to the script name, the script re-executes itself recursively under flock with FLOCKER initialized; if FLOCKER is set correctly, the flock in the previous iteration succeeded and it is OK to proceed. If the lock is busy, it fails with a configurable exit code (the E_LOCKED variable in the logging variant must be defined beforehand).

It does not seem to work on Debian 7 (it fails with "flock: ... Text file busy"), but works again with the experimental util-linux 2.25 package. The problem can be worked around by removing write permission from your script.


This will work, if your script name is unique:

#!/bin/bash
if [ $(pgrep -c $(basename $0)) -gt 1 ]; then 
  echo $(basename $0) is already running
  exit 0
fi

If the scriptname is not unique, this works on most linux distributions:

#!/bin/bash
exec 9>/tmp/my_lock_file
if ! flock -n 9  ; then
   echo "another instance of this script is already running";
   exit 1
fi

source: http://mywiki.wooledge.org/BashFAQ/045


The existing answers posted either rely on the CLI utility flock or do not properly secure the lock file. The flock utility is not available on all non-Linux systems (e.g. FreeBSD), and does not work properly on NFS.

In my early days of system administration and system development, I was told that a safe and relatively portable method of creating a lock file was to create a temp file using mkstemp(3) or mktemp(1), write identifying information to the temp file (e.g. the PID), then hard link the temp file to the lock file. If the link was successful, then you have successfully obtained the lock.

When using locks in shell scripts, I typically place an obtain_lock() function in a shared profile and then source it from the scripts. Below is an example of my lock function:

obtain_lock()
{
  LOCK="${1}"
  LOCKDIR="$(dirname "${LOCK}")"
  LOCKFILE="$(basename "${LOCK}")"

  # create temp lock file
  TMPLOCK=$(mktemp -p "${LOCKDIR}" "${LOCKFILE}XXXXXX" 2> /dev/null)
  if test "x${TMPLOCK}" == "x";then
     echo "unable to create temporary file with mktemp" 1>&2
     return 1
  fi
  echo "$$" > "${TMPLOCK}"

  # attempt to obtain lock file
  ln "${TMPLOCK}" "${LOCK}" 2> /dev/null
  if test $? -ne 0;then
     rm -f "${TMPLOCK}"
     echo "unable to obtain lockfile" 1>&2
     if test -f "${LOCK}";then
        echo "current lock information held by: $(cat "${LOCK}")" 1>&2
     fi
     return 2
  fi
  rm -f "${TMPLOCK}"

  return 0;
};

The following is an example of how to use the lock function:

#!/bin/sh

. /path/to/locking/profile.sh
PROG_LOCKFILE="/tmp/myprog.lock"

clean_up()
{
  rm -f "${PROG_LOCKFILE}"
}

obtain_lock "${PROG_LOCKFILE}"
if test $? -ne 0;then
   exit 1
fi
trap clean_up SIGHUP SIGINT SIGTERM

# bulk of script

clean_up
exit 0
# end of script

Remember to call clean_up at any exit points in your script.

I've used the above in both Linux and FreeBSD environments.


The flock path is the way to go. Think about what happens when the script suddenly dies. In the flock case you just lose the flock, but that is not a problem. Also, note that an evil trick is to take a flock on the script itself .. but that of course lets you run full-steam-ahead into permission problems.
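
A sketch of that trick - the script locks its own file, so no separate lock file is needed (fd 9 is an arbitrary descriptor):

#!/bin/bash
exec 9< "$0"   # open this script read-only on fd 9
flock -n 9 || { echo "already running" >&2; exit 1; }

# ... rest of the script; the lock is released when the process exits ...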


I wanted to do away with lockfiles, lockdirs, special locking programs and even pidof, since it isn't found on all Linux installations. I also wanted to have the simplest code possible (or at least as few lines as possible). The simplest if statement, in one line:

if [[ $(ps axf | awk -v pid=$$ '$1!=pid && $6~/'$(basename $0)'/{print $1}') ]]; then echo "Already running"; exit; fi

Actually, although the answer of bmdhacks is almost good, there is a slight chance that the second script runs after the first has checked the lockfile but before it has written it. So both will write the lock file and both will be running. Here is how to make it work for sure:

lockfile=/var/lock/myscript.lock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null ; then
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
else
  # or you can decide to skip the "else" part if you want
  echo "Another instance is already running!"
fi

The noclobber option makes sure that the redirect command fails if the file already exists. So the redirect is actually atomic - you write and check the file with one command. You don't need to remove the lockfile at the end of the script - it'll be removed by the trap. I hope this helps people who read it later.

P.S. I didn't see that Mikel had already answered the question correctly, although he didn't include the trap command to reduce the chance that the lock file will be left over after stopping the script with Ctrl-C, for example. So this is the complete solution.


Some unixes have lockfile which is very similar to the already mentioned flock.

From the manpage:

lockfile can be used to create one or more semaphore files. If lockfile can't create all the specified files (in the specified order), it waits sleeptime (defaults to 8) seconds and retries the last file that didn't succeed. You can specify the number of retries to do until failure is returned. If the number of retries is -1 (default, i.e., -r-1) lockfile will retry forever.
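
A usage sketch with the options described above (the lock path is illustrative):

# -r 0: no retries - fail immediately if the semaphore file already exists
lockfile -r 0 /tmp/myscript.lock || exit 1

# ... work ...

rm -f /tmp/myscript.lock   # lockfile creates the file read-only, hence rm -f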


Why don't we use something like

pgrep -f $cmd || $cmd

Take a look at FLOM (Free LOck Manager) http://sourceforge.net/projects/flom/: you can synchronize commands and/or scripts using abstract resources that do not need lock files in a filesystem. You can even synchronize commands running on different systems without a NAS (Network Attached Storage) such as an NFS (Network File System) server.

Using the simplest use case, serializing "command1" and "command2" may be as easy as executing:

flom -- command1

and

flom -- command2

from two different shell scripts.


Create a lock file in a known location and check for existence on script start? Putting the PID in the file might be helpful if someone's attempting to track down an errant instance that's preventing execution of the script.


This example is explained in man flock, but it needs some improvements, because we should manage errors and exit codes:

#!/bin/bash
# set -e is not used here: it aborts the script whenever any command exits with a nonzero
# status, with no chance to capture exit codes - and not every nonzero exit is a failure.

( #start subprocess
  # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
  flock -x -w 10 200
  if [ "$?" != "0" ]; then echo Cannot lock!; exit 1; fi
  echo $$>>/var/lock/.myscript.exclusivelock # for backward lockfile compatibility; note this line runs AFTER the redirection at the bottom: ) 200>/var/lock/.myscript.exclusivelock
  # Do stuff
  # you can properly manage exit codes across multiple commands and processes.
  # I suggest moving all of this into an external procedure that can properly handle exit codes.

) 200>/var/lock/.myscript.exclusivelock   #exit subprocess

FLOCKEXIT=$?  #save exitcode status
    #do some finish commands

exit $FLOCKEXIT   # return the exit code properly; may be useful inside external scripts

You can use another method that I have used in the past: listing processes. But it is more complicated than the method above: you list processes with ps, filter by the script name, add grep -v grep to remove the grep process itself, count the matches with grep -c, and compare the number. It is complicated and unreliable.
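
A rough sketch of that ps-based counting approach, for completeness (fragile, as noted; the pattern matched against the ps output is illustrative):

# the running instance itself counts as one match
count=$(ps -ef | grep "$(basename "$0")" | grep -v grep | grep -c .)
if [ "$count" -gt 1 ]; then
    echo "another instance appears to be running" >&2
    exit 1
fi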


To make locking reliable you need an atomic operation. Many of the above proposals are not atomic. The proposed lockfile(1) utility looks promising, since, as its man page mentions, it is "NFS-resistant". If your OS does not provide lockfile(1) and your solution has to work on NFS, you do not have many options...

NFSv2 has two atomic operations:

  • symlink
  • rename

With NFSv3 the create call is also atomic.

Directory operations are NOT atomic under NFSv2 and NFSv3 (please refer to the book 'NFS Illustrated' by Brent Callaghan, ISBN 0-201-32570-5; Brent is a NFS-veteran at Sun).

Knowing this, you can implement spin-locks for files and directories (in shell, not PHP):

lock current dir:

while ! ln -s . lock; do :; done

lock a file:

while ! ln -s ${f} ${f}.lock; do :; done

unlock current dir (assumption, the running process really acquired the lock):

mv lock deleteme && rm deleteme

unlock a file (assumption, the running process really acquired the lock):

mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme

Remove is also not atomic, therefore first the rename (which is atomic) and then the remove.

For the symlink and rename calls, both filenames have to reside on the same filesystem. My proposal: use only simple filenames (no paths) and put file and lock into the same directory.


Here's an approach that combines atomic directory locking with a check for stale lock via PID and restart if stale. Also, this does not rely on any bashisms.

#!/bin/dash

SCRIPTNAME=$(basename $0)
LOCKDIR="/var/lock/${SCRIPTNAME}"
PIDFILE="${LOCKDIR}/pid"

if ! mkdir $LOCKDIR 2>/dev/null
then
    # lock failed, but check for stale one by checking if the PID is really existing
    PID=$(cat $PIDFILE)
    if ! kill -0 $PID 2>/dev/null
    then
       echo "Removing stale lock of nonexistent PID ${PID}" >&2
       rm -rf $LOCKDIR
       echo "Restarting myself (${SCRIPTNAME})" >&2
       exec "$0" "$@"
    fi
    echo "$SCRIPTNAME is already running, bailing out" >&2
    exit 1
else
    # lock successfully acquired, save PID
    echo $$ > $PIDFILE
fi

trap "rm -rf ${LOCKDIR}" QUIT INT TERM EXIT


echo hello

sleep 30s

echo bye

Try something like the below,

ab=`ps -ef | grep -v grep | grep -wc processname`

Then compare the variable with 1 using an if statement.
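
A sketch of that comparison (processname is a placeholder for however the script shows up in ps; the running instance itself counts as one match):

ab=$(ps -ef | grep -v grep | grep -wc processname)
if [ "$ab" -gt 1 ]; then
    echo "processname is already running" >&2
    exit 1
fi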


# exit unless exactly one process (this script itself) has the script file open
if [ 1 -ne $(/bin/fuser "$0" 2>/dev/null | wc -w) ]; then
    exit 1
fi

I have not found this mentioned anywhere: it uses read, and I don't know for certain whether read is actually atomic, but it has served me well so far. It is nice because it uses only bash builtins. This is an in-process implementation: you start the locker coprocess and use its I/O to manage locks. The same can be done between processes by simply swapping the target I/O from the locker file descriptors to file descriptors on the filesystem (exec 3<>/file && exec 4</file).

## gives locks
locker() {
    locked=false
    while read l; do
        case "$l" in
            lock)
                if $locked; then
                    echo false
                else
                    locked=true
                    echo true
                fi
                ;;
            unlock)
                if $locked; then
                    locked=false
                    echo true
                else
                    echo false
                fi
                ;;
            *)
                echo false
                ;;
        esac
    done
}
## locks
lock() {
    local response
    echo lock >&${locker[1]}
    read -ru ${locker[0]} response
    $response && return 0 || return 1
}

## unlocks
unlock() {
    local response
    echo unlock >&${locker[1]}
    read -ru ${locker[0]} response
    $response && return 0 || return 1
}
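
A usage sketch for the in-process variant, assuming bash 4+ (for coproc); the coprocess supplies the ${locker[0]}/${locker[1]} descriptors that lock and unlock read from and write to:

coproc locker { locker; }   # start the lock-manager coprocess defined above

if lock; then
    echo "got the lock"
    # ... critical section ...
    unlock
else
    echo "could not get the lock"
fi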

You can use GNU Parallel for this as it works as a mutex when called as sem. So, in concrete terms, you can use:

sem --id SCRIPTSINGLETON yourScript

If you want a timeout too, use:

sem --id SCRIPTSINGLETON --semaphoretimeout -10 yourScript

Timeout of <0 means exit without running script if semaphore is not released within the timeout, timeout of >0 mean run the script anyway.

Note that you should give it a name (with --id) else it defaults to the controlling terminal.

GNU Parallel is a very simple install on most Linux/OSX/Unix platforms - it is just a Perl script.


If flock's limitations, which have already been described elsewhere on this thread, aren't an issue for you, then this should work:

#!/bin/bash

{
    # exit if we are unable to obtain a lock; this would happen if 
    # the script is already running elsewhere
    # note: -x (exclusive) is the default
    flock -n 100 || exit

    # put commands to run here
    sleep 100
} 100>/tmp/myjob.lock 

Create a lock file in a known location and check for existence on script start? Putting the PID in the file might be helpful if someone's attempting to track down an errant instance that's preventing execution of the script.


To make locking reliable you need an atomic operation. Many of the above proposals are not atomic. The proposed lockfile(1) utility looks promising as the man-page mentioned, that its "NFS-resistant". If your OS does not support lockfile(1) and your solution has to work on NFS, you have not many options....

NFSv2 has two atomic operations:

  • symlink
  • rename

With NFSv3 the create call is also atomic.

Directory operations are NOT atomic under NFSv2 and NFSv3 (please refer to the book 'NFS Illustrated' by Brent Callaghan, ISBN 0-201-32570-5; Brent is a NFS-veteran at Sun).

Knowing this, you can implement spin-locks for files and directories (in shell, not PHP):

lock current dir:

while ! ln -s . lock; do :; done

lock a file:

while ! ln -s ${f} ${f}.lock; do :; done

unlock current dir (assumption, the running process really acquired the lock):

mv lock deleteme && rm deleteme

unlock a file (assumption, the running process really acquired the lock):

mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme

Remove is also not atomic, therefore first the rename (which is atomic) and then the remove.

For the symlink and rename calls, both filenames have to reside on the same filesystem. My proposal: use only simple filenames (no paths) and put file and lock into the same directory.


I wanted to do away with lockfiles, lockdirs, special locking programs and even pidof since it isn't found on all Linux installations. Also wanted to have the simplest code possible (or at least as few lines as possible). Simplest if statement, in one line:

if [[ $(ps axf | awk -v pid=$$ '$1!=pid && $6~/'$(basename $0)'/{print $1}') ]]; then echo "Already running"; exit; fi

Some unixes have lockfile which is very similar to the already mentioned flock.

From the manpage:

lockfile can be used to create one or more semaphore files. If lockfile can't create all the specified files (in the specified order), it waits sleeptime (defaults to 8) seconds and retries the last file that didn't succeed. You can specify the number of retries to do until failure is returned. If the number of retries is -1 (default, i.e., -r-1) lockfile will retry forever.
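A typical invocation, assuming procmail's lockfile is installed (here -r0 means do not retry at all, i.e. fail immediately if the lock is already held; the lock path is illustrative):

#!/bin/sh
lockfile -r0 /tmp/myscript.lock || exit 1

# ... do the work ...

rm -f /tmp/myscript.lock    # lockfile typically creates the file read-only, so rm -f is needed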


The flock path is the way to go. Think about what happens when the script suddenly dies. In the flock case you just lose the lock, but that is not a problem. Also, note that an evil trick is to take a flock on the script itself... but that of course lets you run full-steam-ahead into permission problems.


Here is a more elegant, fail-safe, quick & dirty method, combining the answers provided above.

Usage

  1. include sh_lock_functions.sh
  2. init using sh_lock_init
  3. lock using sh_acquire_lock
  4. check lock using sh_check_lock
  5. unlock using sh_remove_lock

Script File

sh_lock_functions.sh

#!/bin/bash

function sh_lock_init {
    sh_lock_scriptName=$(basename $0)
    sh_lock_dir="/tmp/${sh_lock_scriptName}.lock" #lock directory
    sh_lock_file="${sh_lock_dir}/lockPid.txt" #lock file
}

function sh_acquire_lock {
    if mkdir $sh_lock_dir 2>/dev/null; then #check for lock
        echo "$sh_lock_scriptName lock acquired successfully.">&2
        touch $sh_lock_file
        echo $$ > $sh_lock_file # set current pid in lockFile
        return 0
    else
        touch $sh_lock_file
        read sh_lock_lastPID < $sh_lock_file
        if [ ! -z "$sh_lock_lastPID" -a -d /proc/$sh_lock_lastPID ]; then # if lastPID is not null and a process with that pid exists
            echo "$sh_lock_scriptName is already running.">&2
            return 1
        else
            echo "$sh_lock_scriptName stopped during execution, reacquiring lock.">&2
            echo $$ > $sh_lock_file # set current pid in lockFile
            return 2
        fi
    fi
    return 0
}

function sh_check_lock {
    [[ ! -f $sh_lock_file ]] && echo "$sh_lock_scriptName lock file removed.">&2 && return 1
    read sh_lock_lastPID < $sh_lock_file
    [[ $sh_lock_lastPID -ne $$ ]] && echo "$sh_lock_scriptName lock file pid has changed.">&2  && return 2
    echo "$sh_lock_scriptName lock still in place.">&2
    return 0
}

function sh_remove_lock {
    rm -r $sh_lock_dir
}

Usage example

sh_lock_usage_example.sh

#!/bin/bash
. /path/to/sh_lock_functions.sh # load sh lock functions

sh_lock_init || exit $?

sh_acquire_lock
lockStatus=$?
[[ $lockStatus -eq 1 ]] && exit $lockStatus
[[ $lockStatus -eq 2 ]] && echo "lock is set, do some resume from crash procedures";

#monitoring example
cnt=0
while sh_check_lock # loop while lock is in place
do
    echo "$sh_scriptName running (pid $$)"
    sleep 1
    let cnt++
    [[ $cnt -gt 5 ]] && break
done

#remove lock when process finished
sh_remove_lock || exit $?

exit 0

Features

  • Uses a combination of file, directory and process id to lock to make sure that the process is not already running
  • You can detect if the script stopped before lock removal (e.g. process kill, shutdown, error, etc.)
  • You can check the lock file, and use it to trigger a process shutdown when the lock is missing
  • Verbose; outputs error messages for easier debugging

PID and lockfiles are definitely the most reliable. When you attempt to run the program, it can check for the lockfile and, if it exists, use ps to see if the process is still running. If it's not, the script can start, updating the PID in the lockfile to its own.
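As a sketch of that approach (path is illustrative), using ps to decide whether the recorded PID is still alive:

#!/bin/bash
pidfile=/var/run/myscript.pid

if [ -f "$pidfile" ] && ps -p "$(cat "$pidfile")" > /dev/null 2>&1; then
    echo "Already running (PID $(cat "$pidfile"))" >&2
    exit 1
fi

echo $$ > "$pidfile"             # take over (or refresh) the lock with our own PID
trap 'rm -f "$pidfile"' EXIT

# ... script body ...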


There's a wrapper around the flock(2) system call called, unimaginatively, flock(1). This makes it relatively easy to reliably obtain exclusive locks without worrying about cleanup etc. There are examples on the man page as to how to use it in a shell script.
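For illustration, one of the simpler forms from the man page lets flock run the command itself, so there is no redirection boilerplate (lock path and command are placeholders):

# take an exclusive lock on /tmp/myscript.lock, or exit immediately (-n)
# if another instance already holds it, then run the command
flock -n /tmp/myscript.lock -c '/usr/local/bin/myscript --real-work'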


if [ 1 -ne $(/bin/fuser "$0" 2>/dev/null | wc -w) ]; then
    exit 1
fi

Why don't we use something like

pgrep -f $cmd || $cmd

The semaphoric utility uses flock (as discussed above, e.g. by presto8) to implement a counting semaphore. It enables any specific number of concurrent processes you want. We use it to limit the level of concurrency of various queue worker processes.

It's like sem but much lighter-weight. (Full disclosure: I wrote it after finding the sem was way too heavy for our needs and there wasn't a simple counting semaphore utility available.)


Using a lock held by the process itself is much stronger and also takes care of ungraceful exits. lock_file is kept open as long as the process is running. It will be closed (by the shell) once the process exits (even if it gets killed). I found this to be very efficient:

lock_file=/tmp/`basename $0`.lock

if fuser $lock_file > /dev/null 2>&1; then
    echo "WARNING: Other instance of $(basename $0) running."
    exit 1
fi
exec 3> $lock_file 


Answered a million times already, but another way, without the need for external dependencies:

LOCK_FILE="/var/lock/$(basename "$0").pid"
trap "rm -f ${LOCK_FILE}; exit" INT TERM EXIT
if [[ -f $LOCK_FILE && -d /proc/`cat $LOCK_FILE` ]]; then
   # Process already exists
   exit 1
fi
echo $$ > $LOCK_FILE

On each run it checks whether a process with the PID stored in the lockfile is still running; if not, it writes its own PID ($$) into the lockfile and continues.


This example is explained in man flock, but it needs some improvements, because we should handle errors and exit codes:

#!/bin/bash
# set -e is intentionally not used: it aborts the script as soon as any command
# exits with a non-zero status, with no chance to capture exit codes, and not
# every non-zero exit is a failure.

( #start subprocess
  # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
  flock -x -w 10 200
  if [ "$?" != "0" ]; then echo Cannot lock!; exit 1; fi
  echo $$>>/var/lock/.myscript.exclusivelock # record our PID for backward lockdir compatibility; note this line runs only AFTER the redirection at the bottom, 200>/var/lock/.myscript.exclusivelock, has opened the lock file
  # Do stuff
  # you can properly manage exit codes here, even with multiple commands;
  # I suggest moving all of this into an external procedure that can handle the exit codes properly

) 200>/var/lock/.myscript.exclusivelock   #exit subprocess

FLOCKEXIT=$?  #save exitcode status
    #do some finish commands

exit $FLOCKEXIT   # propagate the exit code properly; may be useful inside external scripts

You can use another method: listing processes, which I used in the past. But it is more complicated than the method above. You list processes with ps, filter by the script's name, add grep -v grep to remove the grep process itself, count the matches with grep -c and compare the result with a number. It is complicated and unreliable.


Here's an approach that combines atomic directory locking with a check for stale lock via PID and restart if stale. Also, this does not rely on any bashisms.

#!/bin/dash

SCRIPTNAME=$(basename $0)
LOCKDIR="/var/lock/${SCRIPTNAME}"
PIDFILE="${LOCKDIR}/pid"

if ! mkdir $LOCKDIR 2>/dev/null
then
    # lock failed; check for a stale lock by testing whether the recorded PID still exists
    PID=$(cat $PIDFILE)
    if ! kill -0 $PID 2>/dev/null
    then
       echo "Removing stale lock of nonexistent PID ${PID}" >&2
       rm -rf $LOCKDIR
       echo "Restarting myself (${SCRIPTNAME})" >&2
       exec "$0" "$@"
    fi
    echo "$SCRIPTNAME is already running, bailing out" >&2
    exit 1
else
    # lock successfully acquired, save PID
    echo $$ > $PIDFILE
fi

trap "rm -rf ${LOCKDIR}" QUIT INT TERM EXIT


echo hello

sleep 30s

echo bye

I use a one-liner at the very beginning of the script:

#!/bin/bash

if [[ $(pgrep -afc "$(basename "$0")") -gt "1" ]]; then echo "Another instance of $0 has already been started!" && exit; fi
.
the_beginning_of_actual_script

It only checks that the process is present in memory (regardless of what state the process is in), but it does the job for me.


Actually, although the answer of bmdhacks is almost good, there is a slight chance that the second script runs after the first has checked the lockfile but before it has written it. So both will write the lock file and both will be running. Here is how to make it work for sure:

lockfile=/var/lock/myscript.lock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null ; then
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
else
  # or you can decide to skip the "else" part if you want
  echo "Another instance is already running!"
fi

The noclobber option makes sure that the redirect command will fail if the file already exists. So the redirect command is actually atomic - you write and check the file with one command. You don't need to remove the lockfile at the end of the script - it'll be removed by the trap. I hope this helps people who read it later.

P.S. I didn't see that Mikel already answered the question correctly, although he didn't include the trap command, which reduces the chance of the lock file being left over after the script is stopped with Ctrl-C, for example. So this is the complete solution.


This will work, if your script name is unique:

#!/bin/bash
if [ $(pgrep -c $(basename $0)) -gt 1 ]; then 
  echo $(basename $0) is already running
  exit 0
fi

If the scriptname is not unique, this works on most linux distributions:

#!/bin/bash
exec 9>/tmp/my_lock_file
if ! flock -n 9  ; then
   echo "another instance of this script is already running";
   exit 1
fi

source: http://mywiki.wooledge.org/BashFAQ/045


Quick and dirty?

#!/bin/sh

if [ -f sometempfile ]; then
  echo "Already running... will now terminate."
  exit
else
  touch sometempfile
fi

..do what you want here..

rm sometempfile

I use a simple approach that handles stale lock files.

Note that some of the above solutions that store the pid, ignore the fact that the pid can wrap around. So - just checking if there is a valid process with the stored pid is not enough, especially for long running scripts.

I use noclobber to make sure only one script can open and write to the lock file at one time. Further, I store enough information to uniquely identify a process in the lockfile. I define the set of data to uniquely identify a process to be pid,ppid,lstart.

When a new script starts up, if it fails to create the lock file, it then verifies that the process that created the lock file is still around. If not, we assume the original process died an ungraceful death and left a stale lock file. The new script then takes ownership of the lock file, and all is well in the world again.

Should work with multiple shells across multiple platforms. Fast, portable and simple.

#!/usr/bin/env sh
# Author: rouble

LOCKFILE=/var/tmp/lockfile #customize this line

trap release INT TERM EXIT

# Creates a lockfile. Sets global variable $ACQUIRED to true on success.
# 
# Returns 0 if it is successfully able to create lockfile.
acquire () {
    set -C #Shell noclobber option. If file exists, > will fail.
    UUID=`ps -eo pid,ppid,lstart $$ | tail -1`
    if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
        ACQUIRED="TRUE"
        return 0
    else
        if [ -e $LOCKFILE ]; then 
            # We may be dealing with a stale lock file.
            # Bring out the magnifying glass. 
            CURRENT_UUID_FROM_LOCKFILE=`cat $LOCKFILE`
            CURRENT_PID_FROM_LOCKFILE=`cat $LOCKFILE | cut -f 1 -d " "`
            CURRENT_UUID_FROM_PS=`ps -eo pid,ppid,lstart $CURRENT_PID_FROM_LOCKFILE | tail -1`
            if [ "$CURRENT_UUID_FROM_LOCKFILE" == "$CURRENT_UUID_FROM_PS" ]; then 
                echo "Script already running with following identification: $CURRENT_UUID_FROM_LOCKFILE" >&2
                return 1
            else
                # The process that created this lock file died an ungraceful death. 
                # Take ownership of the lock file.
                echo "The process $CURRENT_UUID_FROM_LOCKFILE is no longer around. Taking ownership of $LOCKFILE"
                release "FORCE"
                if (echo "$UUID" > "$LOCKFILE") 2>/dev/null; then
                    ACQUIRED="TRUE"
                    return 0
                else
                    echo "Cannot write to $LOCKFILE. Error." >&2
                    return 1
                fi
            fi
        else
            echo "Do you have write permissons to $LOCKFILE ?" >&2
            return 1
        fi
    fi
}

# Removes the lock file only if this script created it ($ACQUIRED is set), 
# OR, if we are removing a stale lock file (first parameter is "FORCE") 
release () {
    #Destroy lock file. Take no prisoners.
    if [ "$ACQUIRED" ] || [ "$1" == "FORCE" ]; then
        rm -f $LOCKFILE
    fi
}

# Test code
# int main( int argc, const char* argv[] )
echo "Acquring lock."
acquire
if [ $? -eq 0 ]; then 
    echo "Acquired lock."
    read -p "Press [Enter] key to release lock..."
    release
    echo "Released lock."
else
    echo "Unable to acquire lock."
fi


Try something like the below,

ab=`ps -ef | grep -v grep | grep -wc processname`

Then compare the variable with 1 using an if statement, as sketched below.
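For instance (processname is a placeholder; note that if this check runs inside the script being counted, the script itself is one match, so bail out only when the count is greater than 1):

ab=`ps -ef | grep -v grep | grep -wc processname`
if [ "$ab" -gt 1 ]; then
    echo "processname is already running"
    exit 1
fi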

