[linux] Does WGET timeout?

I'm running a PHP script via cron using Wget, with the following command:

wget -O - -q -t 1 http://www.example.com/cron/run

The script will take a maximum of 5-6 minutes to do its processing. Will WGet wait for it and give it all the time it needs, or will it time out?

According to the wget man page, there are a couple of options related to timeouts -- and there is a default read timeout of 900s -- so I say that, yes, it could time out.


Here are the options in question (a short example follows their descriptions):

-T seconds
--timeout=seconds

Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.


And for those three options:

--dns-timeout=seconds

Set the DNS lookup timeout to seconds seconds.
DNS lookups that don't complete within the specified time will fail.
By default, there is no timeout on DNS lookups, other than that implemented by system libraries.

--connect-timeout=seconds

Set the connect timeout to seconds seconds.
TCP connections that take longer to establish will be aborted.
By default, there is no connect timeout, other than that implemented by system libraries.

--read-timeout=seconds

Set the read (and write) timeout to seconds seconds.
The "time" of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted.
This option does not directly affect the duration of the entire download.
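
To make that concrete, here is a rough sketch of what the combined flag expands to; the URL is just the one from the question, and 600 seconds is an arbitrary value:

# setting all three timeouts at once with -T ...
wget -O - -q -t 1 -T 600 http://www.example.com/cron/run

# ... is the same as spelling them out individually:
wget -O - -q -t 1 --dns-timeout=600 --connect-timeout=600 --read-timeout=600 http://www.example.com/cron/run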


I suppose using something like

wget -O - -q -t 1 --timeout=600 http://www.example.com/cron/run

should make sure wget does not time out before your script has finished.

(Yeah, that's probably the most brutal solution possible ^^ )
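
As a rough sketch, the corresponding crontab entry might look like this (the schedule is only an example; adjust it to your own setup):

# run every night at 02:00, allowing up to 10 minutes of idle time
0 2 * * * wget -O - -q -t 1 --timeout=600 http://www.example.com/cron/run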


Since you said in your question that it's a PHP script, the best solution might simply be to add this to your script:

ignore_user_abort(TRUE);

This way, even if wget terminates, the PHP script keeps running, at least until it exceeds the max_execution_time limit (an ini directive; 30 seconds by default).
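
A minimal sketch of the script side, assuming you also want to raise the execution limit with set_time_limit() rather than rely on the 30-second default:

<?php
// keep processing even if the client (wget) disconnects or times out
ignore_user_abort(true);

// raise the limit to 10 minutes (0 would mean "no limit"); adjust as needed
set_time_limit(600);

// ... the long-running cron work goes here ...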

As for wget, anyway, you should not need to change its timeout: according to the manual, the default wget read timeout is 900 seconds (15 minutes), which is much longer than the 5-6 minutes you need.


The default timeout is 900 seconds. You can specify a different timeout:

-T seconds
--timeout=seconds

The default is to retry 20 times. You can specify a different number of tries (see the combined example below):

-t number
--tries=number
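
Combining the two, a single-attempt call with a 10-minute timeout (the same idea as the command above, just written with the long-form flags) would be:

wget -O - -q --tries=1 --timeout=600 http://www.example.com/cron/run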

Link: the wget man page


Prior to version 1.14, wget's timeout arguments were not honored when downloading over HTTPS, due to a bug.

