Assuming I'm a big Unix rookie:
I'm running a curl request through cron every 15 minutes.
curl is basically used to load a (PHP) web page that, given some arguments, acts as a script:
curl http://example.com/?update_=1
What I would like to achieve is to run another "script" using this curl technique. I have read that curl accepts multiple URLs in one command, but I'm unsure whether it would process the URLs sequentially or in parallel.
I think this approach uses more native capabilities:
# print the links to a file
$ echo "https://stackoverflow.com/questions/3110444/
https://stackoverflow.com/questions/8445445/
https://stackoverflow.com/questions/4875446/" > links_file.txt
$ xargs curl < links_file.txt
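If you ever want the URLs fetched in parallel instead of sequentially, GNU and BSD xargs can spawn one curl process per URL; a minimal sketch, assuming your xargs supports the (non-POSIX) -P option:
# one URL per curl process, up to 4 processes at a time
$ xargs -n 1 -P 4 curl -s < links_file.txt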
Enjoy!
You can specify any amount of URLs on the command line. They will be fetched in a sequential manner in the specified order.
Write a script with the two curl requests in the desired order and run it via cron, like:
#!/bin/bash
curl http://mysite.com/?update_=1
curl http://mysite.com/?the_other_thing
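For completeness, the crontab entry to run this every 15 minutes could look like the following (the path /path/to/script.sh is just a placeholder for wherever you save the script):
# m h dom mon dow   command
*/15 * * * * /path/to/script.sh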
Another crucial method not mentioned here is reusing the same TCP connection for multiple HTTP requests, with exactly one curl command for all of them.
This saves network bandwidth and client and server resources, and it removes the need for multiple curl commands, as curl by default closes the connection when the end of the command is reached.
Keeping the connection open and reusing it is very common for standard clients running a web app.
Starting with curl version 7.36.0, the --next (or -:) command-line option allows chaining multiple requests, and it is usable both on the command line and in scripts.
For example:
curl http://example.com/?update_=1 -: http://example.com/foo
curl http://example.com/?update_=1 -: -d "I am posting this string" http://example.com/?update_=2
curl -o 'my_output_file' http://example.com/?update_=1 -: -d "my_data" -s -m 10 http://example.com/foo -: -o /dev/null http://example.com/random
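To confirm the connection reuse, add -v to one of these; the verbose output should mention that curl is re-using the existing connection for the second request (the exact wording varies by curl version):
curl -v http://example.com/?update_=1 -: http://example.com/foo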
From the curl manpage:
-:, --next
Tells curl to use a separate operation for the following URL and associated options. This allows you to send several URL requests, each with their own specific options, for example, such as different user names or custom requests for each.
-:, --next will reset all local options and only global ones will have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early.
For example, you can do both a GET and a POST in a single command line:
curl www1.example.com --next -d postthis www2.example.com
Added in 7.36.0.
This will do what you want; it uses an input file and is fast:
#!/bin/bash
file=/path/to/input.txt

# read one URL per line, preserving the order of the file
while IFS= read -r line; do
  curl "${line}"
done < "${file}"
One entry per line in your input file; the requests will follow the order of the input file.
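For example, /path/to/input.txt could contain:
http://example.com/?update_=1
http://example.com/?update_=2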
Save it as whatever.sh and make it executable:
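chmod +x whatever.sh
./whatever.sh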
According to the curl man page:
You can specify any amount of URLs on the command line. They will be fetched in a sequential manner in the specified order.
So the simplest and most efficient approach (curl will send all requests to the same origin down a single TCP connection) would be to put them all on a single invocation of curl, e.g.:
curl http://example.com/?update_=1 http://example.com/?update_=2
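One caveat worth noting: ? is a shell glob character (and an & in a query string would send the command to the background), so it is safer to quote each URL:
curl 'http://example.com/?update_=1' 'http://example.com/?update_=2'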