[linux] How do I request a file but not save it with Wget?

I'm using Wget to make HTTP requests to a fresh web server. I am doing this to warm the MySQL cache. I do not want to save the files after they are served.

wget -nv -do-not-save-file $url

Can I do something like -do-not-save-file with wget?

This question is related to: linux, caching, wget

The answer is

Curl does that by default, without any extra parameters or flags, so I would use it for your purpose:

curl $url > /dev/null 2>&1

Based on this comparison, curl is more oriented toward streams, while wget is more oriented toward copying sites.
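
If you want to warm several pages in one go, a minimal sketch along the same lines (the urls.txt file name is just an assumption for illustration, one URL per line) could be:

# Request each URL once and throw the body away;
# -s silences the progress meter, -o /dev/null discards the response body
while read -r url; do
  curl -s -o /dev/null "$url"
done < urls.txt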


You can use -O- (uppercase O) to redirect the content to stdout (standard output), or -O with a file name to redirect it to a file (even special files like /dev/null, /dev/stderr, or /dev/stdout):

wget -O- http://yourdomain.com

Or:

wget -O- http://yourdomain.com > /dev/null

Or (same result as the previous command):

wget -O/dev/null http://yourdomain.com
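
And if you have a whole list of pages to warm, a minimal sketch (again assuming a hypothetical urls.txt with one URL per line) can combine this with wget's -q and -i options:

# -q suppresses output, -O /dev/null discards every response body,
# -i urls.txt reads the URLs to request from the (hypothetical) file
wget -q -O /dev/null -i urls.txt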
