[wget] How do I use Wget to download all images into a single folder, from a URL?

I am using wget to download all images from a website and it works fine but it stores the original hierarchy of the site with all the subfolders and so the images are dotted around. Is there a way so that it will just download all the images into a single folder? The syntax I'm using at the moment is:

wget -r -A jpeg,jpg,bmp,gif,png http://www.somedomain.com


Answers:


The wget utility retrieves files from the World Wide Web using widely used protocols such as HTTP, HTTPS, and FTP. It is freely available under the GNU GPL and can be installed on any Unix-like operating system, as well as on Windows and macOS. It is a non-interactive command-line tool. Wget's main strength is its robustness: it is designed to work over slow or unstable network connections, automatically resumes a download where it left off after a network problem, and keeps retrying until the file has been retrieved completely. It can also download files recursively.
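For example, to resume a partially downloaded file and retry without limit, you can combine -c (continue) with -t 0 (unlimited tries); the URL here is just a placeholder:

wget -c -t 0 http://www.somedomain.com/images.zip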

Install wget on a Linux machine:

sudo apt-get install wget
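The package manager differs on other systems; for example, assuming the usual package repositories are available:

sudo yum install wget    # RHEL/CentOS/Fedora
brew install wget        # macOS with Homebrew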

Create a folder where you want to download the files:

sudo mkdir myimages
cd myimages

To find an image's URL, right-click the image on the webpage and copy the image location. If there are multiple images, follow the approach below.

If there are 20 sequentially numbered images to download at once, the range runs from 0 to 19:

wget http://joindiaspora.com/img{0..19}.jpg
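Brace expansion only helps when the filenames follow a fixed numeric pattern. A shell loop is a minimal sketch of the same idea and leaves room for error handling (same placeholder URL as above):

for i in $(seq 0 19); do
    wget "http://joindiaspora.com/img${i}.jpg"
done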


According to the man page, the -P flag is:

-P prefix
--directory-prefix=prefix
    Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).

This means that -P only specifies the destination, i.e. where to save the directory tree. It does not flatten the tree into just one directory; as mentioned before, the -nd flag actually does that.
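For illustration, with a placeholder URL and directory, the two flags compare like this:

# keeps the site's directory tree, rooted under /tmp/images
wget -r -P /tmp/images -A jpg http://www.somedomain.com
# flattens everything directly into /tmp/images
wget -nd -r -P /tmp/images -A jpg http://www.somedomain.com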

@Jon, in the future it would be helpful to describe what the flag does, so we understand how the command works.


I wrote a shell script that solves this problem for multiple websites: https://github.com/eduardschaeli/wget-image-scraper

(Scrapes images from a list of urls with wget)
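The core of the idea is just looping wget over a list of URLs; a minimal sketch, assuming the list lives in a file called urls.txt (one URL per line):

while read -r url; do
    wget -nd -r -l 1 -A jpeg,jpg,bmp,gif,png "$url"
done < urls.txt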


Try this one:

wget -nd -r -P /save/location/ -A jpeg,jpg,bmp,gif,png http://www.domain.com

and wait while it works: wget downloads the HTML pages in order to follow their links, then deletes anything that does not match the accepted extensions, which is the extra information being cleaned up.
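If the starting URL points into a subdirectory and you want wget to stay below it, -np (--no-parent) keeps it from climbing up the tree; the URL and path here are placeholders:

wget -nd -np -r -P /save/location/ -A jpeg,jpg,bmp,gif,png http://www.domain.com/photos/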


wget -nd -r -l 2 -A jpg,jpeg,png,gif http://t.co
  • -nd: no directories (save all files to the current directory; -P directory changes the target directory)
  • -r -l 2: recursive level 2
  • -A: accepted extensions
wget -nd -H -p -A jpg,jpeg,png,gif -e robots=off example.tumblr.com/page/{1..2}
  • -H: span hosts (wget doesn't download files from different domains or subdomains by default)
  • -p: page requisites (includes resources like images on each page)
  • -e robots=off: execute the command robots=off as if it were part of the .wgetrc file. This turns off robot exclusion, which means wget ignores robots.txt and the robot meta tags (you should know the implications this comes with, so take care).

Example: get all .jpg files from a sample directory listing:

$ wget -nd -r -l 1 -A jpg http://example.com/listing/

The proposed solutions are perfect for downloading the images, and fine if saving all the files in the directory you are working in is enough for you. But if you want to save all the images in a specified directory without reproducing the site's entire hierarchical tree, try adding cut-dirs to the line proposed by Jon.

wget -r -nH -P /save/location -A jpeg,jpg,bmp,gif,png http://www.boia.de --cut-dirs=3

In this case --cut-dirs=3 prevents wget from recreating the first three levels of the site's directory tree, and -nH drops the hostname directory as well, so all the files land in the directory you specified. Use a higher number for sites with a deeper structure; note that repeating the flag is pointless, since only the last --cut-dirs value takes effect.
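As an illustration with a hypothetical remote path, a file fetched from http://www.boia.de/a/b/c/pic.jpg would be saved as:

/save/location/a/b/c/pic.jpg    # with -r -nH -P /save/location alone
/save/location/pic.jpg          # with --cut-dirs=3 added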