Wget Trick to Download from Restrictive Sites


[Screenshots: wget hitting 403 Forbidden, then the same download succeeding after the trick]
I am often logged in to my servers via SSH and need to download a file, such as a WordPress plugin. I've noticed many sites now block robots like wget from accessing their files, usually with .htaccess rules. So a permanent workaround is to have wget mimic a normal browser.

Testing Wget Trick

To verify the trick is working, just add the -d (debug) option, like $ wget -O/dev/null -d, and check the request headers wget sends:

GET / HTTP/1.1
Referer: /
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Connection: keep-alive
Accept-Language: en-us,en;q=0.5

Wget Function

Add this function to your shell startup file. Rename it to wget if you want it to replace the plain wget command.

function wgets() {
  local H='--header'
  wget $H='Accept-Language: en-us,en;q=0.5' $H='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' $H='Connection: keep-alive' -U 'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2' --referer=/ "$@"
}
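As a sanity check, you can dry-run the function to see the exact command line it builds, without touching the network. This sketch parameterizes the wget binary via a WGET variable (an assumption added here for the dry run, not part of the original function) and uses http://example.com/sitemap.xml as a placeholder URL:

```shell
wgets() {
  # H shortens the repeated --header flag, as in the function above.
  local H='--header'
  # ${WGET:-wget} is an illustration-only hook: it defaults to the real
  # wget binary, but can be swapped for echo to print the command line.
  ${WGET:-wget} \
    "$H=Accept-Language: en-us,en;q=0.5" \
    "$H=Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
    "$H=Connection: keep-alive" \
    -U 'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2' \
    --referer=/ "$@"
}

# Dry run: print the full argument list instead of downloading.
WGET=echo wgets -dnv http://example.com/sitemap.xml
```

Running the last line shows every spoofed header, the User-Agent, and the referer that a real invocation would send.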

Wget alias

Add this to your .bash_profile or other shell startup script, or just type it at the prompt. Then run wgets from the command line as usual, e.g. wgets -dnv /sitemap.xml.

alias wgets='H="--header"; wget $H="Accept-Language: en-us,en;q=0.5" $H="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" $H="Connection: keep-alive" -U "Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2" --referer=/ '

Using custom .wgetrc

Alternatively, and probably the best way, you can just create or modify your $HOME/.wgetrc file like this (or download the sample and rename it to .wgetrc). Then run wget from the command line as usual, e.g. wget -dnv /sitemap.xml.

### Sample Wget initialization file .wgetrc
## Local settings (for a user to set in his $HOME/.wgetrc).  It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
header = Accept-Language: en-us,en;q=0.5
header = Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
header = Connection: keep-alive
user_agent = Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
referer = /
robots = off
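If you'd rather not change your main $HOME/.wgetrc, wget also honors the WGETRC environment variable, so the browser-mimicking settings can live in a separate file. A sketch (the filename ~/.wgetrc-firefox is an arbitrary choice for this example, and the URL is a placeholder):

```shell
# Keep the spoofing settings in a standalone rc file.
cat > ~/.wgetrc-firefox <<'EOF'
header = Accept-Language: en-us,en;q=0.5
header = Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
header = Connection: keep-alive
user_agent = Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
referer = /
robots = off
EOF

# Point wget at it only when you need the trick:
#   WGETRC=~/.wgetrc-firefox wget -dnv http://example.com/sitemap.xml
```

This keeps your default wget behavior untouched for everyday downloads.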

Other command line

wget --referer="" --user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/" \
  --header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" \
  --header="Accept-Language: en-us,en;q=0.5" --header="Accept-Encoding: gzip,deflate" \
  --header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" --header="Keep-Alive: 300" -dnv /sitemap.xml

Wget Alternative

Once you get tired of how basic wget is, start using curl, which is 100x better.
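For comparison, here is a rough curl equivalent of the same trick: -H sets a header, -A the User-Agent, and -e the Referer. The ${CURL:-curl} dry-run hook and the placeholder URL are assumptions for illustration, not part of the original article:

```shell
curl_fetch() {
  # ${CURL:-curl} defaults to the real curl binary; swap in echo to
  # inspect the argument list without making a network request.
  ${CURL:-curl} -sS -o /dev/null \
    -H 'Accept-Language: en-us,en;q=0.5' \
    -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
    -H 'Connection: keep-alive' \
    -A 'Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101 Firefox/10.0.2' \
    -e / \
    "$@"
}

# Dry run: print the full curl argument list (placeholder URL).
CURL=echo curl_fetch http://example.com/sitemap.xml
```

Drop the CURL=echo prefix to actually fetch.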


September 6th, 2007

Comments Welcome

  • Torrid Luna

    There's an uncommented feature "robots=off", if all else fails, you could add that to your list.


  • Deserio

    I was looking for "user agent"; works great for me, especially with rapidshare.

  • lien

    This can also be used to see if competitors' web sites are feeding the Google spider special food. Just set your user agent to Googlebot.

  • AskApache

    @ lien

    Nice idea there, haven't tried that yet, but I plan on it..

  • Tensigh

    Didn't work for me -- the "Accept" lines returned errors. *SIGH*

  • clutkin

    Advanced examples have existed for years on wget wikipedia. These examples are great, please update any errors if there are any. This website is now a reference wikipedia link Advanced Examples. PLEASE DO NOT BREAK THE LINKS BY RENAMING/MOVING THIS WEB PAGE. {tjc}

  • R Raman

    Thanks a TON for the very detailed examples. Not only was I able to get my error page (403 Forbidden page) using the command line example, but .wgetrc also worked like a charm.

  • tim

    When I use this, I get an error:

    --header=Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7: command not found

    Any idea why that might be happening?

  • Abhishek D

    Awesome !!

  • Swagnik Mitra

    You will have to keep changing your Gecko details every 2-3 times. The server side bot identifies there's something wrong with the incoming wget request and then 403's it.

    Any smart solution for this?

  • Darrius

    This also used to work using download managers that could manually set referrers, but in doing some security testing today I am seeing that is failing. Did Apache get smarter? (not complaining)

  • palamin

    Everyone says that wget is sooooo basic, but it can do recursive download. Can curl do that? It might be offtopic here but if anyone knows how, please let me know.

  • yotam

    Your example with

    --header="Keep-Alive: 300"

    helped me download successfully from Diino(dot)com.

  • Totgia

    I like the "Other command line" section since it is simpler for me to apply :)

  • David MacKay

    The .wgetrc method worked nicely for me. THANKS!
