If you're on a GUI-less Linux server and need to download files from a remote location, you should turn to wget. Find out how to use the command.
If you maintain a Linux server that doesn't include a GUI, you may find yourself at a loss for downloading files. You could use FTP, but that would depend upon the files being stored on an FTP server, and FTP is not nearly as prevalent as it once was. What do you do? You use wget.
The wget tool has been around since 1996, and you'll be glad to have it when it's needed. It's a very straightforward tool to use, and it comes with extra options that can come in handy. I'll walk you through the basic usage of wget and how to use some of those options, so you don't have to jump through hoops to get those files onto your GUI-less servers.
You don't have to worry about installing wget, because it is on most Linux distributions (desktop and server) by default. Let's get to work.
How to use wget
The most basic usage of wget is (URL is the exact address of the file you want to download):

wget URL
That command will download the necessary file into the current working directory.
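As a concrete sketch, with a placeholder address standing in for a real download URL:

```shell
# Fetch a file into the current working directory.
# The URL below is a placeholder; substitute the real address of your file.
wget https://example.com/files/archive.tar.gz
```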
If you want to download the file under a different name, you can do that using the -O option like so (NEW_NAME is the name you want to save the file under, and URL is the exact address of the download):
wget -O NEW_NAME URL
This can be useful when you need to download the same file, multiple times, under different names.
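For instance, grabbing the same remote file twice under two local names (the URL and both output names are placeholders):

```shell
# Save the same remote file under two different local names.
# URL and filenames are placeholders; substitute your own.
wget -O backup-monday.tar.gz https://example.com/backup.tar.gz
wget -O backup-tuesday.tar.gz https://example.com/backup.tar.gz
```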
When you run the wget command, it can gobble up a significant amount of bandwidth during the download process. If you're on a server that requires as much bandwidth as possible to function properly, you might need to limit the speed of wget downloads. This can be done with the --limit-rate option like so:
wget --limit-rate=RATE URL
Where RATE is an amount of bandwidth per second, expressed in bytes or with a kilobyte (k) or megabyte (m) suffix. Say you want to limit the download to 2MB/sec; that command would look like this (URL is the actual address of the download):
wget --limit-rate=2m URL
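Smaller caps work the same way; here's a hypothetical 500 KB/sec limit on a placeholder URL:

```shell
# Cap the transfer at roughly 500 kilobytes per second.
# The URL is a placeholder for the real download address.
wget --limit-rate=500k https://example.com/large-image.iso
```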
Resuming an interrupted download
Say you're downloading a file and, for whatever reason, the download is interrupted. You can tell wget to pick up that download where it left off by issuing the following command (URL is the same URL you used when first attempting to download the file):
wget -c URL
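For example, if a large download of a hypothetical installer image was cut off partway through, rerunning the same command with -c picks up from the partial file instead of starting over:

```shell
# Resume a partially downloaded file rather than restarting it.
# The URL is a placeholder; use the same address as the original attempt.
wget -c https://example.com/distro-installer.iso
```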
Downloading multiple files at once
You can use wget to download multiple files in one session. To do this you must create a text file with the exact file URLs for downloading, one per line, like so:
http://URL/filename
http://URL/filename2
http://URL/filename3
Add as many addresses as you need and save the file. To have wget download from that file, issue the command (FILENAME is the name of the file containing the download addresses):
wget -i FILENAME
This technique comes in very handy when you need to download the same group of files on a regular basis.
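As a sketch, the list file can be created right from the shell (the three URLs below are placeholders for real download addresses):

```shell
# Build a list of download addresses, one per line.
cat > downloads.txt <<'EOF'
https://example.com/filename
https://example.com/filename2
https://example.com/filename3
EOF

# Fetch every address in the list.
wget -i downloads.txt
```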
Download with username and password
If your file source requires authentication, wget is prepared for that as well. To download from an HTTP server that requires username/password authentication, the command would look like this (USER is the username, PASSWORD is the user password, and URL is the exact address of the file to be downloaded):
wget --http-user=USER --http-password=PASSWORD URL
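Here's a sketch with placeholder credentials and URL. Keep in mind that a password typed on the command line can end up in your shell history; wget also offers an --ask-password option that prompts for the password interactively instead:

```shell
# Authenticate against a password-protected HTTP server.
# The username, password, and URL below are all placeholders.
wget --http-user=admin --http-password=secret https://example.com/private/report.pdf

# Safer alternative: prompt for the password instead of typing it inline.
wget --http-user=admin --ask-password https://example.com/private/report.pdf
```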
More to offer
The wget command can do so much more. To find out what other options wget has to offer, issue the command man wget and read through the manual page, where you'll find additional information about this incredibly handy command.