I agree. Getting to know the terminal
can save you a lot of time on repetitive manual tasks.
One such task: downloading a lot of mp3 (or other) files.
For example, downloading all the files of an audiobook, where:
* all media files are mp3 format
* all links appear on the same web page
Getting all the files usually means right-clicking a link,
choosing "Save Link As..." (or "Save File As..."),
and then picking a download location
(or letting the file land on the Desktop by default).
Either way, you'd be repeating those manual steps
once for each file you wanted to download.
I got tired of doing this, so I wrote a bash shell
script that:
1. Grabs the source of the web page that contains the mp3
links (using wget).
2. Uses a combination of cat and sed to filter out the
extraneous text, producing a text file
with one complete file address per line:
http://www.somesite.com/file1.mp3
http://www.somesite.com/file2.mp3
http://www.somesite.com/file3.mp3
...
3. Uses cat on the final text file, piped through xargs to wget,
to grab each mp3 file one at a time. Of course, you should
check the script's actions first by commenting out the final command
(which downloads the files) and doing a few test runs to make
sure the text file has only one mp3 file address per line.
Once it does, you can pretty much walk away from the computer.
The script does have some dependencies:
the links should have somewhat common file paths,
and your sed substitution
patterns will change from site to site. But once it's all set up,
it saves lots of time and gets the files for you.