Hi All
Firstly, many thanks to all who replied (too many to name).
The most popular answer seemed to be "Use Wget".
Quoting Friedel Loinger:
You can do this with GNU Wget, which is a non-interactive utility to retrieve files from the web.
Wget 1.4.3 is available at: <URL:ftp://prep.ai.mit.edu/pub/gnu/wget-1.4.3.tar.gz>
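For anyone who wants a concrete starting point, a crontab entry along these lines ought to do it (untested by me; the time, URL and output file are only placeholders):

0 6 * * * wget -q -O $HOME/somefile.html http://www.whatever.com/filename.html

(-q keeps wget quiet, -O names the file the page is written to.)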
Quoting Wales K. F. Wong:
You can use lynx -dump URL > output in your cron job.
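In a crontab that might look something like this (again, the URL and output file are just placeholders):

0 6 * * * lynx -dump http://www.whatever.com/filename.html > $HOME/somefile.txt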
Quoting Mark Parry:
Check the web for a Perl script called 'webcopy' - it will copy files, recursively if required, with or without
images, and it can even go through an authenticating firewall.
http://www.inf.utfsm.cl/~vparada/webcopy.html
Quoting Billy Constantine:
perl -MLWP::Simple -e 'open(OUT, ">$ENV{HOME}/somefile") || die("Could not open somefile: $!\n"); select(OUT); getprint("http://www.whatever.com/filename.html")'
You'll need Perl and the libnet and libwww-perl modules installed (possibly others; I forget which other modules those two require)...
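If you'd rather skip the filehandle juggling, LWP::Simple's getstore should do the same job in one call (my own variation, untested; same placeholder URL and filename):

perl -MLWP::Simple -e 'getstore("http://www.whatever.com/filename.html", "$ENV{HOME}/somefile")'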
If anyone wants copies of the e-mails I received, drop me a message and I'll sort it out.
Once again many thanks.
Liz
"""
(O O)
|------------------oOO--(_)--OOo-----------------|
Liz Marlow - UCES Computer Manager
University of Wales, Bangor
Marine Science Laboratories
Menai Bridge, Anglesey, N. Wales, LL59 5EY
e.mail: esm@uces.bangor.ac.uk
Tel   : +44 (0)1248 713808
Mobile: +44 (0)973 543867
Fax   : +44 (0)1248 716729
|------------------------------------------------|