
A Simple Way to Download All the Files on a Page with curl

Tags: /Computer Documentation/Linux & Unix/
How to use curl, rather than wget, to download all the files linked from a web page.

I need to download multiple files from a web page, but I can't use wget: I'm behind a firewall with a SOCKS proxy, and wget only honors the http_proxy, ftp_proxy, and https_proxy environment variables, with no SOCKS support. curl, on the other hand, accepts a SOCKS proxy through its proxy environment variables (all_proxy, or a socks5:// URL in http_proxy/https_proxy).
for example:
export all_proxy=socks5://127.0.0.1:1234
export http_proxy=socks5://127.0.0.1:1234
export https_proxy=socks5://127.0.0.1:1234
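As a quick sanity check, the same proxy can also be passed directly on the command line; the host, port, and file name below are just placeholder values for this example:

# one-off fetch through the proxy; socks5h:// makes the proxy do the DNS lookup too
curl --proxy socks5h://127.0.0.1:1234 -s -O http://www.serverurl.com/download/somefile.tar.gz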
That web page links to hundreds of files, and I want to download all of them.
How can I do this with curl?

In short: I am behind a firewall with a SOCKS proxy, and I want to download every file on a web page through that proxy. wget does not honor a SOCKS proxy environment variable, but curl does; the catch is that curl has no recursive download feature. How can this be done?

Answer:
The simplest, quickest way to download every file on a web page with curl:

For example, to download all the files linked from the http://www.serverurl.com/download/ page, do it like this:
# grab the page, pull out every href="..." value, then fetch each file
for file in $(curl -s http://www.serverurl.com/download/ \
        | sed -n 's/.*[Hh][Rr][Ee][Ff]="\([^"]*\)".*/\1/p'); do
    curl -s -O "http://www.serverurl.com/download/$file"
done
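
The sed expression above assumes each link is a simple relative file name. A slightly more defensive sketch, still assuming the same example URL, also handles pages that mix absolute and relative links:

# list the page, extract every href value, then fetch each link
base=http://www.serverurl.com/download/
curl -s "$base" \
    | grep -oi 'href="[^"]*"' \
    | sed 's/^[Hh][Rr][Ee][Ff]="//; s/"$//' \
    | while read -r link; do
        case "$link" in
            http://*|https://*) curl -s -O "$link" ;;       # absolute URL: fetch as-is
            *)                  curl -s -O "$base$link" ;;  # relative link: prepend the base
        esac
    done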