IP: 3.142.200.252
Hostname: ns1.eurodns.top
Kernel: Linux ns1.eurodns.top 4.18.0-553.5.1.lve.1.el7h.x86_64 #1 SMP Fri Jun 14 14:24:52 UTC 2024 x86_64
Disabled Functions: mail, sendmail, exec, passthru, shell_exec, system, popen, curl_multi_exec, show_source, eval, open_base
OS: Linux
PATH: /home/../lib/python2.7/site-packages/rhn/../urlgrabber/__init__.pyc
# Reconstructed source of /usr/lib/python2.7/site-packages/urlgrabber/__init__.py
# (recovered from the compiled .pyc byte dump shown by the file manager)

"""A high-level cross-protocol url-grabber.

Using urlgrabber, data can be fetched in three basic ways:

  urlgrab(url)  copy the file to the local filesystem
  urlopen(url)  open the remote file and return a file object
                (like urllib2.urlopen)
  urlread(url)  return the contents of the file as a string

When using these functions (or methods), urlgrabber supports the
following features:

  * identical behavior for http://, ftp://, and file:// urls
  * http keepalive - faster downloads of many files by using
    only a single connection
  * byte ranges - fetch only a portion of the file
  * reget - for a urlgrab, resume a partial download
  * progress meters - the ability to report download progress
    automatically, even when using urlopen!
  * throttling - restrict bandwidth usage
  * retries - automatically retry a download if it fails. The
    number of retries and failure types are configurable.
  * authenticated server access for http and ftp
  * proxy support - support for authenticated http and ftp proxies
  * mirror groups - treat a list of mirrors as a single source,
    automatically switching mirrors if there is a failure.
"""

__version__ = '3.10'
__date__ = '2013/10/09'
__author__ = ('Michael D. Stenner <mstenner@linux.duke.edu>, '
              'Ryan Tomayko <rtomayko@naeblis.cx>, '
              'Seth Vidal <skvidal@fedoraproject.org>, '
              'Zdenek Pavlas <zpavlas@redhat.com>')
__url__ = 'http://urlgrabber.baseurl.org/'

from grabber import urlgrab, urlopen, urlread
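The docstring above names three fetch styles (urlgrab, urlopen, urlread) and notes they behave identically for file:// urls. As an illustration only, here is a minimal stdlib stand-in for those three entry points, limited to file:// URLs; this is a sketch of the described behavior, not the real urlgrabber implementation:

```python
# Illustrative stand-ins for the three urlgrabber entry points described
# in the docstring. NOT the urlgrabber library itself; uses only stdlib.
import shutil

try:
    from urllib.request import urlopen as _urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen as _urlopen          # Python 2


def urlopen(url):
    # open the (remote or local) file and return a file-like object
    return _urlopen(url)


def urlread(url):
    # return the contents of the file as a string (bytes on Python 3)
    fo = _urlopen(url)
    try:
        return fo.read()
    finally:
        fo.close()


def urlgrab(url, filename):
    # copy the file to the local filesystem, returning the local path
    src = _urlopen(url)
    try:
        with open(filename, 'wb') as dst:
            shutil.copyfileobj(src, dst)
    finally:
        src.close()
    return filename
```

For example, `urlread('file:///etc/hostname')` returns the file's contents, mirroring the `urlread(url)` behavior the docstring describes for local paths.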
/home/../lib/python2.7/site-packages/rhn/../urlgrabber/__init__.pyc