download(url, *args, filename=None, save=True, parallel=True, die=True, verbose=True, **kwargs)

Download one or more URLs in parallel and return output or save them to disk.

A parallelized wrapper for sc.urlopen(), except with save=True by default.

  • url (str/list/dict) – either a single URL, a list of URLs, or a dict of key:URL or filename:URL pairs

  • *args (list) – additional URLs to download

  • filename (str/list) – either a string or a list of the same length as url (if not supplied, return output)

  • save (bool) – whether to save to disk; if True and no filename is supplied, a default filename is used

  • parallel (bool) – whether to download multiple URLs in parallel

  • die (bool) – whether to raise an exception if a URL can't be retrieved (default True)

  • verbose (bool) – whether to print progress (if verbose=2, print extra detail on each downloaded URL)

  • **kwargs (dict) – passed to sc.urlopen()
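These argument combinations all reduce to filename:URL pairs. As a hypothetical sketch (not the actual Sciris implementation; the default-filename rule here is an assumption for illustration), the normalization could look like:

```python
def normalize_urls(url, *args, filename=None):
    """Normalize the accepted url/filename input forms to a {filename: URL} dict."""
    if isinstance(url, dict):  # Already filename:URL pairs; use as-is
        return dict(url)
    urls = (list(url) if isinstance(url, list) else [url]) + list(args)
    if filename is None:  # Assumption: derive a default filename from each URL
        filename = [u.rstrip('/').split('/')[-1] for u in urls]
    elif isinstance(filename, str):
        filename = [filename]
    if len(filename) != len(urls):
        raise ValueError('filename must be the same length as url')
    return dict(zip(filename, urls))
```

This is why the dict form (introduced as filename:URL pairs in v3.1.1) needs no separate filename argument.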


html = sc.download('http://sciris.org') # Download a single URL
data = sc.download('http://sciris.org', 'http://covasim.org', save=False) # Download two in parallel
sc.download({'sciris.html':'http://sciris.org', 'covasim.html':'http://covasim.org'}) # Download two and save to disk
sc.download(['http://sciris.org', 'http://covasim.org'], filename=['sciris.html', 'covasim.html']) # Ditto
data = sc.download(dict(sciris='http://sciris.org', covasim='http://covasim.org'), save=False) # Download and store in memory
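The parallel-download-with-error-tolerance pattern described above (the die argument) can be illustrated generically; this is a self-contained sketch using a thread pool, not Sciris internals, and fetch is a stand-in for whatever retrieves a URL:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch, die=True):
    """Apply fetch() to each URL in parallel; tolerate failures if die=False."""
    def safe_fetch(u):
        try:
            return fetch(u)
        except Exception as e:
            if die:
                raise  # die=True: propagate the failure
            return e   # die=False: record the exception as that URL's result
    with ThreadPoolExecutor() as pool:
        return dict(zip(urls, pool.map(safe_fetch, urls)))
```

With die=False, one unreachable URL does not abort the other downloads; its entry simply holds the exception instead of the content.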
New in version 2.0.0.
New in version 3.0.0: “die” argument
New in version 3.1.1: default order switched from URL:filename to filename:URL pairs
New in version 3.1.3: output as objdict instead of odict