[release-2.25] Refactor and expand download_hash.py (#11539)
* download_hash.py: generalized and data-driven
The script is currently limited to one hardcoded URL for Kubernetes-related
binaries and a fixed set of architectures.
The solution is threefold:
1. Use a URL template dictionary for each download; this allows easily
adding support for new downloads (see the sketch after this list).
2. Source the architectures to search from the existing data.
3. Enumerate the existing versions in the data and start searching from
the last one until no newer version is found (newer in the version-order
sense, irrespective of actual age).
* download_hash.py: support for 'multi-hash' file + runc
runc upstream does not provide one hash file per asset in its releases,
but a single file with all the hashes.
To handle this (and any other arbitrary upstream format), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes
keyed by architecture, as sketched below.
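A sketch of that mapping, assuming runc's runc.sha256sum uses the usual
`<hash>  runc.<arch>` sha256sum layout (names here are illustrative):
```python
# Map the download name to a parser turning the upstream checksum file
# into a dictionary of hashes keyed by architecture.
HASH_PARSERS = {
    # runc ships a single runc.sha256sum covering every release asset.
    "runc": lambda content: {
        name.removeprefix("runc."): sha256
        for sha256, name in (line.split() for line in content.splitlines() if line)
        if name.startswith("runc.")
    },
}

sample = "abc123  runc.amd64\ndef456  runc.arm64\n"
print(HASH_PARSERS["runc"](sample))  # {'amd64': 'abc123', 'arm64': 'def456'}
```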
* download_hash: argument handling with argparse
Allow the script to be called with a list of components, to only
download new version checksums for those.
By default, new version checksums are fetched for all components
supported by the script.
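A minimal sketch of the argument handling; the component names are
placeholders:
```python
import argparse

SUPPORTED_COMPONENTS = ["kubectl", "kubelet", "runc"]  # placeholder list

parser = argparse.ArgumentParser(
    description="Download new version checksums for components")
parser.add_argument(
    "components",
    nargs="*",
    default=SUPPORTED_COMPONENTS,
    metavar="component",
    help="components to fetch checksums for (default: all supported)",
)
args = parser.parse_args()
print("updating:", args.components)
```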
* download_hash: propagate new patch versions to all archs
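This could look roughly like the following, assuming a hypothetical
component -> arch -> version -> hash data layout:
```python
# Hypothetical layout: checksums[component][arch][version] = hash.
# When a new patch version is found for one architecture, make sure every
# architecture gets an entry for it, fetching the missing hashes.
def propagate_version(checksums, component, version, fetch_hash):
    for arch, versions in checksums[component].items():
        if version not in versions:
            versions[version] = fetch_hash(component, version, arch)
```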
* download_hash: add support for 'simple hash' components
* download_hash: support 'multi-hash' components
* download_hash: document missing support
* download_hash: use persistent session
This allows reusing HTTP connections and is more efficient.
Rough measurements show it saves around 25-30% of execution time.
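The pattern, sketched with illustrative URLs:
```python
import requests

urls = [  # illustrative checksum URLs on the same host
    "https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl.sha256",
    "https://dl.k8s.io/release/v1.29.1/bin/linux/amd64/kubectl.sha256",
]

# One Session reused for all requests keeps the TCP/TLS connection alive
# instead of paying the handshake cost for every download.
with requests.Session() as session:
    for url in urls:
        resp = session.get(url)
        resp.raise_for_status()
        print(url, "->", resp.text.strip())
```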
* download_hash: cache request for 'multi-hash' files
This avoids re-downloading and re-parsing the same file for
different architectures.
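A sketch of the caching, using functools.cache and an assumed
`<hash>  <asset-name>` file layout:
```python
from functools import cache

import requests

session = requests.Session()

# Memoized by URL: the first architecture triggers the download and the
# parsing; subsequent calls get the same parsed dictionary back.
@cache
def fetch_multi_hashes(url: str) -> dict[str, str]:
    resp = session.get(url)
    resp.raise_for_status()
    return {
        name: sha256
        for sha256, name in (line.split() for line in resp.text.splitlines() if line)
    }
```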
* download_hash: document usage
---------
Co-authored-by: Max Gautier <mg@max.gautier.name>