large-deploymets.md

Authored by Bogdan Dobrelya (commit 390764c2): adds the `retry_stagger` var to tweak push and retry time strategies, plus docs for large deployments.

# Large deployments of K8s

For large scale deployments, consider the following configuration changes:

- Tune Ansible's `forks` and `timeout` settings to fit the large number of nodes being deployed.

- Override the containers' `foo_image_repo` vars to point to an intranet registry.

- Set `download_run_once: true` to download binaries and container images only once, then push them to the nodes in batches.

- Adjust the `retry_stagger` global var as appropriate. It should keep the load on the delegate node (the first K8s master) sane when retrying failed push or download operations.
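The download and retry overrides above can be sketched as inventory group vars; the file path below is illustrative, not prescribed by this doc:

```yaml
# inventory/mycluster/group_vars/all.yml (illustrative path)

# Download binaries and container images only once (on the delegate node),
# then push them to the other nodes in batches.
download_run_once: true

# Seconds to stagger retries of failed push or download operations,
# spreading the load on the delegate (first K8s master) node.
retry_stagger: 60
```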

For example, when deploying 200 nodes, you may want to run Ansible with `--forks=50` and `--timeout=600`, and define `retry_stagger: 60`.
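Instead of passing the CLI flags on every run, `forks` and `timeout` can be fixed in `ansible.cfg`; the section and keys below are standard Ansible settings, the values are simply this doc's 200-node example:

```ini
; ansible.cfg (illustrative values for a ~200-node deployment)
[defaults]
forks = 50      ; number of hosts Ansible operates on in parallel
timeout = 600   ; SSH connection timeout in seconds
```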