From 4a463567aca399f4576ecfd4b5fb7e8fedf7efb4 Mon Sep 17 00:00:00 2001
From: Hugo Blom <bl0m1@users.noreply.github.com>
Date: Wed, 11 Mar 2020 13:09:35 +0100
Subject: [PATCH] [Openstack] A guide on how to replace the in-tree
 cloudprovider with the external one (#5741)

* add documentation for how to upgrade to the new external cloud provider

* add migrate_openstack_provider playbook

* fix codeblock syntax highlight

* make docs for migrating cloud provider better

* update grammar

* fix typo

* Make sure the code is correct markdown

* remove Fenced code blocks

* fix markdown syntax

* remove extra lines and fix trailing spaces
---
 docs/openstack.md | 80 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 64 insertions(+), 16 deletions(-)

diff --git a/docs/openstack.md b/docs/openstack.md
index 906d774a2..031b25788 100644
--- a/docs/openstack.md
+++ b/docs/openstack.md
@@ -23,30 +23,78 @@ In order to make L3 CNIs work on OpenStack you will need to tell OpenStack to al
 
 First you will need the ids of your OpenStack instances that will run kubernetes:
 
-    openstack server list --project YOUR_PROJECT
-    +--------------------------------------+--------+----------------------------------+--------+-------------+
-    | ID                                   | Name   | Tenant ID                        | Status | Power State |
-    +--------------------------------------+--------+----------------------------------+--------+-------------+
-    | e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
-    | 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+  ```bash
+  openstack server list --project YOUR_PROJECT
+  +--------------------------------------+--------+----------------------------------+--------+-------------+
+  | ID                                   | Name   | Tenant ID                        | Status | Power State |
+  +--------------------------------------+--------+----------------------------------+--------+-------------+
+  | e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+  | 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+  ```
 
 Then you can use the instance ids to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (though these are now configured through OpenStack):
 
-    openstack port list -c id -c device_id --project YOUR_PROJECT
-    +--------------------------------------+--------------------------------------+
-    | id                                   | device_id                            |
-    +--------------------------------------+--------------------------------------+
-    | 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
-    | e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
+  ```bash
+  openstack port list -c id -c device_id --project YOUR_PROJECT
+  +--------------------------------------+--------------------------------------+
+  | id                                   | device_id                            |
+  +--------------------------------------+--------------------------------------+
+  | 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
+  | e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
+  ```
 
 Given the port ids on the left, you can set the two `allowed-address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`.)
 
-    # allow kube_service_addresses and kube_pods_subnet network
-    openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
-    openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+  ```bash
+  # allow kube_service_addresses and kube_pods_subnet network
+  openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+  openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+  ```
 
 If all the VMs in the tenant correspond to kubespray deployment, you can "sweep run" above with:
 
-    openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+  ```bash
+  openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
+  ```
 
 Now you can finally run the playbook.
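+
+For reference, the base cluster deployment is typically started like this (the inventory path below is an example, adjust it to your setup):
+
+  ```bash
+  ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
+  ```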
+
+Upgrade from the in-tree to the external cloud provider
+---------------
+
+The in-tree cloud provider is deprecated and will be removed in a future version of Kubernetes. The target release for removing all remaining in-tree cloud providers is 1.21.
+
+The new cloud provider is configured to use Octavia by default in Kubespray.
+
+- Change the cloud provider from `cloud_provider: openstack` to the new external cloud provider:
+
+  ```yaml
+  cloud_provider: external
+  external_cloud_provider: openstack
+  ```
+
+- Enable Cinder CSI:
+
+  ```yaml
+  cinder_csi_enabled: true
+  ```
+
+- Enable topology support (optional). If your OpenStack provider uses custom zone names, you can override the default "nova" zone by setting the variable `cinder_topology_zones`:
+
+  ```yaml
+  cinder_topology: true
+  ```
+
+- If you are using OpenStack load balancer(s), replace `openstack_lbaas_subnet_id` with the new `external_openstack_lbaas_subnet_id`. **Note:** the new cloud provider uses Octavia instead of Neutron LBaaS by default!
+- Enable three feature gates to allow migration of all volumes and storage classes (if you already have feature gates set, just add the three listed below):
+
+  ```yaml
+  kube_feature_gates:
+  - CSIMigration=true
+  - CSIMigrationOpenStack=true
+  - ExpandCSIVolumes=true
+  ```
+
+- Run the `upgrade-cluster.yml` playbook
+- Run the cleanup playbook `extra_playbooks/migrate_openstack_provider.yml` (this cleans up all resources used by the old cloud provider)
+- You can now remove the feature gates for volume migration. If you want to keep the ability to expand CSI volumes, leave the `ExpandCSIVolumes=true` feature gate enabled
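+
+As a rough sketch, the optional settings above could be combined in your group vars like this (the zone names and the subnet ID are placeholders, not defaults):
+
+  ```yaml
+  cinder_topology: true
+  # Only needed when your availability zones are not named "nova" (example names):
+  cinder_topology_zones: ["az1", "az2"]
+  # Replaces the old openstack_lbaas_subnet_id variable (placeholder value):
+  external_openstack_lbaas_subnet_id: "00000000-0000-0000-0000-000000000000"
+  ```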
-- 
GitLab