From 0022a2b29e26fdea49fe61da482ab518418337e9 Mon Sep 17 00:00:00 2001
From: Greg Althaus <galthaus@austin.rr.com>
Date: Tue, 17 Jan 2017 13:15:48 -0600
Subject: [PATCH] Add doc updates.

---
 docs/ha-mode.md | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/docs/ha-mode.md b/docs/ha-mode.md
index 8ec5c93a1..0baa4fabd 100644
--- a/docs/ha-mode.md
+++ b/docs/ha-mode.md
@@ -33,15 +33,20 @@ proxy. Kargo includes support for an nginx-based proxy that resides on each
 non-master Kubernetes node. This is referred to as localhost loadbalancing. It
 is less efficient than a dedicated load balancer because it creates extra
 health checks on the Kubernetes apiserver, but is more practical for scenarios
-where an external LB or virtual IP management is inconvenient.
-
-This option is configured by the variable `loadbalancer_apiserver_localhost`.
-you will need to configure your own loadbalancer to achieve HA. Note that
-deploying a loadbalancer is up to a user and is not covered by ansible roles
-in Kargo. By default, it only configures a non-HA endpoint, which points to
-the `access_ip` or IP address of the first server node in the `kube-master`
-group. It can also configure clients to use endpoints for a given loadbalancer
-type. The following diagram shows how traffic to the apiserver is directed.
+where an external LB or virtual IP management is inconvenient. This option is
+configured by the variable `loadbalancer_apiserver_localhost`. You may also
+define the port the local internal loadbalancer uses by changing
+`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
+It is also important to note that Kargo will only configure kubelet and
+kube-proxy on non-master nodes to use the local internal loadbalancer.
+
+If you choose NOT to use the local internal loadbalancer, you will need to
+configure your own loadbalancer to achieve HA. Note that deploying a
+loadbalancer is up to the user and is not covered by the Ansible roles in
+Kargo. By default, Kargo only configures a non-HA endpoint, which points to
+the `access_ip` or IP address of the first server node in the `kube-master`
+group. It can also configure clients to use endpoints for a given loadbalancer
+type. The following diagram shows how traffic to the apiserver is directed.
 
 ![Image](figures/loadbalancer_localhost.png?raw=true)
 
@@ -90,7 +95,7 @@ Access endpoints are evaluated automagically, as the following:
 
 | Endpoint type                | kube-master   | non-master          |
 |------------------------------|---------------|---------------------|
-| Local LB                     | http://lc:p   | https://lc:sp       |
+| Local LB                     | http://lc:p   | https://lc:nsp      |
 | External LB, no internal     | https://lb:lp | https://lb:lp       |
 | No ext/int LB (default)      | http://lc:p   | https://m[0].aip:sp |
 
@@ -99,7 +104,9 @@ Where:
 * `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
 * `lc` - localhost;
 * `p` - insecure port, `kube_apiserver_insecure_port`
+* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
 * `sp` - secure port, `kube_apiserver_port`;
 * `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
 * `ip` - the node IP, defers to the ansible IP;
 * `aip` - `access_ip`, defers to the ip.
+
-- 
GitLab
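
As a sketch of how the variables documented in this patch fit together, a group_vars entry enabling the local internal loadbalancer might look like the following. The port values shown are illustrative assumptions, not defaults asserted by this patch; only the relationship (`nginx_kube_apiserver_port` falling back to `kube_apiserver_port` when unset) comes from the doc text above.

```yaml
# Enable the nginx-based localhost loadbalancer on non-master nodes.
loadbalancer_apiserver_localhost: true

# Optional override: the port the local nginx proxy listens on
# (the `nsp` column in the endpoint table). If unset, it defaults
# to the value of kube_apiserver_port. Value here is an example.
nginx_kube_apiserver_port: 8443

# Secure apiserver port (`sp` in the endpoint table). Example value.
kube_apiserver_port: 6443
```

With this in place, kubelet and kube-proxy on non-master nodes would be pointed at the local nginx endpoint rather than directly at a master or an external LB.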