Commit 0022a2b2 authored 8 years ago by Greg Althaus
Add doc updates.
Parent: 6905edbe
Showing 1 changed file: docs/ha-mode.md (17 additions, 10 deletions)
@@ -33,15 +33,20 @@ proxy. Kargo includes support for an nginx-based proxy that resides on each
non-master Kubernetes node. This is referred to as localhost loadbalancing. It
is less efficient than a dedicated load balancer because it creates extra
health checks on the Kubernetes apiserver, but is more practical for scenarios
-where an external LB or virtual IP management is inconvenient.
-This option is configured by the variable `loadbalancer_apiserver_localhost`.
-you will need to configure your own loadbalancer to achieve HA. Note that
-deploying a loadbalancer is up to a user and is not covered by ansible roles
-in Kargo. By default, it only configures a non-HA endpoint, which points to
-the `access_ip` or IP address of the first server node in the `kube-master`
-group. It can also configure clients to use endpoints for a given loadbalancer
-type. The following diagram shows how traffic to the apiserver is directed.
+where an external LB or virtual IP management is inconvenient. This option is
+configured by the variable `loadbalancer_apiserver_localhost`. You may also
+define the port the local internal loadbalancer uses by changing
+`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
+It is also important to note that Kargo will only configure kubelet and kube-proxy
+on non-master nodes to use the local internal loadbalancer.
+If you choose to NOT use the local internal loadbalancer, you will need to configure
+your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to
+a user and is not covered by ansible roles in Kargo. By default, it only configures
+a non-HA endpoint, which points to the `access_ip` or IP address of the first server
+node in the `kube-master` group. It can also configure clients to use endpoints
+for a given loadbalancer type. The following diagram shows how traffic to the
+apiserver is directed.
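For illustration, the localhost loadbalancing option described above could be enabled with inventory variables along these lines. This is a minimal sketch: only the variable names come from the text; the file location and the chosen port value are assumptions.

```yaml
# inventory/group_vars/all.yml (illustrative location)

# Enable the nginx-based proxy on each non-master node (localhost loadbalancing).
loadbalancer_apiserver_localhost: true

# Port the local internal loadbalancer listens on for apiserver traffic.
# If left unset, it defaults to the value of kube_apiserver_port.
nginx_kube_apiserver_port: 8443
```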

@@ -90,7 +95,7 @@ Access endpoints are evaluated automagically, as the following:
| Endpoint type                | kube-master   | non-master          |
|------------------------------|---------------|---------------------|
-| Local LB                     | http://lc:p   | https://lc:sp       |
+| Local LB                     | http://lc:p   | https://lc:nsp      |
| External LB, no internal     | https://lb:lp | https://lb:lp       |
| No ext/int LB (default)      | http://lc:p   | https://m[0].aip:sp |
@@ -99,7 +104,9 @@ Where:
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `lc` - localhost;
* `p` - insecure port, `kube_apiserver_insecure_port`;
+* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
* `sp` - secure port, `kube_apiserver_port`;
* `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
* `ip` - the node IP, defers to the ansible IP;
* `aip` - `access_ip`, defers to the ip.
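For the external loadbalancer case, a sketch of the related inventory variables might look as follows. Only `apiserver_loadbalancer_domain_name` and `loadbalancer_apiserver.port` are named above, so the `address` key and the concrete values are illustrative assumptions.

```yaml
# FQDN clients use to reach the apiserver through the external LB ("lb" above).
apiserver_loadbalancer_domain_name: "apiserver.example.com"

# External loadbalancer endpoint; "port" is the LB port ("lp" above) and
# defers to the secure port when unset.
loadbalancer_apiserver:
  address: 10.0.0.10   # illustrative VIP / LB address, not taken from the text
  port: 8383
```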