External Network (Experimental)
This guide covers how to set up Submariner for the external network use case. In this use case, pods running in a Kubernetes cluster can access external applications outside of the cluster and vice versa by using DNS resolution supported by Lighthouse or manually using the Globalnet ingress IPs. In addition to providing connectivity, the source IP of traffic is also preserved.
Prerequisites
- Prepare:
- Two or more Kubernetes clusters
- One or more non-cluster hosts that exist in the same network segment to one of the Kubernetes clusters
In this guide, we will use the following Kubernetes clusters and non-cluster host.
| Name | IP | Description |
|------|-----|-------------|
| cluster-a | 192.168.122.26 | Single-node cluster |
| cluster-b | 192.168.122.27 | Single-node cluster |
| test-vm | 192.168.122.142 | Linux host |

In this example, everything is deployed in the 192.168.122.0/24 segment. However, it is only required that cluster-a and test-vm are in the same segment; other clusters, cluster-b and any additional clusters, can be deployed in different segments or even in other networks on the internet. Clusters can also be multi-node clusters.
Subnets of non-cluster hosts should be distinct from those of the clusters so that the external network CIDR can be specified easily. In this example, cluster-a and cluster-b belong to 192.168.122.0/25 and test-vm belongs to 192.168.122.128/25; therefore, the external network CIDR for this configuration is 192.168.122.128/25. In test environments with just one host, an external network CIDR such as 192.168.122.142/32 can be specified. However, the design of the subnets needs to be considered when more hosts are used.
- Choose the Pod CIDR and the Service CIDR for the Kubernetes clusters and deploy them.
In this guide, we will use the following CIDRs:
| Cluster | Pod CIDR | Service CIDR |
|---------|----------|--------------|
| cluster-a | 10.42.0.0/24 | 10.43.0.0/16 |
| cluster-b | 10.42.0.0/24 | 10.43.0.0/16 |

Note that we will use Globalnet in this guide; therefore, overlapping CIDRs are supported.
In this configuration, global IPs are used for access between the gateway node and non-cluster hosts, which means packets are sent to IP addresses that are not part of the actual network segment. To prevent such packets from being dropped, anti-spoofing rules need to be disabled for the hosts and the gateway node.
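As a quick sanity check of the subnet design described in the prerequisites, CIDR membership can be verified in pure bash. The `ip_to_int` and `in_cidr` functions below are illustrative helpers written for this guide, not part of Submariner:

```bash
# Sketch: pure-bash check that a host IP falls inside the chosen external
# network CIDR, using this guide's example addresses.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_cidr() {
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( ( $(ip_to_int "$ip") & mask ) == ( $(ip_to_int "$net") & mask ) ))
}

# test-vm (192.168.122.142) is inside the external CIDR 192.168.122.128/25...
in_cidr 192.168.122.142 192.168.122.128/25 && echo "test-vm: in external CIDR"
# ...while the cluster nodes (192.168.122.0/25) are not.
in_cidr 192.168.122.26 192.168.122.128/25 || echo "cluster-a: outside external CIDR"
```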
Setup Submariner
Ensure kubeconfig files
Ensure that kubeconfig files for both clusters are available.
This guide assumes cluster-a’s kubeconfig file is named kubeconfig.cluster-a and cluster-b’s is named kubeconfig.cluster-b.
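Before proceeding, it can help to confirm that both kubeconfig files actually reach their clusters. A minimal sanity check, assuming the file names used in this guide:

```bash
# Each command should list the cluster's single node.
kubectl --kubeconfig kubeconfig.cluster-a get nodes
kubectl --kubeconfig kubeconfig.cluster-b get nodes
```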
Install subctl
Download the subctl binary and make it available on your PATH.
```bash
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
```
If you have Go and the source code, you can build and install subctl instead:
```bash
cd go/src/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd
```
(and ensure your go/bin directory is on your PATH).
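Either way, you can confirm the binary is reachable on your PATH:

```bash
# Prints the subctl client version if installation succeeded.
subctl version
```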
Use cluster-a as the Broker with Globalnet enabled
```bash
subctl deploy-broker --kubeconfig kubeconfig.cluster-a --globalnet
```
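After the broker is deployed, subctl writes a broker-info.subm file to the current directory, and the broker components run on cluster-a. As a quick sanity check (submariner-k8s-broker is the default broker namespace):

```bash
# The broker-info.subm file is needed for the join steps below.
ls -l broker-info.subm
# The broker namespace should exist on cluster-a.
kubectl --kubeconfig kubeconfig.cluster-a get ns submariner-k8s-broker
```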
Label gateway nodes
When Submariner joins a cluster to the broker via the subctl join command, it chooses a node on which to install the
gateway by labeling it appropriately. By default, Submariner uses a worker node for the gateway; if there are no worker
nodes, then no gateway is installed unless a node is manually labeled as a gateway. Since we are deploying all-in-one
nodes, there are no worker nodes, so it is necessary to label the single node as a gateway. By default, the node name is
the hostname. In this example, the hostnames are “cluster-a” and “cluster-b”, respectively.
Execute the following on cluster-a:
```bash
kubectl label node cluster-a submariner.io/gateway=true
```
Execute the following on cluster-b:
```bash
kubectl label node cluster-b submariner.io/gateway=true
```
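To confirm the labels were applied, each cluster should now report its node when filtering on the gateway label:

```bash
# Lists nodes carrying the Submariner gateway label; expect one node per cluster.
kubectl --kubeconfig kubeconfig.cluster-a get nodes -l submariner.io/gateway=true
kubectl --kubeconfig kubeconfig.cluster-b get nodes -l submariner.io/gateway=true
```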
Join cluster-a to the Broker with external CIDR added as cluster CIDR
Carefully review the CLUSTER_CIDR and EXTERNAL_CIDR and run:
```bash
CLUSTER_CIDR=10.42.0.0/24
EXTERNAL_CIDR=192.168.122.128/25
subctl join --kubeconfig kubeconfig.cluster-a broker-info.subm --clusterid cluster-a --natt=false --clustercidr=${CLUSTER_CIDR},${EXTERNAL_CIDR}
```
Join cluster-b to the Broker
```bash
subctl join --kubeconfig kubeconfig.cluster-b broker-info.subm --clusterid cluster-b --natt=false
```
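Once both clusters have joined, the tunnel between them can be verified from either side, for example:

```bash
# Shows the gateway-to-gateway connections from cluster-a's point of view;
# expect cluster-b's gateway listed with a "connected" status.
subctl show connections --kubeconfig kubeconfig.cluster-a
```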
Deploy DNS server on cluster-a for non-cluster hosts
Create a list of upstream DNS servers as upstreamservers:
Note that dnsip is the IP address of the DNS server for test-vm, which is defined as a nameserver in /etc/resolv.conf.
dnsip=192.168.122.1
lighthousednsip=$(kubectl get svc --kubeconfig kubeconfig.cluster-a -n submariner-operator submariner-lighthouse-coredns -o jsonpath='{.spec.clusterIP}')
cat << EOF > upstreamservers