Commit f4a4b45f authored by kaiyou

Add documentation about setting up a CNI

parent be4fe505
@@ -48,6 +48,9 @@ Once the cluster has stabilized, you may run kubectl inside the node:
hepto kubectl --cluster lab --name mynode -- get node
```
At this point, and before setting up a CNI and exposing the Kubernetes API,
`hepto kubectl` is the only supported way to access the cluster.
## Multi-node cluster
When running a multi-node cluster, at least one node in the cluster should have a stable
@@ -83,12 +86,108 @@ hepto start \
--anchors dead::beef
```
## Setting up networking
Hepto comes without a CNI; it is tested against both Calico and Cilium.
We will later provide detailed instructions about properly setting up a CNI in
a VPN-encapsulated, IPv6-only network.
### Deploying Calico
Calico is a powerful CNI that bundles an iptables-based data path, an eBPF path
for kube-proxy-less operation, and network policy implementations.
Currently (`v3.25.0`), Calico support for IPv6 in eBPF mode is limited. Until it is
properly implemented (active work is in progress), Calico must be deployed in
iptables mode on top of `kube-proxy`.
Start by deploying `kube-proxy`: see and adapt `docs/kube-proxy.yaml`, in particular
the API server address and cluster CIDR as provided by `hepto info`, then run:
```
hepto kubectl --cluster lab --node mynode -- create -f - < kube-proxy.yaml
```
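To confirm that `kube-proxy` is running before moving on, you can list its pods.
This sketch assumes the manifest uses the conventional `k8s-app: kube-proxy`
label in the `kube-system` namespace:
```
hepto kubectl --cluster lab --node mynode -- -n kube-system get pods -l k8s-app=kube-proxy
```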
Once everything has stabilized, service routing should be set up in the cluster;
it is then time to enable Calico using the Tigera operator:
```
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml \
| hepto kubectl --cluster lab --node mynode -- create -f -
```
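You may check that the operator started correctly; the upstream manifest deploys
it in the `tigera-operator` namespace:
```
hepto kubectl --cluster lab --node mynode -- -n tigera-operator get pods
```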
Then enable Calico using the dedicated CRD. The following configuration enables
VXLAN on top of the mesh VPN, which is the most tested and probably most reliable
Calico mode, despite a slight encapsulation overhead:
```
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Enforce the use of an IPv6 Docker registry for Calico images
  registry: registry.ipv6.docker.com
  calicoNetwork:
    ipPools:
      - blockSize: 122
        # This is the pod CIDR again
        cidr: fdfd:9432:32b3:200::/56
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```
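Save the manifest to a file and apply it, then watch the rollout. The file name
below is arbitrary, and `tigerastatus` is the status resource installed by the
operator:
```
hepto kubectl --cluster lab --node mynode -- create -f - < calico.yaml
hepto kubectl --cluster lab --node mynode -- get tigerastatus
```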
### Deploying Cilium
Cilium has limited but real support for IPv6-only mode in its eBPF path.
It still requires some iptables magic for masquerading, which is not supported by
the eBPF programs.
Start by creating a Helm values file for the official Cilium chart:
```
# API server address, get it from hepto info
k8sServiceHost: "fdfd:9432:32b3:100:302:1a88:9151:9627"
k8sServicePort: 6443
cgroup:
  hostRoot: /sys/fs/cgroup
# Enable IPv6-only mode
ipv6:
  enabled: true
ipv4:
  enabled: false
# VTEP tunneling is not supported by Cilium without IPv4, switch to
# direct node routing instead
tunnel: disabled
autoDirectNodeRoutes: true
# Do not deploy default iptables rules, and disable kube-proxy
# support in favor of an eBPF-only service path, except for host
# traffic, which is not supported in IPv6-only mode
installIptablesRules: false
l7Proxy: false
bpf:
  hostLegacyRouting: true
# Tell Cilium to prefer the wireguard interface over the public
# network
devices: wg0
# IPv6-only mode supports a cluster pool CIDR, get it from
# hepto info, /104 is the largest supported CIDR size
ipam:
  operator:
    clusterPoolIPv6PodCIDRList: "fdfd:9432:32b3:200::/104"
ipv6NativeRoutingCIDR: "fdfd:9432:32b3:200::/56"
```
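The `cilium/cilium` chart reference below assumes the official Cilium repository
has been added to Helm under the `cilium` alias:
```
helm repo add cilium https://helm.cilium.io/
helm repo update
```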
Then deploy the official Helm chart:
```
helm template cilium/cilium -f values.yaml | hepto kubectl --cluster lab --node mynode -- create -f -
```
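Once the manifests are applied, a Cilium agent pod should start on every node.
Assuming the chart's default `k8s-app: cilium` pod label, you can check with:
```
hepto kubectl --cluster lab --node mynode -- get pods -A -l k8s-app=cilium
```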
## Development