Hepto is a Kubernetes distribution for geodistributed deployments
Features
Hepto is a very opinionated Kubernetes distribution. Here are some of its design choices, which we think are its most interesting features:
- single binary, no dependency except for the kernel,
- automatic cluster discovery based on gossip protocols,
- automatic cluster PKI setup,
- auto-containerization of the k8s stack,
- IPv6-only, from the ground up.
Getting started
Hepto is still highly experimental, so a quick disclaimer: please do not host anything critical on hepto at the moment, please back up your data, and please file issues if you find bugs.
First, prepare your node. You will need:
- a public network interface with an available IPv6 address and gateway (hepto starts its own container with a network stack separate from the host, so that the Kubernetes control plane is isolated),
- a recent-enough kernel (tested for >= 5.19),
- a secret key, securely generated (e.g. with openssl rand -hex 24) and kept private, that will be shared among nodes and used to secure discovery, as shown below.
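For instance, preparing a node could look like this (the key is printed to stdout, store it somewhere safe):
# Check that the kernel is recent enough (hepto is tested with >= 5.19)
uname -r
# Generate the secret key shared by all nodes
openssl rand -hex 24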
In a general environment file, shared by all nodes, write the following config:
export HEPTO_CLUSTER=mycluster
export HEPTO_KEY=deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
Note that all hepto options can also be set through environment variables; see hepto -help for details.
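As an illustration, and assuming flags map to HEPTO_-prefixed variables the same way HEPTO_CLUSTER and HEPTO_KEY do (this naming is an assumption, confirm it with hepto -help), the node name could also come from the environment:
# Hypothetical: set the -name flag through the environment,
# assuming a HEPTO_<FLAG> naming convention (check hepto -help)
export HEPTO_NAME=mymaster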
Run your first cluster
To start your first cluster, run hepto in full mode, which embeds both a master (apiserver, controller manager, etc.) and a kubelet in a single process.
source env
# -name: the node name, data will be stored in /var/lib/mycluster/mymaster
# -role: the node role, here a full node (both master and kubelet)
# -iface: the host main network interface
# -ip: optional, IPv6 address for the instance, autoconfigured from RA if unset
# -gw: optional, IPv6 address of the gateway, autoconfigured from RA if unset
# -dns: optional, IPv6 address of DNS servers, defaults to Cloudflare DNS if unset
hepto \
  -name mymaster \
  -role full \
  -iface eth0 \
  -ip 2a00::dead:beef/64 \
  -gw 2a00::1 \
  -dns 2a00:1234::1234
All data, including a usable kubeconfig, will be stored in <data>/<cluster name>/<node name>.
Once the cluster has stabilized, you may run kubectl against the apiserver:
kubectl --kubeconfig /var/lib/mycluster/mymaster/kubeconfig get all -A
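For example, to check that the node has registered (the kubeconfig path follows the <data>/<cluster name>/<node name> layout described above):
kubectl --kubeconfig /var/lib/mycluster/mymaster/kubeconfig get nodes -o wide
The node will typically report NotReady until a CNI is deployed, see the networking section below.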
Multi-node cluster
When running a multi-node cluster, at least one node in the cluster should have a stable IP address for discovery. We call that node an anchor: any other node is able to join the cluster by knowing the cluster name, anchor IP address and the cluster security key.
It is customary but not mandatory for the anchor to also be the cluster master. Clusters running over unstable networks may also have multiple anchors for safety (anchors are only required when bootstrapping nodes).
Start by running a master node anywhere:
source env
hepto -name mymaster -role master -iface eth0 -ip 2a00::dead:beef/64
Then on every other node, use the anchor IP address for discovery:
source env
hepto -name mynode1 -role node -iface eth0 -anchor 2a00::dead:beef
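Once discovery has completed, the new node should appear on the master, for instance:
kubectl --kubeconfig /var/lib/mycluster/mymaster/kubeconfig get nodes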
Setting up networking
Hepto comes without a CNI; it is tested against both Calico and Cilium.
Displaying hepto network settings
Hepto secures communications between nodes using a wireguard mesh VPN. Private IPv6 prefixes are allocated for the mesh VPN (node addresses), pod addresses and service addresses. Addresses are derived deterministically from the cluster name, which ensures two clusters with different names have separate internal address spaces.
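If you wish to inspect the mesh directly, standard WireGuard tooling applies; the sketch below assumes the mesh interface is named wg0 (the name referenced in the Cilium values later in this document) and may need to run inside hepto's isolated network namespace rather than on the host:
# List WireGuard peers and the node's mesh address (interface name is an assumption)
wg show wg0
ip -6 addr show dev wg0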
To display the hepto network configuration, which is useful later when setting up the CNI and service proxy:
source env
hepto -name whatever -info
Deploying Calico
Calico is a powerful CNI that bundles an iptables-based datapath, an eBPF path for kube-proxy-less operation, and network policy implementations.
Currently (v3.25.0), Calico support for IPv6 eBPF is limited. Until it is properly implemented (active work is in progress), Calico must be deployed with iptables support on top of kube-proxy.
Start by deploying kube-proxy: see and adapt docs/kube-proxy.yaml, in particular the API server address and cluster CIDR as provided by hepto -info, then:
export KUBECONFIG=/var/lib/mycluster/mymaster/kubeconfig
# This config file is customized for the example cluster, change values
# according to your cluster network config
kubectl create -f docs/kube-proxy.yaml
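Before enabling Calico, check that kube-proxy is up; the namespace and label below are the conventional ones for kube-proxy manifests and may differ in docs/kube-proxy.yaml:
# Adjust the namespace and label to match docs/kube-proxy.yaml
kubectl get pods -n kube-system -l k8s-app=kube-proxy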
Once it has stabilized, service routing should be set up in the cluster; time to enable Calico using the Tigera operator:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml
Then enable Calico using the dedicated CRD. This configuration enables VXLAN on top of the mesh VPN. This is the most tested and probably most reliable Calico mode, despite a slight overhead:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Enforce the use of an IPv6 Docker registry for Calico images
  registry: registry.ipv6.docker.com
  calicoNetwork:
    ipPools:
      - blockSize: 122
        # This is the pod CIDR again
        cidr: fdfd:9432:32b3:200::/56
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
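The Tigera operator reports progress through the tigerastatus resource; a quick way to watch the rollout (calico-system is the namespace created by the operator):
# All components should eventually report Available
kubectl get tigerastatus
kubectl get pods -n calico-system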
Deploying Cilium
Cilium has limited but real support for IPv6-only mode in its eBPF path. It still requires some iptables magic for masquerading, which is not supported by the eBPF programs.
Start by creating a Helm values file for the official Cilium chart:
# API server address, get it from hepto -info
k8sServiceHost: "fdfd:9432:32b3:100:302:1a88:9151:9627"
k8sServicePort: 6443
cgroup:
  hostRoot: /sys/fs/cgroup
# Enable IPv6-only mode
ipv6:
  enabled: true
ipv4:
  enabled: false
# VTEP tunnel is not supported by Cilium without IPv4, switch to
# direct node routing instead
tunnel: disabled
autoDirectNodeRoutes: true
# Do not deploy default iptables rules, and disable kube-proxy
# support in favor of an eBPF-only service path, except for host
# traffic which is not supported in IPv6-only mode
installIptablesRules: false
l7Proxy: false
bpf:
  hostLegacyRouting: true
# Tell Cilium to prefer the wireguard interface over the public
# network
devices: wg0
# IPv6-only supports cluster pool CIDR, get it from
# hepto -info, /104 is the largest supported CIDR size
ipam:
  operator:
    clusterPoolIPv6PodCIDRList: "fdfd:9432:32b3:200::/104"
ipv6NativeRoutingCIDR: "fdfd:9432:32b3:200::/56"
Then deploy the official Helm chart:
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium -f values.yaml
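You can then check that the Cilium agent is running on every node; the k8s-app=cilium label is the one set by the official chart:
# One ready agent pod is expected per node
kubectl get pods -A -l k8s-app=cilium -o wide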
Development
Hepto is being developed as part of an ongoing effort to provide decent infrastructure to small-scale geodistributed hosting providers. If your use case fits our philosophy, any help in maintaining the project is welcome.
Development requirements are:
- git (both for cloning and build configuration)
- go >= 1.19
- ld for static linking
- libseccomp headers
# On Ubuntu/Mint
sudo apt-get install libseccomp-dev build-essential pkg-config
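A quick sanity check of the toolchain, using standard commands only:
git --version
go version   # should report go1.19 or later
pkg-config --exists libseccomp && echo "libseccomp headers found"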
Start by cloning the repository, then build the project:
# Some build configuration is declared as environment variables
source env
# Build a single binary
go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go
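The build should produce a single statically linked binary; a quick way to verify this (assuming the output is ./hepto in the current directory):
# "statically linked" should appear in the output
file ./hepto
./hepto -help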
For details about the architecture, see the design documentation.