
Hepto is a kubernetes distribution for geodistributed deployments

Features

Hepto is a very opinionated kubernetes distribution. Here are some of its design choices, which we think are its most interesting features:

  • single binary, no dependencies other than the kernel,
  • automatic cluster discovery based on gossip protocols,
  • automatic cluster PKI setup,
  • auto-containerization of the k8s stack,
  • IPv6-only, from the ground up.

Getting started

Hepto is still highly experimental, so a quick disclaimer: please do not host anything critical on hepto for now, back up your data, and file issues if you find bugs.

First, prepare your node. You will need:

  • a public network interface with available IPv6 and a gateway,

  • a recent enough kernel (tested with >= 5.19),

    root@node:~# uname -r
    5.19.0-32-generic
    root@node:~# # If the kernel is older than 5.19, upgrade and reboot:
    root@node:~# apt-get install linux-modules-5.19.0-32-generic linux-headers-5.19.0-32-generic linux-image-5.19.0-32-generic
    root@node:~# reboot
  • a secret, randomly generated security key (defaults to a highly unsafe null key), for instance:

    root@node:~# openssl rand -hex 32
    6373b26fa070a6c605580bfea9de56a4b7a180b07275370dcf596079d5136d82
  • the following kernel modules loaded:

    # If some are missing, take a look at https://breest.io/_/preparing-server-for-rke-based-kubernetes/
    root@node:~# for module in nf_conntrack ip_tables iptable_filter iptable_nat iptable_mangle ip6_tables ip6table_filter ip6table_nat ip6table_mangle
    do
      # Load the module now and make it persistent across reboots
      modprobe -v "$module"
      echo "$module" > "/etc/modules-load.d/$module.conf"
    done
  • some sysctl parameters checked and adjusted, e.g. (see also the pre-flight sketch after this list):

    root@node:~# sysctl -w net.netfilter.nf_conntrack_max=131072
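The checks above can be scripted. Here is a minimal pre-flight sketch, assuming eth0 is the public interface and using /etc/sysctl.d/90-hepto.conf as an illustrative file name; adapt it to your setup:

#!/bin/sh
# Illustrative pre-flight checks before starting hepto
set -e

# Kernel must be recent enough (>= 5.19 as tested above)
uname -r

# The public interface needs a global IPv6 address and a default route
ip -6 addr show dev eth0 scope global
ip -6 route show default

# All required netfilter modules should show up here
lsmod | grep -E 'nf_conntrack|ip_tables|iptable_|ip6_tables|ip6table_'

# Raise the conntrack table size and persist it across reboots
sysctl -w net.netfilter.nf_conntrack_max=131072
echo 'net.netfilter.nf_conntrack_max = 131072' > /etc/sysctl.d/90-hepto.conf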

To start hepto as a single full node (both master and kubelet), run the following:

# --cluster  cluster name, defaults to "hepto"
# --key      the hex-encoded cluster security key
# --name     node name, must be unique across the cluster
# --iface    main network interface
# --ip, --gw, --dns  network settings, optional (default to IPv6 autodiscovery)
# --role     node role: "full", "master" or "node"
hepto start \
  --cluster lab \
  --key deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef \
  --name mynode \
  --iface eth0 \
  --ip dead::beef \
  --gw beef::dead \
  --dns dead:be::ef \
  --role full
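
For a quick lab setup, key generation and startup can be combined; a minimal sketch, where cluster.key is just an illustrative file name:

# Generate a fresh key and keep it around: every node of the cluster
# must be started with the same key
KEY=$(openssl rand -hex 32)
echo "$KEY" > cluster.key && chmod 600 cluster.key
hepto start --cluster lab --key "$KEY" --name mynode --iface eth0 --role full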

Once the cluster has stabilized, you may run kubectl inside the node:

hepto kubectl --cluster lab --name mynode -- get node

At this point, and until a CNI is set up and the kubernetes API is exposed, hepto kubectl is the only supported way to access the cluster.
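
Arguments after the -- separator are handed to kubectl, so the usual read-only inspection commands work the same way, for example:

# List nodes with their addresses and kubelet versions
hepto kubectl --cluster lab --name mynode -- get nodes -o wide
# Pods that need pod networking will not become ready until a CNI is
# deployed (see below)
hepto kubectl --cluster lab --name mynode -- get pods -A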

Multi-node cluster

When running a multi-node cluster, at least one node in the cluster should have a stable IP address for discovery. We call that node an anchor: any other node can join the cluster by knowing the cluster name, the anchor IP address and the cluster security key.

It is customary, though not mandatory, for the anchor to also be the cluster master. Clusters running over unstable networks may also have multiple anchors for safety (anchors are only required when bootstrapping nodes).

Start by running a master node anywhere:

# Use a fixed IP address so the node can act as an anchor;
# the default gateway and name servers are autodiscovered over IPv6
hepto start \
  --name master \
  --key dead[...]beef \
  --iface eth0 \
  --role master \
  --ip dead::beef

Then on every other node, use the anchor IP address for discovery:

hepto start \
  --name node1 \
  --key dead[...]beef \
  --iface enp0s20 \
  --role node \
  --anchors dead::beef
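
If several nodes have stable addresses, they can all be declared as anchors for extra safety. A sketch, assuming --anchors accepts a comma-separated list (the exact syntax is not confirmed here, check hepto's help output):

# Join through two anchors for redundancy; the comma separator is an
# assumption, adjust to whatever syntax hepto actually expects
hepto start \
  --name node2 \
  --key dead[...]beef \
  --iface eth0 \
  --role node \
  --anchors dead::beef,cafe::babe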

Setting up networking

Hepto ships without a CNI; it is tested against both Calico and Cilium.

Deploying Calico

Calico is a powerful CNI that bundles an iptables-based data path, an eBPF data path for kube-proxy-less operation, and network policy implementations.

Currently (v3.25.0), Calico support for IPv6 eBPF is limited. Until it is properly implemented (active work is in progress), Calico must be deployed with iptables support on top of kube-proxy.

Start by deploying kube-proxy: see and adapt docs/kube-proxy.yaml, in particular the API server address and the cluster CIDR as provided by hepto info, then:

hepto kubectl --cluster lab --node mynode -- create -f - < kube-proxy.yaml
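
Before moving on, check that kube-proxy is running on every node; the resource and label names below are the usual upstream ones and depend on the manifest you adapted:

# kube-proxy normally runs as a DaemonSet in kube-system
hepto kubectl --cluster lab --node mynode -- -n kube-system get daemonset kube-proxy
hepto kubectl --cluster lab --node mynode -- -n kube-system get pods -l k8s-app=kube-proxy -o wide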

Once things have stabilized, service routing should be set up in the cluster; it is then time to deploy Calico using the Tigera operator:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml \
 | hepto kubectl --cluster lab --node mynode -- create -f -
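
The operator runs in its own namespace; before applying the Installation resource below, wait for its deployment to become available (namespace and deployment names are those from the upstream manifest):

# Wait for the Tigera operator to be up and running
hepto kubectl --cluster lab --node mynode -- -n tigera-operator rollout status deployment/tigera-operator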

Then enable Calico using its dedicated CRD. The following configuration enables VXLAN on top of the mesh VPN; this is the most tested and probably the most reliable Calico mode, despite a slight overhead:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Enforce the use of an IPv6 Docker registry for Calico images
  registry: registry.ipv6.docker.com
  calicoNetwork:
    ipPools:
    - blockSize: 122
      # This is the pod CIDR again
      cidr: fdfd:9432:32b3:200::/56
      encapsulation: VXLAN
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer 
metadata: 
  name: default
spec: {}
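
Save the manifest above under any name (calico-install.yaml is used here as an example) and apply it the same way, then watch the operator roll Calico out through its tigerastatus resources:

hepto kubectl --cluster lab --node mynode -- create -f - < calico-install.yaml
# All components should eventually report AVAILABLE=True
hepto kubectl --cluster lab --node mynode -- get tigerastatus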

Deploying Cilium

Cilium has limited but real support for IPv6-only mode in its eBPF path. It still requires some iptables magic for masquerading, which is not supported by its eBPF programs.

Start by creating a Helm values file for the official Cilium chart:

# APIserver address, get it from hepto info
k8sServiceHost: "fdfd:9432:32b3:100:302:1a88:9151:9627"
k8sServicePort: 6443
cgroup:
  hostRoot: /sys/fs/cgroup
# Enable ipv6-only mode
ipv6:
  enabled: true
ipv4:
  enabled: false
# VTEP tunnel is not supported by Cilium without IPv4, switch to
# direct node routing instead
tunnel: disabled
autoDirectNodeRoutes: true
# Do not deploy default iptables rules, and disable kube-proxy
# support in favor of the eBPF-only service path, except for host
# traffic, which is not supported in IPv6-only mode
installIptablesRules: false
l7Proxy: false
bpf:
  hostLegacyRouting: true
# Tell Cilium to prefer the wireguard interface over public
# network
devices: wg0
# IPv6-only mode supports the cluster pool CIDR; get it from
# hepto info, /104 is the largest supported CIDR size
ipam:
  operator:
    clusterPoolIPv6PodCIDRList: "fdfd:9432:32b3:200::/104"
ipv6NativeRoutingCIDR: "fdfd:9432:32b3:200::/56"
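
The chart used below comes from the official Cilium Helm repository; if it is not configured yet, register it first:

# One-time setup of the official Cilium chart repository
helm repo add cilium https://helm.cilium.io/
helm repo update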

Then deploy the official Helm chart:

helm template cilium/cilium -f values.yaml | hepto kubectl --cluster lab --node mynode -- create -f -
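
As with Calico, give the agents a moment and check that they are running on every node; k8s-app=cilium is the label applied by the official chart:

# Cilium agents run as a DaemonSet in kube-system
hepto kubectl --cluster lab --node mynode -- -n kube-system get pods -l k8s-app=cilium -o wide
# Nodes should turn Ready once the CNI is up
hepto kubectl --cluster lab --node mynode -- get nodes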

Development

Hepto is being developed as part of an ongoing effort to provide decent infrastructure to small-scale geodistributed hosting providers. If your use case fits our philosophy, any help in maintaining the project is welcome.

Development requirements are:

  • git (both for cloning and build configuration)
  • go >= 1.19
  • ld for static linking
  • libseccomp headers

# On Ubuntu/Mint
sudo apt-get install libseccomp-dev build-essential pkg-config
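
A quick sanity check that the toolchain is in place before building:

# All of these should succeed on a properly prepared machine
git --version
go version          # must report go1.19 or newer
ld --version | head -n 1
pkg-config --exists libseccomp && echo "libseccomp headers found"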

Start by cloning the repository, then build the project:

# Some build configuration is declared as environment variables
source env
# Build a single binary
go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go
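
Since hepto ships as a single static binary, a quick way to check the build output (go build writes ./hepto here) is to confirm it is not dynamically linked:

# A fully static binary reports "not a dynamic executable"
file ./hepto
ldd ./hepto || true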

For details about the architecture, see the design documentation.