# Hepto: a Kubernetes distribution for geo-deployments
## Features
Hepto is a very opinionated Kubernetes distribution. Here are some of its design choices, which we think are its most interesting features:
- single binary, no dependency except for the kernel,
- automatic cluster discovery based on gossip protocols,
- automatic cluster PKI setup,
- auto-containerization of the k8s stack,
- IPv6-only, from the ground up.
## Getting started
⚠️ Hepto is still highly experimental, so a quick disclaimer: please do not host anything critical backed by hepto at the moment, please back up your data, and please file issues if you find bugs.
First prepare your node; you will need the following (a quick sanity check is sketched after this list):
- a public network interface with an available IPv6 address and a gateway (hepto will start its own container and a network stack separate from the host, so that the Kubernetes control plane is isolated),
- a recent-enough kernel (tested for >= 5.19),
- a securely generated security key (e.g. from `openssl rand -hex 24`) that will be shared among nodes and used to secure discovery.
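Before going further, you can sanity-check these requirements from a shell on the node (a minimal sketch; it assumes your public interface is named `eth0`):

```bash
# Kernel version, should be >= 5.19
uname -r
# Public IPv6 address on the interface (look for a global-scope address)
ip -6 addr show dev eth0 scope global
# IPv6 default route through the gateway
ip -6 route show default
```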
In a general environment file, shared by all nodes, write the following config:
```bash
export HEPTO_CLUSTER=mycluster
export HEPTO_KEY=deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
```
Note that all hepto options are available through environment variables; see `hepto -help` for details.
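For example, assuming the flag-to-variable mapping follows the `HEPTO_` prefix pattern of `HEPTO_CLUSTER` and `HEPTO_KEY` above (a sketch; check `hepto -help` for the exact names), the node name could presumably be exported up front instead of passed as a flag:

```bash
# Hypothetical HEPTO_NAME variable, assuming -name maps to HEPTO_NAME
export HEPTO_NAME=mymaster
hepto -role full -iface eth0
```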
## Run your first single-node cluster
To start your first cluster, run hepto in `full` mode, which embeds both a master (apiserver, controller manager, etc.) and a node (kubelet, containerd) inside a single process.
```bash
source docs/example.env
# -name:  the node name; data will be stored in /var/lib/mycluster/mymaster
# -role:  the role, here "full" (both master and node)
# -iface: the host main network interface
# -ip:    optional, IPv6 for the instance, autoconfigured from RAs if unset
# -gw:    optional, IPv6 for the gateway, autoconfigured from RAs if unset
# -dns:   optional, IPv6 for DNS servers, defaults to Cloudflare DNS if unset
hepto \
  -name mymaster \
  -role full \
  -iface eth0 \
  -ip 2a00::dead:beef/64 \
  -gw 2a00::1 \
  -dns 2a00:1234::1234
```
All data, including a usable kubeconfig, will be stored in `<data>/<cluster name>/<node name>`.
Once the cluster has stabilized, you may run `kubectl` against the apiserver:
```bash
export KUBECONFIG=/var/lib/mycluster/mymaster/kubeconfig
kubectl get all -A
```
## Multi-node cluster
When running a multi-node cluster, at least one node in the cluster must have a stable IP address for discovery. We call that node an anchor: any other node is able to join the cluster knowing the cluster name, anchor IP address and the cluster security key.
It is customary but not mandatory for the anchor to also be the cluster master. Clusters running over unstable networks may also have multiple anchors for safety (anchors are only required when bootstrapping nodes).
Start by running a master node anywhere:
```bash
source docs/example.env
hepto -name mymaster -role master -iface eth0 -ip 2a00::dead:beef/64
```
Then on every other node, use the anchor IP address for discovery:
```bash
source docs/example.env
hepto -name mynode1 -role node -iface eth0 -anchor 2a00::dead:beef
```
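Once both processes have settled, you can verify from the master that the node has joined, using the kubeconfig path described earlier:

```bash
export KUBECONFIG=/var/lib/mycluster/mymaster/kubeconfig
# Both mymaster and mynode1 should be listed; they will stay NotReady
# until a CNI is installed (see the bootstrapping section below)
kubectl get nodes -o wide
```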
## Bootstrapping the cluster
Hepto comes with no batteries included: after starting the cluster, there is not even a CNI installed for setting up your pod networking. Hepto is tested with the following base stack for bootstrapping:
- `kube-proxy` for routing services, using `iptables`,
- `calico` as a CNI, in IPv6 `iptables` mode,
- `coredns` as a cluster DNS.
The repository comes with a bootstrapping Helm chart in `helm/`, which takes the cluster config as parameters and installs these components. The `hepto -info` command outputs cluster info as YAML, compatible with the bootstrapping chart.
```bash
hepto -iface eth0 -name myfull -info > cluster-info.yaml
helm install hepto ./helm -f cluster-info.yaml
```
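You can then watch the bootstrap components come up; exact pod names and namespaces depend on the chart defaults:

```bash
# Confirm the release was installed
helm list
# Watch the kube-proxy, calico and coredns pods start
kubectl get pods -A -w
```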
## Deploying a cluster on many nodes using Ansible
### Existing nodes
This repository provides an Ansible role to deploy hepto on a node. Start with an inventory file listing your nodes and providing some variables; also add nodes to the `master`, `anchor` and `public` groups.
See `ansible/inventories/sample-deploy.yaml` for an example.
Then run Ansible:
```bash
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/deploy.yaml
```
This will deploy Hepto on the given nodes and bootstrap the cluster using the bootstrap helm chart.
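To double-check group membership before running the playbook, standard Ansible tooling can render the inventory as a graph (not hepto-specific):

```bash
cd ansible
# Lists each group (master, anchor, public, ...) with its nodes
ansible-inventory -i inventories/inventory.yaml --graph
```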
### Cloud nodes
If you wish to use cloud provisioning to deploy the nodes in the first place (currently Hetzner only), a separate playbook provides automatic deployment of your nodes.
See `ansible/inventories/sample-cloud.yaml` for an example inventory file.
Then run Ansible:
```bash
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/cloud.yaml
```
This will create the cloud nodes, set up hepto, then bootstrap the cluster.
## Development
Hepto is being developed as part of an ongoing effort to provide decent infrastructure to small-scale geodistributed hosting providers. If your use case fits our philosophy, any help in maintaining the project is welcome.
Development requirements are:
- `git` (both for cloning and build configuration),
- `go` >= 1.19,
- `ld` for static linking,
- `libseccomp` headers.
```bash
# On Ubuntu/Mint
sudo apt-get install libseccomp-dev build-essential pkg-config
```
Start by cloning the repository, then build the project:
```bash
# Some build configuration is declared as environment variables
source env
# Build a single binary
go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go
```
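Since hepto is meant to ship as a single static binary, you can verify the build result with standard tooling (not part of hepto itself):

```bash
# A fully static binary reports "statically linked" here
file ./hepto
# ldd prints "not a dynamic executable" for a static binary and exits nonzero
ldd ./hepto || true
```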
For details about the architecture, see the design documentation.
### Embedding Flux
Our current design decision is to set up Flux in the bootstrap Helm chart. Since Flux does not provide an independent Helm chart, we currently embed the Flux manifests directly.
Those are generated automatically from Flux CLI:
```bash
flux install --export | yq ea '[.] | filter(.kind=="CustomResourceDefinition") | .[] | split_doc' > helm/crds/flux.yaml
flux install --namespace xxxxxx --export | yq ea '[.] | filter(.kind!="CustomResourceDefinition") | .[] | split_doc' | sed 's/xxxxxx/{{ .Values.flux.namespace }}/g' > helm/templates/flux.yaml
```
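To check that the embedded manifests still render with the templated namespace, Helm can render them locally (a sketch; it assumes the chart exposes the `flux.namespace` value used in the sed substitution above):

```bash
# Render only the Flux manifests with a test namespace, without installing
helm template hepto ./helm -f cluster-info.yaml \
  --set flux.namespace=flux-system \
  --show-only templates/flux.yaml | head -n 40
```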
*[CNI]: Container Network Interface