# **Hepto** is a **Kubernetes** distribution for geo-deployments
**Hepto** is a *very* opinionated **Kubernetes** distribution. Here are some design choices,
which we think are its features of interest:
- single binary, no dependency except for the kernel,
- automatic cluster discovery based on gossip protocols,
- automatic cluster PKI setup,
- auto-containerization of the k8s stack,
- IPv6-only, from the ground up.
## Getting started
⚠️ **Hepto** is still highly experimental, so quick disclaimer: please do not host anything
critical backed by hepto at the moment, please backup your data, please file issues
if you find bugs.
First prepare your node, you will need:
- a public network interface with an available IPv6 address and a gateway (**Hepto** will start
its own container with a network stack separate from the host, so that the **Kubernetes**
stack does not interfere with host networking),
- a recent-enough kernel (tested for >= 5.19),
- a securely generated secret key, e.g. using `openssl rand -hex 24`, that will be
shared among nodes and used to secure discovery.
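As a quick sketch of these preparation steps (plain shell, nothing hepto-specific; distributing the key to the other nodes is left to you):

```
# Check the kernel version (hepto is tested on kernels >= 5.19)
uname -r
# Generate the shared secret once, then copy it to every node
openssl rand -hex 24
```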
In a general environment file, shared by all nodes, write the following config:

```
export HEPTO_CLUSTER=mycluster
export HEPTO_KEY=deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
```
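This file then just needs to be loaded into the shell before launching **Hepto** on each node (the `hepto.env` file name here is an illustrative assumption):

```
# Load the shared configuration; hepto.env is a hypothetical file name
. ./hepto.env
```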
Note that all **Hepto** options are available through environment variables; see the
command-line help for the full list.
To start your first cluster, run `hepto` in `full` mode, which embeds both a
`master` (apiserver, controller manager, etc.) and a `node` (kubelet, containerd)
inside a single process.
```
# -name:  the node name, data will be stored in /var/lib/mycluster/myfull
# -role:  full mode embeds both a master and a node
# -iface: the host main network interface
# -ip:    optional, IPv6 for the instance, will autoconfigure based on RA if unset
# -gw:    optional, IPv6 for the gateway, will autoconfigure based on RA if unset
# -dns:   optional, IPv6 for dns servers, will use Cloudflare DNS if unset
hepto -name myfull -role full -iface eth0 \
  -ip 2a00::dead:beef/64 -gw 2a00::1 -dns 2a00:1234::1234
```
All data, including a usable `kubeconfig`, will be stored in `<data>/<cluster name>/<node name>`.
Once the cluster has stabilized, you may run kubectl against the apiserver:

```
export KUBECONFIG=/var/lib/mycluster/myfull/kubeconfig
kubectl get all -A
```
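To wait for stabilization instead of polling by hand, standard `kubectl` machinery works here (nothing hepto-specific):

```
# Block until the node reports Ready, up to 5 minutes
kubectl wait --for=condition=Ready node --all --timeout=300s
```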
When running a multi-node cluster, at least one node in the cluster must have a stable
IP address for discovery. We call that node an anchor: any other node is able to join the
cluster knowing the cluster name, anchor IP address and the cluster security key.
It is customary but not mandatory for the anchor to also be the cluster master. Clusters
running over unstable networks may also have multiple anchors for safety (anchors are only
required when bootstrapping nodes).
Start by running a master node anywhere:

```
hepto -name mymaster -role master -iface eth0 -ip 2a00::dead:beef/64
```
Then on every other node, use the anchor IP address for discovery:

```
hepto -name mynode1 -role node -iface eth0 -anchor 2a00::dead:beef
```
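Once discovery has converged, the new node should show up in the API, using the kubeconfig from the master's data directory (plain kubectl):

```
# The joining node should appear, initially NotReady until a CNI is installed
kubectl get nodes -o wide
```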
**Hepto** comes with no batteries included. After starting the cluster, there is not even a CNI
installed for setting up your pod networking. **Hepto** is tested with the following base stack
for networking:
- `kube-proxy` for routing services, using `iptables`
- `calico` as a CNI, in IPv6 `iptables` mode
- `coredns` as a cluster DNS
The repository comes with a bootstrapping **Helm** chart in `helm/`, which takes cluster config
as parameters and installs these components. The `hepto -info` command outputs cluster
info as **YAML**, compatible with the bootstrapping chart.
```
hepto -iface eth0 -name myfull -info > cluster-info.yaml
helm install hepto ./helm -f cluster-info.yaml
```
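After the chart settles, the node should flip to Ready. A quick check (the exact namespaces depend on the chart's values):

```
# List everything the bootstrap chart deployed, across all namespaces
kubectl get pods -A
kubectl get nodes
```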
## Deploying a cluster on many nodes using **Ansible**
This repository provides an ansible role to deploy **Hepto** on a node. Start with an
inventory file listing your nodes and providing some variables; also add nodes to
the **master**, **anchor** and **public** groups.
See `ansible/inventories/sample-deploy.yaml` for an example.
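As a rough illustration of the expected shape (hostnames are placeholders; refer to the sample file for the actual variables):

```
all:
  hosts:
    node1.example.org:
    node2.example.org:
  children:
    master:
      hosts:
        node1.example.org:
    anchor:
      hosts:
        node1.example.org:
    public:
      hosts:
        node1.example.org:
        node2.example.org:
```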
```
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/deploy.yaml
```
This will deploy **Hepto** on the given nodes and bootstrap the cluster using the
bootstrapping **Helm** chart.
If you wish to use cloud provisioning to deploy the nodes in the first place
(currently supporting **Hetzner** only), a separate playbook provides automatic
provisioning.
See `ansible/inventories/sample-cloud.yaml` for an example inventory file.
```
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/cloud.yaml
```
This will create cloud nodes, setup **Hepto**, then bootstrap the cluster.
## Contributing

**Hepto** is being developed as part of an ongoing effort to provide decent
infrastructure to small-scale geodistributed hosting providers. If your use case
fits our philosophy, any help is welcome in maintaining the project.
Development requirements are:
- `git` (both for cloning and build configuration)
- `go` >= 1.19
- `ld` for static linking
- `libseccomp` headers
```
# On Ubuntu/Mint
sudo apt-get install libseccomp-dev build-essential pkg-config
```
Start by cloning the repository, then build the project:
```
# Some build configuration is declared as environment variables
# Build a single binary
go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go
```
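Since the binary is meant to be fully static, you can sanity-check the result with standard tooling (not hepto-specific):

```
# A static binary makes ldd report "not a dynamic executable"
file ./hepto
ldd ./hepto || true
```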
For details about the architecture, see [design documentation](docs/design.md).
## Note on **Flux**

Our current design decision is to set up **Flux** in the bootstrap **Helm** chart. Since
**Flux** does not provide an independent helm chart, we currently embed the **Flux** manifests
directly.
These are generated automatically from the **Flux** CLI:

```
flux install --export | yq ea '[.] | filter(.kind=="CustomResourceDefinition") | .[] | split_doc' > helm/crds/flux.yaml
flux install --namespace xxxxxx --export | yq ea '[.] | filter(.kind!="CustomResourceDefinition") | .[] | split_doc' | sed 's/xxxxxx/{{ .Values.flux.namespace }}/g' > helm/templates/flux.yaml
```