# **Hepto** is a **Kubernetes** distribution for geo-deployments
## Features

**Hepto** is a *very* opinionated **Kubernetes** distribution. Here are some of its design
choices, which we think make it interesting:

- single binary, no dependency except for the kernel,
- automatic cluster discovery based on gossip protocols,
- automatic cluster PKI setup,
- auto-containerization of the k8s stack,
- IPv6-only, from the ground up.

## Getting started

⚠️ **Hepto** is still highly experimental, so a quick disclaimer: please do not host anything
critical backed by **Hepto** at the moment, please back up your data, and please file issues
if you find bugs.

First, prepare your node. You will need:
- a public network interface with an available IPv6 address and a gateway (**Hepto** will start
  its own container and a network stack separate from the host, so that the **Kubernetes**
  controlplane is isolated),
- a recent-enough kernel (tested for >= 5.19),
- a securely generated secret key, e.g. `openssl rand -hex 24`, that will be shared
  among nodes and used to secure discovery.
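
The last two prerequisites can be handled from the shell. A minimal sketch (the version
parsing assumes a standard `uname -r` format such as `6.1.0-13-amd64`):

```
# Check that the running kernel is recent enough for hepto (tested for >= 5.19)
major=$(uname -r | cut -d. -f1)
minor=$(uname -r | cut -d. -f2)
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 19 ]; }; then
  echo "kernel ok: $(uname -r)"
else
  echo "kernel too old: $(uname -r)" >&2
fi

# Generate the shared security key: 24 random bytes, printed as 48 hex characters
openssl rand -hex 24
```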
In a general environment file, shared by all nodes, write the following config:
```
export HEPTO_CLUSTER=mycluster
export HEPTO_KEY=deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
```
Note that all **Hepto** options are available through environment variables; see
`hepto -help` for details.

## Run your first single-node cluster

To start your first cluster, run `hepto` in `full` mode, which embeds both a
`master` (apiserver, controller manager, etc.) and a `node` (kubelet, containerd)
inside a single process.
```
source docs/example.env
# -name:  the node name; data will be stored in /var/lib/mycluster/myfull
# -role:  here "full", embedding both a master and a node
# -iface: the host main network interface
# -ip:    optional, IPv6 for the instance, autoconfigured based on RA if unset
# -gw:    optional, IPv6 for the gateway, autoconfigured based on RA if unset
# -dns:   optional, IPv6 for DNS servers, Cloudflare DNS is used if unset
hepto -name myfull -role full -iface eth0 \
  -ip 2a00::dead:beef/64 -gw 2a00::1 -dns 2a00:1234::1234
```

All data, including a usable `kubeconfig`, will be stored in `<data>/<cluster name>/<node name>`.
Once the cluster has stabilized, you may run kubectl against the apiserver:

```
export KUBECONFIG=/var/lib/mycluster/myfull/kubeconfig
kubectl get all -A
```
## Multi-node cluster

When running a multi-node cluster, at least one node in the cluster must have a stable
IP address for discovery. We call that node an anchor: any other node is able to join the
cluster knowing the cluster name, the anchor IP address and the cluster security key.

It is customary but not mandatory for the anchor to also be the cluster master. Clusters
running over unstable networks may also have multiple anchors for safety (anchors are only
required when bootstrapping nodes).

Start by running a master node anywhere:

```
source docs/example.env
hepto -name mymaster -role master -iface eth0 -ip 2a00::dead:beef/64
```

Then on every other node, use the anchor IP address for discovery:

```
source docs/example.env
hepto -name mynode1 -role node -iface eth0 -anchor 2a00::dead:beef
```
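
Once discovery has converged, the new node should appear in the cluster. A quick check,
run from the master node (a sketch, assuming the `mymaster` example above and the default
data directory):

```
export KUBECONFIG=/var/lib/mycluster/mymaster/kubeconfig
kubectl get nodes -o wide
```
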
## Bootstrapping the cluster

**Hepto** comes with no batteries included. After starting the cluster, there is not even
a CNI installed for setting up your pod networking. **Hepto** is tested with the following
base stack for bootstrapping:

- `kube-proxy` for routing services, using `iptables`
- `calico` as a CNI, in IPv6 `iptables` mode
- `coredns` as the cluster DNS

The repository comes with a bootstrapping **Helm** chart in `helm/`, which takes cluster config
as parameters and installs these components. The `hepto -info` command outputs cluster
info as **YAML**, compatible with the bootstrapping chart:

```
hepto -iface eth0 -name myfull -info > cluster-info.yaml
helm install hepto ./helm -f cluster-info.yaml
```
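
If bootstrapping succeeded, the components listed above should eventually show up as
running pods. A sketch of a quick check (namespaces depend on the chart values, so we
simply list everything):

```
export KUBECONFIG=/var/lib/mycluster/myfull/kubeconfig
# kube-proxy, calico and coredns pods should eventually reach the Running state
kubectl get pods -A
```
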
## Deploying a cluster on many nodes using **Ansible**
### Existing nodes

This repository provides an **Ansible** role to deploy **Hepto** on a node. Start with an
inventory file listing your nodes and providing some variables; also add nodes to
the **master**, **anchor** and **public** groups.
See `ansible/inventories/sample-deploy.yaml` for an example.
Then run **Ansible**:

```
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/deploy.yaml
```

This will deploy **Hepto** on the given nodes and bootstrap the cluster using the
bootstrap **Helm** chart.

### Cloud nodes

If you wish to provision the nodes themselves in the cloud first (currently
**Hetzner** only), a separate playbook automates their deployment.

See `ansible/inventories/sample-cloud.yaml` for an example inventory file.
Then run **Ansible**:

```
cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/cloud.yaml
```

This will create the cloud nodes, set up **Hepto**, then bootstrap the cluster.

## Development

**Hepto** is being developed as part of an ongoing effort to provide decent
infrastructure to small-scale geodistributed hosting providers. If your use case
fits our philosophy, any help in maintaining the project is welcome.

Development requirements are:
- `git` (both for cloning and build configuration)
- `go` >= 1.19
- `ld` for static linking
- `libseccomp` headers
```
# On Ubuntu/Mint
sudo apt-get install libseccomp-dev build-essential pkg-config
```

Start by cloning the repository, then build the project:

```
# Some build configuration is declared as environment variables
source env
# Build a single binary
go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go
```
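
As a sanity check on the result (a sketch, assuming the build above succeeded; the exact
`file` output wording varies across platforms):

```
# The output should mention a statically linked executable
file ./hepto
./hepto -help
```
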

For details about the architecture, see [design documentation](docs/design.md).

### Embedding **Flux**

Our current design decision is to set up **Flux** in the bootstrap **Helm** chart. Since
**Flux** does not provide an independent Helm chart, we currently embed the Flux manifests
directly.

Those are generated automatically from Flux CLI:

```
flux install --export | yq ea '[.] | filter(.kind=="CustomResourceDefinition") | .[] | split_doc' > helm/crds/flux.yaml
flux install --namespace xxxxxx --export | yq ea '[.] | filter(.kind!="CustomResourceDefinition") | .[] | split_doc' | sed 's/xxxxxx/{{ .Values.flux.namespace }}/g' > helm/templates/flux.yaml
```

*[CNI]: Container Network Interface