Commit 5b809c0a, authored 1 year ago by lutangar
parent b84401a8 · Pipeline #29388 failed (stages: build, test)

docs: makes the README shinier

1 changed file: README.md (+42 additions, −40 deletions)
````diff
-# Hepto is a kubernetes distribution for geo deployments
+# **Hepto** is a **Kubernetes** distribution for geo-deployments

 ## Features

-Hepto is a *very* opinionated kubernetes distribution. Here are some design choices,
+**Hepto** is a *very* opinionated **Kubernetes** distribution. Here are some design choices,
 which we think are its features of interest:

 - single binary, no dependency except for the kernel,
````
````diff
@@ -13,13 +13,13 @@ which we think are its features of interest:

 ## Getting started

-Hepto is still highly experimental, so quick disclaimer: please do not host anything
+⚠️ **Hepto** is still highly experimental, so quick disclaimer: please do not host anything
 critical backed by hepto at the moment, please backup your data, please file issues
 if you find bugs.

 First prepare your node, you will need:

-- a public network interface with available IPv6 and a gateway (hepto will start
-  its own container and separate network stack from the host for the kubernetes
-  controlplane to be isolated),
+- a public network interface with available IPv6 and a gateway (**Hepto** will start
+  its own container and separate network stack from the host, in order for the
+  **Kubernetes** controlplane to be isolated),
 - a recent-enough kernel (tested for >= 5.19),
 - a secretly generated security key like `openssl rand -hex 24`, that will be shared
````
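The node prerequisites above can be checked before installing anything. A minimal preflight sketch — the comparison helper and the commented `ip` commands are illustrations, not part of the README:

```sh
# Preflight sketch for the node requirements above.
# kernel_ge MIN CUR succeeds when kernel version CUR >= MIN (sort -V compare).
kernel_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

cur=$(uname -r | cut -d- -f1)
if kernel_ge 5.19 "$cur"; then
  echo "kernel $cur OK (>= 5.19)"
else
  echo "kernel $cur too old (need >= 5.19)"
fi

# For the IPv6 requirement, inspect the public interface by hand
# (the interface name eth0 is an assumption):
#   ip -6 addr show dev eth0 scope global
#   ip -6 route show default
```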
````diff
@@ -27,12 +27,12 @@ First prepare your node, you will need:

 In a general environment file, shared by all nodes, write the following config:

-```bash
+```sh
 export HEPTO_CLUSTER=mycluster
 export HEPTO_KEY=deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
 ```

-Note that all hepto options are available through environment variables, see
+Note that all **Hepto** options are available through environment variables, see
 `hepto -help` for details.

 ## Run your first single-node cluster
````
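The security key from the prerequisites can be generated and dropped into this environment file in one step. A sketch — the file name `example.env` matches the later examples, but is otherwise arbitrary:

```sh
# Generate the shared security key (openssl rand -hex 24 yields 48 hex
# characters) and write the cluster-wide environment file described above.
key=$(openssl rand -hex 24)
cat > example.env <<EOF
export HEPTO_CLUSTER=mycluster
export HEPTO_KEY=$key
EOF

. ./example.env
echo "${#HEPTO_KEY}"    # prints 48
```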
````diff
@@ -41,7 +41,7 @@ To start your first cluster, run `hepto` in `full` mode, which embeds both a
 `master` (apiserver, controller manager, etc.) and a `node` (kubelet, containerd)
 inside a single process.

-```
+```sh
 source docs/example.env
 hepto \
   # The node name, data will be stored in /var/lib/mycluster/mymaster
````
````diff
@@ -58,10 +58,10 @@ hepto \
   -dns 2a00:1234::1234
 ```

-All data, including a usable kubeconfig, will be stored in
+All data, including a usable `kubeconfig`, will be stored in
 `<data>/<cluster name>/<node name>`.

 Once the cluster has stabilized, you may run kubectl against the apiserver:

-```
+```sh
 export KUBECONFIG=/var/lib/mycluster/myfull/kubeconfig
 kubectl get all -A
 ```
````
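The kubeconfig path follows the `<data>/<cluster name>/<node name>` convention. A sketch assuming the `/var/lib` data directory and the `mycluster`/`myfull` names used in the examples:

```sh
# Build the kubeconfig path from the convention above.
data=/var/lib        # data directory used in the README examples
cluster=mycluster
node=myfull
export KUBECONFIG="$data/$cluster/$node/kubeconfig"
echo "$KUBECONFIG"   # prints /var/lib/mycluster/myfull/kubeconfig
```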
````diff
@@ -78,92 +78,92 @@ required when bootstrapping nodes).

 Start by running a master node anywhere:

-```
+```sh
 source docs/example.env
 hepto -name mymaster -role master -iface eth0 -ip 2a00::dead:beef/64
 ```

 Then on every other node, use the anchor IP address for discovery:

-```
+```sh
 source docs/example.env
 hepto -name mynode1 -role node -iface eth0 -anchor 2a00::dead:beef
 ```

 ## Bootstrapping the cluster

-Hepto comes with no battery. After starting the cluster, there is not even a CNI installed
-for setting up your pods networking. Hepto is tested with the following base stack for
+**Hepto** comes with no batteries included. After starting the cluster, there is not even a CNI installed
+for setting up your pod networking. **Hepto** is tested with the following base stack for
 bootstrapping:

-- `kube-proxy` for routin services, using `iptables`
+- `kube-proxy` for routing services, using `iptables`
 - `calico` as a CNI, in IPv6 `iptables` mode
 - `coredns` as a cluster DNS

-The repository comes with a bootstrapping helm chart in `helm/`, that takes cluster config
+The repository comes with a bootstrapping **Helm** chart in `helm/`, which takes cluster config
 as parameters and installs these components. The `hepto -info` command outputs cluster
-info as Yaml, compatible with the bootstrapping chart.
+info as **YAML**, compatible with the bootstrapping chart.

-```
+```sh
 hepto -iface eth0 -name myfull -info > cluster-info.yaml
 helm install hepto ./helm -f cluster-info.yaml
 ```
````
````diff
-## Deploying a cluster on many nodes using Ansible
+## Deploying a cluster on many nodes using **Ansible**

 ### Existing nodes

-This repository provides an ansible role for deploying hepto on a node. Start with an
+This repository provides an ansible role to deploy **Hepto** on a node. Start with an
 inventory file listing your nodes, and providing some variables, also add nodes to
-the master, anchor and public groups.
+the **master**, **anchor** and **public** groups.
 See `ansible/inventories/sample-deploy.yaml` for an example.

-Then run Ansible:
+Then run **Ansible**:

-```
+```sh
 cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/deploy.yaml
 ```

-This will deploy hepto on the given nodes and bootstrap the cluster using the
+This will deploy **Hepto** on the given nodes and bootstrap the cluster using the
 bootstrap helm chart.

 ### Cloud nodes

-If you wish to use cloud provisioning for deploying the nodes in the first place
-(currently supporting Hetzner only), a separate playbook provides automatic
+If you wish to use cloud provisioning to deploy the nodes in the first place
+(currently supporting **Hetzner** only), a separate playbook provides automatic
 deployment of your nodes.
 See `ansible/inventories/sample-cloud.yaml` for an example inventory file.

-Then run Ansible:
+Then run **Ansible**:

-```
+```sh
 cd ansible && ansible-playbook -i inventories/inventory.yaml playbooks/cloud.yaml
 ```

-This will create cloud nodes, setup hepto, then bootstrap the cluster.
+This will create cloud nodes, set up **Hepto**, then bootstrap the cluster.
````
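The master/anchor/public grouping described above can be sketched as a minimal inventory. Everything here is hypothetical (host name, address, structure); the role's actual expected variables are in `ansible/inventories/sample-deploy.yaml`:

```yaml
# Hypothetical minimal inventory: a single node placed in all three groups
# named by the README (master, anchor, public).
all:
  hosts:
    mymaster:
      ansible_host: 2a00::dead:beef
  children:
    master:
      hosts:
        mymaster:
    anchor:
      hosts:
        mymaster:
    public:
      hosts:
        mymaster:
```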
````diff
 ## Development

-Hepto is being developped as part of an ongoing effort to provide decent
+**Hepto** is being developed as part of an ongoing effort to provide decent
 infrastructure to small-scale geodistributed hosting providers. If your use case
 fits our philosophy, any help is welcome in maintaining the project.

 Development requirements are:

-- git (both for cloning and build configuration)
-- go >= 1.19
-- ld for static linking
-- libseccomp headers
+- `git` (both for cloning and build configuration)
+- `go` >= 1.19
+- `ld` for static linking
+- `libseccomp` headers

-```
+```sh
 # On Ubuntu/Mint
 sudo apt-get install libseccomp-dev build-essential pkg-config
 ```
````
````diff
 Start by cloning the repository, then build the project:

-```
+```sh
 # Some build configuration is declared as environment variables
 source env
 # Build a single binary
````
````diff
@@ -172,15 +172,17 @@ go build -tags "$TAGS" -ldflags "$LDFLAGS" ./cmd/hepto.go

 For details about the architecture, see [design documentation](docs/design.md).

-### Embedding Flux
+### Embedding **Flux**

-Our current design decision is to setup Flux in the bootstrap helm chart. Since
-Flux does not provide an independent helm chart, we currently embed Flux manifests
+Our current design decision is to set up **Flux** in the bootstrap **Helm** chart. Since
+**Flux** does not provide an independent helm chart, we currently embed Flux manifests
 directly.

 Those are generated automatically from Flux CLI:

-```
+```sh
 flux install --export | yq ea '[.] | filter(.kind=="CustomResourceDefinition") | .[] | split_doc' > helm/crds/flux.yaml
 flux install --namespace xxxxxx --export | yq ea '[.] | filter(.kind!="CustomResourceDefinition") | .[] | split_doc' | sed 's/xxxxxx/{{ .Values.flux.namespace }}/g' > helm/templates/flux.yaml
 ```
+
+*[CNI]: Container Network Interface
\ No newline at end of file
````
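The second command in that pipeline relies on a plain `sed` substitution to turn the placeholder namespace into a Helm template expression. The trick in isolation, on a toy manifest fragment (the input here is illustrative, not real flux output):

```sh
# Replace the xxxxxx placeholder namespace with a Helm template expression,
# the same substitution the flux pipeline above performs.
printf 'metadata:\n  namespace: xxxxxx\n' \
  | sed 's/xxxxxx/{{ .Values.flux.namespace }}/g'
# prints:
# metadata:
#   namespace: {{ .Values.flux.namespace }}
```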