f4a4b45f
Commit
f4a4b45f
authored
2 years ago
by
kaiyou
Browse files
Options
Downloads
Patches
Plain Diff
Add documentation about setting up a CNI
parent
be4fe505
Showing 1 changed file: README.md (+102, −3)
...

@@ -48,6 +48,9 @@ Once the cluster has stabilized, you may run kubectl inside the node:

```
hepto kubectl --cluster lab --name mynode -- get node
```
At this point, and before setting up a CNI and exposing the Kubernetes API, `hepto kubectl` is the only supported way to access the cluster.
## Multi-node cluster

When running a multi-node cluster, at least one node in the cluster should have a stable

...

@@ -83,12 +86,108 @@ hepto start \

```
  --anchors dead::beef
```
## Setting up networking

Hepto comes without a CNI; it is tested against both Calico and Cilium.

### Deploying Calico
Calico is a powerful CNI that bundles an iptables-based CNI, an eBPF path for kube-proxy-less behavior, and network policy implementations.

Currently (`v3.25.0`), Calico support for IPv6 eBPF is limited. Until it is properly implemented (active work is in progress), Calico must be deployed with iptables support on top of `kube-proxy`.
Start by deploying `kube-proxy`: see and adapt `docs/kube-proxy.yaml`, especially the API server address and cluster CIDR as provided by `hepto info`, then:
```
hepto kubectl --cluster lab --node mynode -- create -f - < kube-proxy.yaml
```
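For illustration only, the fields in `docs/kube-proxy.yaml` that typically need adapting look roughly like the sketch below. This is a hypothetical excerpt, not the file shipped with hepto; both values are placeholders to replace with the output of `hepto info`:

```
# Hypothetical excerpt, adapt docs/kube-proxy.yaml instead of using this verbatim
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Pod CIDR as reported by hepto info
clusterCIDR: "fdfd:9432:32b3:200::/56"
clientConnection:
  # kubeconfig pointing at the API server address from hepto info
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
```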
Once stabilized, service routing should be set up in the cluster; it is time to enable Calico using the Tigera operator:
```
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml \
| hepto kubectl --cluster lab --node mynode -- create -f -
```
Then enable Calico using the dedicated CRD. This configuration enables VXLAN on
top of the mesh VPN. This is the most tested and probably most reliable Calico
mode, despite a slight overhead:
```
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Enforce the use of an IPv6 Docker registry for Calico images
  registry: registry.ipv6.docker.com
  calicoNetwork:
    ipPools:
      - blockSize: 122
        # This is the pod CIDR again
        cidr: fdfd:9432:32b3:200::/56
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```
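The resources above can be applied through `hepto kubectl` in the same way as the other manifests; the file name `calico-install.yaml` is just an example:

```
hepto kubectl --cluster lab --node mynode -- create -f - < calico-install.yaml
```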
### Deploying Cilium
Cilium has limited but existing support for IPv6-only mode in its eBPF path. It still requires some iptables magic for masquerading, which is not supported by the eBPF programs.
Start by creating a Helm values file for the official Cilium chart:
```
# API server address, get it from hepto info
k8sServiceHost: "fdfd:9432:32b3:100:302:1a88:9151:9627"
k8sServicePort: 6443
cgroup:
  hostRoot: /sys/fs/cgroup
# Enable IPv6-only mode
ipv6:
  enabled: true
ipv4:
  enabled: false
# VTEP tunnel is not supported by Cilium without IPv4, switch to
# direct node routing instead
tunnel: disabled
autoDirectNodeRoutes: true
# Do not deploy default iptables rules, and disable kube-proxy
# support in favor of an eBPF-only service path, except for host
# traffic which is not supported in IPv6-only mode
installIptablesRules: false
l7Proxy: false
bpf:
  hostLegacyRouting: true
# Tell Cilium to prefer the wireguard interface over the public
# network
devices: wg0
# IPv6-only supports cluster pool CIDR, get it from
# hepto info, /104 is the largest supported CIDR size
ipam:
  operator:
    clusterPoolIPv6PodCIDRList: "fdfd:9432:32b3:200::/104"
ipv6NativeRoutingCIDR: "fdfd:9432:32b3:200::/56"
```
Then deploy the official Helm chart:
```
helm template cilium/cilium -f values.yaml | hepto kubectl --cluster lab --node mynode -- create -f -
```
## Development

...