Kubernetes on ODROID N2 cluster

evrflx
Posts: 10
Joined: Fri Apr 05, 2019 4:16 am
languages_spoken: english
ODROIDs: 4 C2, 2 XU4, 5 N2
Has thanked: 0
Been thanked: 2 times
Contact:

Kubernetes on ODROID N2 cluster

Unread post by evrflx » Tue May 21, 2019 4:59 pm

I successfully created a kubernetes cluster with ODROID N2 on ArchLinux ARM.
In case someone would like to benefit from what I learned, here is my manual: https://www.trion.de/news/2019/05/06/ku ... id-n2.html

Note: the manual kernel compilation is no longer required; the ArchLinux ARM kernel config has been fixed.
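If you want to double-check that your image already ships the required options before starting, grepping the running kernel config is enough. This is just a sketch; the option names are the ones this setup depends on:

Code: Select all

$ zgrep -E 'CONFIG_NETFILTER_XT_SET|CONFIG_NETFILTER_XTABLES|CONFIG_CGROUP_PIDS' /proc/config.gz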
These users thanked the author evrflx for the post (total 2):
mad_ady (Tue May 21, 2019 8:09 pm) • gneville (Sun Jun 09, 2019 5:31 am)

gneville
Posts: 13
Joined: Sun Jun 09, 2019 5:24 am
languages_spoken: english
ODROIDs: n2
Has thanked: 1 time
Been thanked: 0
Contact:

Re: Kubernetes on ODROID N2 cluster

Unread post by gneville » Sun Jun 09, 2019 5:31 am

Thanks evrflx for the tutorial. I've managed to set up my multiple masters and a single worker, and all the kube-system pods are up and running, but now I'm trying to deploy an ingress controller and I'm having issues. Any suggestions on where I'm going wrong?

I have Arch Linux with Kernel 4.9.177-3

Code: Select all

[gneville@k8s-master-0 haproxy-ingress-setup]$ uname -r
4.9.177-3-ARCH
The modules listed in your blog are built:

Code: Select all

[gneville@k8s-master-0 haproxy-ingress-setup]$ zgrep XT_SET /proc/config.gz
CONFIG_NETFILTER_XT_SET=m

[gneville@k8s-master-0 haproxy-ingress-setup]$ zgrep CONFIG_NETFILTER_XTABLES /proc/config.gz
CONFIG_NETFILTER_XTABLES=m

[gneville@k8s-master-0 haproxy-ingress-setup]$ zgrep CGROUP_PIDS /proc/config.gz
CONFIG_CGROUP_PIDS=y
All the nodes are ready and all pods in kube-system are running:

Code: Select all

[gneville@k8s-master-0 haproxy-ingress-setup]$ kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
k8s-master-0   Ready    master   101m   v1.14.1
k8s-master-1   Ready    master   79m    v1.14.1
k8s-master-2   Ready    master   57m    v1.14.1
k8s-worker-0   Ready    <none>   45m    v1.14.1

[gneville@k8s-master-0 haproxy-ingress-setup]$ kubectl -n kube-system get pods
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-skbz5                1/1     Running   1          102m
coredns-fb8b8dccf-w42dg                1/1     Running   1          102m
etcd-k8s-master-0                      1/1     Running   2          101m
etcd-k8s-master-1                      1/1     Running   0          79m
etcd-k8s-master-2                      1/1     Running   0          57m
kube-apiserver-k8s-master-0            1/1     Running   2          101m
kube-apiserver-k8s-master-1            1/1     Running   0          79m
kube-apiserver-k8s-master-2            1/1     Running   0          57m
kube-controller-manager-k8s-master-0   1/1     Running   3          101m
kube-controller-manager-k8s-master-1   1/1     Running   0          79m
kube-controller-manager-k8s-master-2   1/1     Running   0          57m
kube-proxy-chptw                       1/1     Running   0          79m
kube-proxy-qjwsw                       1/1     Running   0          57m
kube-proxy-s7d5q                       1/1     Running   2          102m
kube-proxy-xdqbs                       1/1     Running   0          46m
kube-scheduler-k8s-master-0            1/1     Running   3          101m
kube-scheduler-k8s-master-1            1/1     Running   0          79m
kube-scheduler-k8s-master-2            1/1     Running   0          57m
weave-net-59zwf                        2/2     Running   4          98m
weave-net-7bpqr                        2/2     Running   0          57m
weave-net-fwn2q                        2/2     Running   1          79m
weave-net-h6mvw                        2/2     Running   1          46m

I've tried deploying both the haproxy and nginx ingress controllers, e.g.:

Code: Select all

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
However, they never start; checking the state, I see the following:

Code: Select all

kubectl -n ingress-nginx describe pod nginx-ingress-controller-5694ccb578-lxcq7

Warning  FailedCreatePodSandBox  13s (x4 over 55s)  kubelet, k8s-worker-0  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to get network status for pod sandbox k8s_nginx-ingress-controller-5694ccb578-lxcq7_ingress-nginx_9c659f42-8a2a-11e9-b618-001e06420152_0(a9182081e4c6292c8f722cb338c010eb85d3c393a28494d2f881e09591ee52af): Unexpected command output RTNETLINK answers: No such device
Device "eth0" does not exist.
 with error: exit status 1
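If it helps with debugging, I can also dump the CNI state from the worker, e.g. check the CNI config and the weave bridge (assuming the default CNI paths, which may differ in this setup):

Code: Select all

ls /etc/cni/net.d/
ls /opt/cni/bin/
ip link show weave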

evrflx
Posts: 10
Joined: Fri Apr 05, 2019 4:16 am
languages_spoken: english
ODROIDs: 4 C2, 2 XU4, 5 N2
Has thanked: 0
Been thanked: 2 times
Contact:

Re: Kubernetes on ODROID N2 cluster

Unread post by evrflx » Sun Jun 09, 2019 8:37 pm

Thanks for checking out the kubernetes setup!

Could you verify that you have added the kernel modules for automatic loading after a reboot and activated them for the current session:

Code: Select all

$ sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
$ sudo sh -c 'echo "xt_set" > /etc/modules-load.d/xt_set.conf'
$ sudo modprobe br_netfilter xt_set
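
You can confirm that both modules are actually loaded, and that bridged traffic is handed to iptables, with something like:

Code: Select all

$ lsmod | grep -E 'br_netfilter|xt_set'
$ sysctl net.bridge.bridge-nf-call-iptables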

gneville
Posts: 13
Joined: Sun Jun 09, 2019 5:24 am
languages_spoken: english
ODROIDs: n2
Has thanked: 1 time
Been thanked: 0
Contact:

Re: Kubernetes on ODROID N2 cluster

Unread post by gneville » Mon Jun 10, 2019 4:03 am

I've built the cluster again, this time with just one master and one worker. I've confirmed that the modules are set to load on boot, and I performed a modprobe before starting kubeadm, but there is no change:

Master:

Code: Select all

[gneville@k8s-master-0 ~]$ cat /etc/modules-load.d/br_netfilter.conf
br_netfilter
[gneville@k8s-master-0 ~]$ cat /etc/modules-load.d/xt_set.conf
xt_set
[gneville@k8s-master-0 ~]$ sudo modprobe br_netfilter xt_set
[gneville@k8s-master-0 ~]$
Worker:

Code: Select all

[gneville@k8s-worker-0 ~]$ cat /etc/modules-load.d/br_netfilter.conf
br_netfilter
[gneville@k8s-worker-0 ~]$ cat /etc/modules-load.d/xt_set.conf
xt_set
[gneville@k8s-worker-0 ~]$ sudo modprobe br_netfilter xt_set
[gneville@k8s-worker-0 ~]$

Code: Select all

[gneville@k8s-master-0 ~]$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-0   Ready    master   17m   v1.14.1
k8s-worker-0   Ready    <none>   8s    v1.14.1

Code: Select all

[gneville@k8s-master-0 ~]$ kubectl -n kube-system get pods
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-b8pcx                1/1     Running   1          18m
coredns-fb8b8dccf-pq79h                1/1     Running   1          18m
etcd-k8s-master-0                      1/1     Running   3          16m
kube-apiserver-k8s-master-0            1/1     Running   3          16m
kube-controller-manager-k8s-master-0   1/1     Running   3          16m
kube-proxy-5nkbh                       1/1     Running   0          59s
kube-proxy-vclng                       1/1     Running   3          18m
kube-scheduler-k8s-master-0            1/1     Running   3          16m
weave-net-6fqns                        2/2     Running   5          17m
weave-net-9n6fj                        2/2     Running   1          59s

The problem occurs when I start the haproxy ingress controller deployment:

Code: Select all

[gneville@k8s-master-0 ~]$ kubectl create -f https://raw.githubusercontent.com/jcmoraisjr/haproxy-ingress/master/docs/haproxy-ingress.yaml
namespace/ingress-controller created
serviceaccount/ingress-controller created
clusterrole.rbac.authorization.k8s.io/ingress-controller created
role.rbac.authorization.k8s.io/ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
rolebinding.rbac.authorization.k8s.io/ingress-controller created
deployment.extensions/ingress-default-backend created
service/ingress-default-backend created
configmap/haproxy-ingress created
daemonset.extensions/haproxy-ingress created

Code: Select all

[gneville@k8s-master-0 ~]$ kubectl -n ingress-controller get pod -o wide
NAME                                       READY   STATUS              RESTARTS   AGE   IP       NODE           NOMINATED NODE   READINESS GATES
ingress-default-backend-74755f9c44-qmfw6   0/1     ContainerCreating   0          20s   <none>   k8s-worker-0   <none>           <none>

Code: Select all

Events:
  Type     Reason                  Age   From                   Message
  ----     ------                  ----  ----                   -------
  Normal   Scheduled               47s   default-scheduler      Successfully assigned ingress-controller/ingress-default-backend-74755f9c44-qmfw6 to k8s-worker-0
  Warning  FailedCreatePodSandBox  45s   kubelet, k8s-worker-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed to get network status for pod sandbox k8s_ingress-default-backend-74755f9c44-qmfw6_ingress-controller_eab3b690-8ae6-11e9-bddf-001e064212b5_0(74a4e3c919d98e9f9759487969984afe0cdb36d4ae26f3296c63a9881e8b9f01): Unexpected command output RTNETLINK answers: No such device
Device "eth0" does not exist.
 with error: exit status 1
journalctl -xe shows nothing useful.

evrflx
Posts: 10
Joined: Fri Apr 05, 2019 4:16 am
languages_spoken: english
ODROIDs: 4 C2, 2 XU4, 5 N2
Has thanked: 0
Been thanked: 2 times
Contact:

Re: Kubernetes on ODROID N2 cluster

Unread post by evrflx » Mon Jun 10, 2019 5:37 am

I assume that something is still not right with your container networking, which is why the ingress does not start correctly; the root cause, however, has to be looked for somewhere other than the ingress container itself.
Since the journal on the nodes does not yield helpful data, I suggest checking the logs of the weave pods:

Code: Select all

kubectl -n kube-system logs weave-net-6fqns 
and

Code: Select all

kubectl -n kube-system logs weave-net-9n6fj
Please check that you have the CNI package installed on the nodes.
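On Arch Linux ARM that is usually the cni-plugins package (package name assumed here), and the plugin binaries should end up in /opt/cni/bin. A quick check on every node could look like this:

Code: Select all

$ pacman -Qi cni-plugins
$ ls /opt/cni/bin/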
