Background
CoreOS's coreos-kubernetes project supports a multi-node cluster on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud-config for the etcd node, but none for the worker nodes. For the workers, the Vagrantfile references shell scripts, which contain some cloud-config but are mostly Kubernetes YAML:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount an NFS directory onto the CoreOS worker nodes, for use in a Kubernetes pod. From what I've read in the Kubernetes docs and tutorials, I want to mount it on the node first as a persistent volume, like this Docker-based example:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts saying that mounting in the pod itself can be buggy, and I want to avoid that by mounting on the CoreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting right in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud config for the worker node? I'm about to start hacking on that shell script, but thought I should check first. I looked through the docs but couldn't find any.
This is the CoreOS cloud-config I'm trying to add to the Vagrantfile:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
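For reference, the relevant part of that page boils down to a systemd mount unit declared in cloud-config. A minimal sketch of what I wanted to add (nfs.example.com and /export/data are placeholders for my setup):

#cloud-config

coreos:
  units:
    # systemd requires the unit name to encode the mount point:
    # /mnt/data -> mnt-data.mount
    - name: mnt-data.mount
      command: start
      content: |
        [Mount]
        What=nfs.example.com:/export/data
        Where=/mnt/data
        Type=nfs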
Answer
No NFS mount on CoreOS is needed. Kubernetes will do it for you, right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
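For example, a pod can declare the export as an nfs volume directly in its spec. A minimal sketch, separate from the linked example (server and path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: nfs
      mountPath: /mnt          # the export shows up here inside the container
  volumes:
  - name: nfs
    nfs:
      server: nfs.example.com  # placeholder NFS server
      path: /export/data       # placeholder export path

Kubernetes performs the mount on the node on the pod's behalf, so no cloud-config changes are needed.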
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you ssh into the Kubernetes node and run docker run -it <image> /bin/bash, they are mounted in the Kubernetes pod: the mounts live in the pod's mount namespace, not the node's. That's where most of my misunderstanding occurred. I guess you have to add the mount parameters to the command when doing it manually.
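If you do want to see the export from a throwaway container on the node, one approach is to perform the mount yourself. A sketch (the server and export path are placeholders, and mounting requires a privileged container):

docker run -it --privileged busybox /bin/sh -c \
  'mount -t nfs -o nolock nfs.example.com:/export/data /mnt && ls /mnt'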
Additionally, my application, gogs, stores its config files in /data. To get it to work, I first mounted the NFS export at /mnt. Then, as in the Kubernetes nfs-busybox example, I created a command that copies everything in /data to /mnt. In the replication controller YAML, under the container node, I put a command:
command:
- sh
- -c
- 'sleep 300; cp -a /data /mnt'
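For context, the surrounding container section of my replication controller looked roughly like this (the image, volume name, and NFS server details are placeholders for my setup, not part of the linked example):

containers:
- name: gogs
  image: gogs/gogs
  command:
  - sh
  - -c
  - 'sleep 300; cp -a /data /mnt'
  volumeMounts:
  - name: nfs
    mountPath: /mnt          # NFS export mounted here while copying
volumes:
- name: nfs
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /export/gogs       # placeholder export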
This gave me enough time to run the initial config of my app. Then I just waited until the sleep time was up and the files were copied over.
I then changed my mount point to /data, and now the app starts right where it left off when the pod restarts. Coupled with an external MySQL server, so far it looks like the pod is stateless.
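The final state, sketched with the same placeholder names, just mounts the NFS volume directly at /data, where gogs expects its files:

volumeMounts:
- name: nfs
  mountPath: /data   # gogs now reads and writes its data straight to NFS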