Tags: ubuntu, kubernetes, nfs

0.0.0.0/0 does not work in NFS exports configuration


I'm new to NFS (Network File System). I was trying to set up my own NFS share inside a k8s cluster. FYI, below are my IP settings.

# k8s cluster ip settings
master1 ansible_host=10.1.3.245 ip=10.1.3.245
node1 ansible_host=10.1.3.58 ip=10.1.3.58
node2 ansible_host=10.1.3.191 ip=10.1.3.191
node3 ansible_host=10.1.3.88 ip=10.1.3.88
node4 ansible_host=10.1.3.74 ip=10.1.3.74
node5 ansible_host=10.1.3.228 ip=10.1.3.228

All nodes run Ubuntu 18.04, and I run the NFS server on node1 (10.1.3.58). Below is the /etc/hosts file on node1.

# /etc/hosts
127.0.0.1 localhost localhost.localdomain

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback localhost6 localhost6.localdomain
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# Ansible inventory hosts BEGIN
10.1.3.58 node1.cluster.local node1
10.1.3.191 node2.cluster.local node2
10.1.3.88 node3.cluster.local node3
10.1.3.74 node4.cluster.local node4
10.1.3.228 node5.cluster.local node5
10.1.3.245 master1.cluster.local master1
# Ansible inventory hosts END

To set up the NFS server, I edited the /etc/exports file. According to my understanding, each line of /etc/exports has the following format: <path> <allowed_ips>(options).

For example, /mnt/node1nfsstorage 0.0.0.0/0(rw,sync,no_subtree_check,insecure) should mean: allow access to the NFS directory /mnt/node1nfsstorage from everywhere.

When I use the above config, I cannot access the NFS server (node1) from master1. (I opened port 2049, which is the default NFS port!) FYI, here is the command that I used.

# from master1
ubuntu@master1:~$ sudo mount 10.1.3.58:/mnt/node1nfsstorage /home/ubuntu/mount
mount.nfs: access denied by server while mounting 10.1.3.58:/mnt/node1nfsstorage

# from /var/log/syslog from node1
ubuntu@node1:/mnt$ tail -f /var/log/syslog | grep nfs
Jan 19 06:23:50 node1 kernel: [190747.809254] nfsd_dispatch: vers 4 proc 0  # I also cannot understand this log message

But when I changed the config to /mnt/node1nfsstorage *(rw,sync,no_subtree_check,insecure), it finally worked.

I think * is a wildcard for domain names and 0.0.0.0/0 represents an IP range. Why does only * work in my situation? Can anybody help me understand this? After some tests, I found that several other IPs and IP ranges do not work either, e.g. 0.0.0.0, 10.1.3.*, 10.1.0.0/16.


Solution

  • In the /etc/exports file we can configure our NFS server's shares. A typical entry has the following structure:

    export_directory host_designation(options)
    

    where:
    export_directory - the NFS share being exported
    host_designation - one or more hosts or networks allowed to access this export

    host_designation (the NFS clients) may be specified in a number of ways (each form is sketched in the example after this list):

    • single host
    • netgroups
    • multiple systems / wildcards
    • IP networks

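    For illustration, here is a minimal sketch of each form in an /etc/exports file (the hostnames, the @trusted netgroup and the /srv/share path are made up for this example):

    # single host, by name or IP address
    /srv/share  client1.cluster.local(rw,sync,no_subtree_check)
    # netgroup, defined in /etc/netgroup or NIS
    /srv/share  @trusted(rw,sync,no_subtree_check)
    # wildcard - intended for hostnames/FQDNs, not IP addresses
    /srv/share  *.cluster.local(ro,sync,no_subtree_check)
    # IP network, in CIDR (address/prefix) or address/netmask notation
    /srv/share  10.1.3.0/24(rw,sync,no_subtree_check)
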
    In your case you are using the wildcard method, which shouldn't be used with IP addresses because it may cause unexpected behaviour (I don't recommend using e.g. 10.1.3.*). The * and ? wildcards can be used with FQDNs or hostnames.

    In my opinion, 0.0.0.0/0 isn't valid syntax for NFS. If you need to export the share to everyone, you can use * or even leave host_designation blank, e.g.:

    /mnt/node1nfsstorage  (rw,sync,no_subtree_check,insecure)
    
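    If you would rather restrict access to your cluster's subnet instead of exporting to everyone, an IP-network entry should work (a sketch, assuming all of your nodes really sit in 10.1.3.0/24 - note the network address, not 10.1.3.* or 0.0.0.0/0):

    /mnt/node1nfsstorage  10.1.3.0/24(rw,sync,no_subtree_check,insecure)

    Keep in mind that the server only picks up changes to /etc/exports after re-exporting:

    sudo exportfs -ra   # re-export everything listed in /etc/exports
    sudo exportfs -v    # verify what is currently exported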

    You can find more information and examples in the exports(5) man page (man 5 exports).
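
    As a final check from a client such as master1, showmount can list what the server actually offers before you try to mount (these commands reuse the addresses and paths from the question):

    showmount -e 10.1.3.58
    sudo mount -t nfs 10.1.3.58:/mnt/node1nfsstorage /home/ubuntu/mount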