I have enabled privileged mode in the container and added the following rules to it:
iptables -N udp2rawDwrW_191630ce_C0
iptables -F udp2rawDwrW_191630ce_C0
iptables -I udp2rawDwrW_191630ce_C0 -j DROP
iptables -I INPUT -p tcp -m tcp --dport 4096 -j udp2rawDwrW_191630ce_C0
Then I kubectl exec into the container and run iptables --table filter -L, and I can see the added rules:
/ # iptables --table filter -L
Chain INPUT (policy ACCEPT)
target                   prot opt source    destination
udp2rawDwrW_191630ce_C0  tcp  --  anywhere  anywhere     tcp dpt:4096

Chain FORWARD (policy ACCEPT)
target                   prot opt source    destination

Chain OUTPUT (policy ACCEPT)
target                   prot opt source    destination

Chain udp2rawDwrW_191630ce_C0 (1 references)
target                   prot opt source    destination
DROP                     all  --  anywhere  anywhere
However, when I log into the node where the container lives and run sudo iptables --table filter -L, I cannot see the same rules.
I was thinking that privileged mode is dropped by default because the container might leverage it to change something like the node's iptables, but it does not look like that.
So my questions are: what is the relationship between the Kubernetes node's iptables and the iptables of a container inside a pod, and why do we stop users from modifying a container's iptables unless the privileged field is set?
If you want to manipulate the node's iptables, then you definitely need to put the pod on the host's network (hostNetwork: true in the pod's spec). After that, granting the container the NET_ADMIN and NET_RAW capabilities (in containers[i].securityContext.capabilities.add) is sufficient.
Example JSON slice:

"spec": {
  "hostNetwork": true,
  "containers": [{
    "name": "netadmin",
    "securityContext": {
      "capabilities": { "add": ["NET_ADMIN", "NET_RAW"] }
    }
  }]
}
I'm not sure whether privileged mode has anything to do with manipulating the host's iptables these days.
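To check this end to end, a rough sequence might be the following (the file and pod names refer to the placeholder manifest above, and the apk step is only needed because the Alpine image does not ship iptables):

# create the pod from the manifest above
kubectl apply -f netadmin-pod.json

# install iptables inside the pod (Alpine example)
kubectl exec netadmin -- apk add iptables

# add a test rule from inside the pod; with hostNetwork: true this hits the node's tables
kubectl exec netadmin -- iptables -I INPUT -p tcp --dport 4096 -j DROP

# the same rule should now be visible directly on the node
sudo iptables --table filter -L INPUT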