
Multicast packet not arriving inside podman. A workaround was found, but is this a firewalld issue or a podman issue?


I am going crazy with firewalld, podman and UDP multicast. While I can see UDP packets arriving inside podman (confirmed using tcpdump), I seem unable to make this work with a custom firewalld zone named knx_multicast that should accept UDP packets only for the multicast group 224.0.23.12 on port 3671.

Given minimal example, written in Java:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

public class Test {
    public static void main(String[] args) throws Throwable {
        final var group = InetAddress.getByName("224.0.23.12");
        final var s = new MulticastSocket(3671);

        // Bind the multicast membership to a specific interface.
        // Note: joinGroup(InetAddress) is deprecated on JDK 14+ in favor of
        // joinGroup(SocketAddress, NetworkInterface), but works on JDK 11.
        final var ni = NetworkInterface.getByName("enp1s0");
        s.setNetworkInterface(ni);
        s.joinGroup(group);

        System.out.println("Start listening ... @" + ni);

        // Block until the first datagram arrives.
        final var buf = new byte[1000];
        DatagramPacket recv = new DatagramPacket(buf, buf.length);
        s.receive(recv);

        // Prints the byte array's default toString() (e.g. [B@61a52fbd),
        // not the payload itself.
        System.out.println(recv.getData());

        s.leaveGroup(group);
        s.close();
    }
}

I have the firewalld configured as:

knx_multicast (active)
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 224.0.23.12
  services: 
  ports: 3671/udp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 


public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
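For reference, a zone like knx_multicast above can be created with commands along these lines (a sketch; the exact commands used were not part of my original notes):

```shell
# create the zone and restrict it to the multicast group / port
firewall-cmd --permanent --new-zone=knx_multicast
firewall-cmd --permanent --zone=knx_multicast --add-source=224.0.23.12
firewall-cmd --permanent --zone=knx_multicast --add-port=3671/udp
firewall-cmd --reload
```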

Testing multicast packet on CentOS 8.1

I first tested directly on CentOS 8.1, and it works: I get some data (see [B@61a52fbd below)

[root@PIT-Server ~]# javac Test.java && java Test
Start listening ... @name:enp1s0 (enp1s0)
[B@61a52fbd
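As an aside, [B@61a52fbd is only the byte array's default toString(), not the payload. To inspect the actual bytes, print recv.getLength() bytes, e.g. as hex. A self-contained loopback sketch (hypothetical PayloadDemo class, no multicast or firewall involved):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class PayloadDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // ephemeral port
             DatagramSocket sender = new DatagramSocket()) {
            // Send ourselves a small datagram over loopback.
            byte[] payload = {0x06, 0x10, 0x05, 0x30};
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            DatagramPacket recv = new DatagramPacket(new byte[1000], 1000);
            receiver.receive(recv);

            // Print only the bytes actually received, as hex.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < recv.getLength(); i++) {
                sb.append(String.format("%02X ", recv.getData()[i] & 0xFF));
            }
            System.out.println(sb.toString().trim()); // 06 10 05 30
        }
    }
}
```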

Testing multicast packet using PODMAN on CentOS 8.1

The next step is to test within a podman container (image 'adoptopenjdk/openjdk11:latest', which is based on "Ubuntu 18.04.3 LTS"), started with: podman run --rm -it --net host docker.io/adoptopenjdk/openjdk11 /bin/bash

Inside the podman container I also see the UDP packets arriving from PIT-KNX (a KNX router).

root@PIT-Server:/# tcpdump -i enp1s0 udp port 3671
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp1s0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:49:35.583901 IP PIT-KNX.pit-router.3671 > 224.0.23.12.3671: UDP, length 17
19:49:36.032139 IP PIT-KNX.pit-router.3671 > 224.0.23.12.3671: UDP, length 18
... lines omitted ...

Starting the same Java application (which worked outside the container environment), I am unable to get any data; no byte array arrives after "Start listening":

root@PIT-Server:/# javac Test.java && java Test
Start listening ... @name:enp1s0 (enp1s0)
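Unrelated to the firewall problem, a receive timeout makes this kind of debugging less painful, because the program then fails fast instead of blocking forever. A sketch (hypothetical TimeoutDemo class, bound to an ephemeral port so it runs anywhere):

```java
import java.net.DatagramPacket;
import java.net.MulticastSocket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (MulticastSocket s = new MulticastSocket(0)) { // any free port
            s.setSoTimeout(100); // fail fast instead of blocking forever

            DatagramPacket recv = new DatagramPacket(new byte[1000], 1000);
            try {
                s.receive(recv);
                System.out.println("got " + recv.getLength() + " bytes");
            } catch (SocketTimeoutException e) {
                // Expected here: nothing is sending to this ephemeral port.
                System.out.println("no packet within timeout");
            }
        }
    }
}
```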

Workaround (firewalld)

After several hours/coffees of investigation I figured out that allowing the port in zone=knx_multicast is not enough: I have to add the port to zone=public as well, using firewall-cmd --add-port=3671/udp. The firewalld config is now:

knx_multicast (active)
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 224.0.23.12
  services: 
  ports: 3671/udp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 


public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 3671/udp    <== ADDED!!!! (that one fixes the problem)
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Re-Testing multicast packet using PODMAN on CentOS 8.1

Re-running the Java application, I am now able to see the arriving UDP multicast packet (see [B@61a52fbd below):

root@PIT-Server:/# javac Test.java && java Test
Start listening ... @name:enp1s0 (enp1s0)
[B@61a52fbd

My questions ... what happened? Next steps?

Can anyone help me understand what exactly the issue is? Why do I have to add the port to zone=public as well? Is this a bug or a configuration issue on my side? How can I resolve it without adding the port to zone=public? Am I misunderstanding something?

I would have been more comfortable adding only the new firewalld zone (knx_multicast) and keeping the configuration of the public zone untouched. Suggestions?

Thank you, Christoph


Solution

  • Thanks to @Ron Maupin for pointing out the issue: my firewall configuration was wrong. The sources: entry of a firewalld zone matches a packet's source address, but 224.0.23.12 is the multicast destination group; the incoming packets carry the KNX router's unicast source address, so they never matched the knx_multicast zone and were evaluated against the interface-bound public zone instead.

    The issue has been resolved by creating a new service:

    firewall-cmd --permanent --new-service=knx
    firewall-cmd --permanent --service=knx --set-description="KNXnet/IP is a part of KNX standard for transmission of KNX telegrams via Ethernet"
    firewall-cmd --permanent --service=knx --set-short=KNX
    firewall-cmd --permanent --service=knx --add-port=3671/udp
    

    To be able to add the newly created service, reload firewalld, then add it:

    firewall-cmd --reload
    firewall-cmd --permanent --add-service=knx
    

    This will create a service file: /etc/firewalld/services/knx.xml with following content:

    <?xml version="1.0" encoding="utf-8"?>
    <service>
      <short>KNX</short>
      <description>KNXnet/IP is a part of KNX standard for transmission of KNX telegrams via Ethernet</description>
      <port port="3671" protocol="udp"/>
    </service>
    

    And the firewall config will look like:

    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: enp1s0
      sources: 
      services: cockpit dhcpv6-client knx ssh
      ports: 
      protocols: 
      masquerade: no
      forward-ports: 
      source-ports: 
      icmp-blocks: 
      rich rules:
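Once the knx service is in place, the runtime configuration can be double-checked with standard firewall-cmd queries, for example:

```shell
firewall-cmd --info-service=knx              # show the service definition
firewall-cmd --zone=public --list-services   # should include "knx"
firewall-cmd --zone=public --query-service=knx && echo "knx is enabled"
```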