linux networking – Why are “Relayed” multicast packets not received?

I wrote a test program to diagnose multicast routing problems.
The program has several modes of operation:

  • sender (send out a number of multicast packets)
  • receiver (receive a number of multicast packets)
  • requester (send a multicast packet, then time-wait for a response, repeat a number of times)
  • responder (receive a multicast packet, then send a response, repeat a number of times)
  • relay (like responder, but respond to the multicast address instead of the sender’s address)

The “relay” mode was added most recently. All the other modes work as expected, but “relay” does not, even though it does more or less the same as the other modes:
The relay only receives its own responses, while the requester does not receive any response at all.

I compared a combination of (requester, responder) with (requester, relay) on the same host:

Requester 1

~/src/C/multicast> ./mc-tester -l 224.7.7.7/123 -d 224.7.7.7/1234 -m requester -c100 -v1
(1) verbosity = 1
(1) Sending 100 requests on 3...
(1) Sending "224.7.7.7/1234: v04 request #1/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/1 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #1/10 for #1"
(1) Sending "224.7.7.7/1234: v04 request #2/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/2 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #2/10 for #2"
(1) Sending "224.7.7.7/1234: v04 request #3/100" size 39 to 224.7.7.7/1234 on 3
(1) Receiving message #1 on 3...
(1) v04 received #1/3 from 172.20.16.35/60248 (TTL -1): "172.20.16.35/35949 v04: response #3/10 for #3"
^C

(“TTL -1” means the received TTL is unknown.)

Responder 1

~windl/src/C/multicast/mc-tester -v3 -l 224.7.7.7/1234 -m responder -d 224.7.7.7/1234 -c10
(1) verbosity = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) op_mode = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) msg_count = 10
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(3, SO_REUSEADDR, 1)...
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(4, SO_REUSEADDR, 0)...
(2) bind(3, 224.7.7.7/1234)...
(2) recv_socket: getsockname(3) returned 224.7.7.7/1234
(2) setsockopt(3, IP_MULTICAST_LOOP, 0)...
(2) setsockopt(3, IP_RECVTTL, 1)...
(2) setsockopt(3, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(2) setsockopt(4, IP_MULTICAST_TTL, 3)...
(1) Receiving 10 messages on 3...
(1) v04 received #1/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #1/100"
(1) Sending "172.20.16.35/35949 v04: response #1/10 for #1" size 50 to 172.20.16.35/35949 on 4
(1) v04 received #2/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #2/100"
(1) Sending "172.20.16.35/35949 v04: response #2/10 for #2" size 50 to 172.20.16.35/35949 on 4
(1) v04 received #3/10 from 172.20.16.35/35949 (TTL 3): "224.7.7.7/1234: v04 request #3/100"
(1) Sending "172.20.16.35/35949 v04: response #3/10 for #3" size 50 to 172.20.16.35/35949 on 4
^C

So that combination worked as expected.
Now the combination that did not:

Requester 2

/src/C/multicast> ./mc-tester -l 224.7.7.7/123 -d 224.7.7.7/1234 -m requester -c100 -v1
(1) verbosity = 1
(1) Sending 100 requests on 3...
(1) Sending "224.7.7.7/1234: v04 request #1/100" size 39 to 224.7.7.7/1234 on 3
select timed out
(1) Sending "224.7.7.7/1234: v04 request #2/100" size 39 to 224.7.7.7/1234 on 3
select timed out
(1) Sending "224.7.7.7/1234: v04 request #3/100" size 39 to 224.7.7.7/1234 on 3
^C

(“select timed out” refers to receiving, not sending)

Relay

~windl/src/C/multicast/mc-tester -v3 -l 224.7.7.7/1234 -m relay -d 224.7.7.7/1234 -c10
(1) verbosity = 3
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) op_mode = 4
(2) /home/windl/src/C/multicast/mc-tester: 224.7.7.7/1234 -> 224.7.7.7/1234 (16)
(1) msg_count = 10
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(3, SO_REUSEADDR, 0)...
(2) socket(PF_INET, SOCK_DGRAM, 0)...
(2) setsockopt(4, SO_REUSEADDR, 1)...
(2) bind(3, 224.7.7.7/1234)...
(2) recv_socket: getsockname(3) returned 224.7.7.7/1234
(2) setsockopt(3, IP_MULTICAST_LOOP, 0)...
(2) setsockopt(3, IP_RECVTTL, 1)...
(2) setsockopt(3, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(2) setsockopt(4, IP_MULTICAST_TTL, 3)...
(2) setsockopt(4, IP_ADD_MEMBERSHIP, 224.7.7.7/1234)...
(1) Relaying 10 messages on 3...
(1) v04 received #1/10 from 172.20.16.35/33488 (TTL 3): "224.7.7.7/1234: v04 request #1/100"
(1) Sending "224.7.7.7/1234 v04: relay #1/10 for #1" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #2/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #1/10 for #1"
(1) Sending "224.7.7.7/1234 v04: relay #2/10 for #1" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #3/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #2/10 for #1"
(1) Sending "224.7.7.7/1234 v04: relay #3/10 for #2" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #4/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #3/10 for #2"
(1) Sending "224.7.7.7/1234 v04: relay #4/10 for #3" size 43 to 224.7.7.7/1234 on 4

(1) v04 received #9/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #8/10 for #7"
(1) Sending "224.7.7.7/1234 v04: relay #9/10 for #8" size 43 to 224.7.7.7/1234 on 4
(1) v04 received #10/10 from 172.20.16.35/44217 (TTL 3): "224.7.7.7/1234 v04: relay #9/10 for #8"
(1) Sending "224.7.7.7/1234 v04: relay #10/10 for #9" size 44 to 224.7.7.7/1234 on 4
(1) Received 10 messages
(2) setsockopt(3, IP_DROP_MEMBERSHIP)...
(2) setsockopt(4, IP_DROP_MEMBERSHIP)...
(2) close(4)...
(2) close(3)...

So the messages sent by the relay are looping back locally.
The test was done on Linux (SLES12 SP4).

I decided not to lengthen the question with the program’s C source, but on request I can present the relevant parts, or an ltrace/strace of the relay.
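For reference, one detail I noticed while preparing this question: the trace above shows IP_MULTICAST_LOOP being cleared on socket 3 (the receiving socket) only. According to ip(7), IP_MULTICAST_LOOP determines whether packets sent on a socket are looped back locally, so it may need to be cleared on the sending socket 4 instead. A minimal sketch of that setup (simplified; this is not the actual mc-tester source):

/* Sketch: disable multicast loopback on the socket used for SENDING.
 * IP_MULTICAST_LOOP affects packets sent on this socket, not packets
 * received on another one. Group, port and TTL match the tests above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int send_fd = socket(PF_INET, SOCK_DGRAM, 0);
    if (send_fd < 0) { perror("socket"); return 1; }

    unsigned char ttl = 3;   /* same TTL the relay uses */
    unsigned char loop = 0;  /* do NOT loop our own sends back */
    if (setsockopt(send_fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl) < 0)
        perror("IP_MULTICAST_TTL");
    if (setsockopt(send_fd, IPPROTO_IP, IP_MULTICAST_LOOP, &loop, sizeof loop) < 0)
        perror("IP_MULTICAST_LOOP");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(1234);
    inet_pton(AF_INET, "224.7.7.7", &dst.sin_addr);

    const char msg[] = "relay test";
    if (sendto(send_fd, msg, sizeof msg, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");
    close(send_fd);
    return 0;
}

(That would explain the local looping, but not yet why the requester receives nothing, so I left the question as stated.)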

linux – router using smcroute stops routing multicast after some time

I have a problem with multicast routing on a “from scratch” (Debian, not LFS ;)) Linux router/firewall. At home I’ve set up a user net and a server net. Between the two is a router/firewall (APU2E4 board) which, among other things, should route media data (UPnP) from the media server (a container on a Proxmox VE server) to the clients. I use smcroute for that. For months this worked fine, and then from one moment to the next it stopped working (I haven’t changed a bit of the configuration). (I had similar problems two or three times before, but could “resolve” them just by restarting the firewall.)

A while after starting smcroute, tcpdump no longer detects any of the expected multicast packets on the firewall (tcpdump -i enp1s0 host 239.255.255.250 and port 1900). Up to that point the packets do come in on that interface. (I’ve also checked the other interfaces, but no such UPnP packets are to be found there either. That is the expected behaviour, though, since the clients should and can only connect to enp1s0.)

The TTL of the packets in question is above 1 (checked on the client host). I even let iptables on the router increment the TTL if such a packet arrives with a value of 1.

When there seem to be no packets coming in on enp1s0 anymore, ip maddress still lists that interface as a member of the group:

2: enp1s0
link 33:33:00:00:00:01
link 01:00:5e:00:00:01
link 33:33:ff:54:2b:98
link 01:00:5e:7f:ff:fa
inet 239.255.255.250
inet 224.0.0.1
inet6 ff02::1:ff54:2b98 users 2
inet6 ff02::1
inet6 ff01::1

But ip mroute no longer lists the routes (apparently only those with enp1s0 as the incoming interface):

Before:

root@gw-srv:~# ip mroute
(10.0.0.20,239.255.255.250)    Iif: enp2s0  Oifs: enp1s0  State: resolved
(192.168.0.20,239.255.255.250) Iif: enp1s0  Oifs: enp2s0  State: resolved
(192.168.0.2,239.255.255.250)  Iif: enp1s0  Oifs: enp2s0  State: resolved
(192.168.0.1,239.255.255.250)  Iif: enp1s0  Oifs: enp2s0  State: resolved

After:

root@gw-srv:~# ip mroute
(10.0.0.20,239.255.255.250)    Iif: enp2s0  Oifs: enp1s0  State: resolved

However, when I specify a source address like 192.168.0.2 in smcroute.conf, the corresponding route doesn’t “disappear” (10.0.0.20 is the media server on the other subnet). The problem persists regardless.

I also tested with iperf on the router whether multicast generally works between the two hosts:

Router: iperf -s -u -B 239.255.255.250 -i 1

Client: iperf -c 239.255.255.250 -u -T 32 -t 3 -i 1

When smcroute has just been started, the server receives the packets. Some minutes later nothing comes in anymore (until I restart smcroute or manually leave the multicast group). When I use any other multicast address at that point (e.g. 239.255.255.249), the packets do reach the firewall (the iperf server). And when smcroute isn’t running at all, the problem doesn’t occur (with 239.255.255.250).

All in all it seems to me that some minutes after a multicast route has been established, the router can no longer receive multicast traffic to the corresponding address at all.
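One diagnostic I plan to try (my own idea, sketched below; not a known fix): keep a userspace process on the router joined to 239.255.255.250 on enp1s0, so that the kernel keeps answering IGMP membership queries for the group. If packets keep flowing while it runs, that would point to the group membership expiring rather than to smcroute’s routes themselves. (As far as I understand, smcroute’s mgroup directive should have the same effect; the sketch is just to isolate it.)

/* Sketch: hold a kernel-level membership for 239.255.255.250 on enp1s0
 * so IGMP membership reports keep being sent while this runs.
 * Interface name and group are the ones from my setup. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ip_mreqn mreq;
    memset(&mreq, 0, sizeof mreq);
    inet_pton(AF_INET, "239.255.255.250", &mreq.imr_multiaddr);
    mreq.imr_ifindex = if_nametoindex("enp1s0");

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }
    pause(); /* hold the membership until the process is killed */
    return 0;
}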

Can you make sense of this? So far I couldn’t find a hint regarding such a problem.

Many thanks for suggestions.

router – Source of multicast broadcast to WAN port

I have noticed multicast/broadcast packets from 192.168.30.2 hitting my router’s WAN interface. They would have to be sourced from an interface somewhere local to my router, right? Or where would they be coming from on the ISP’s 10.46.xx.xx network? Thanks!

Here’s the firewall log. The first entry’s source IP is within the same /24 as my WAN gateway’s IP. The second entry’s source IP looks like a local private subnet address, but where would it come from? It seems maybe like a handshake over the Remote Replication Agent Connection port (5678).

Thu Jun 25 02:34:12 2020 kern.warn kernel: [ 8257.329619] DROP wan in: IN=eth0.2 OUT= MAC=ff:ff:ff:ff:ff:ff:64:d1:54:84:83:02:08:00:45:00:00:80 SRC=10.46.32.37 DST=255.255.255.255 LEN=128 TOS=0x00 PREC=0x00 TTL=64 ID=0 PROTO=UDP SPT=5678 DPT=5678 LEN=108

Thu Jun 25 02:34:12 2020 kern.warn kernel: [ 8257.440217] DROP wan in: IN=eth0.2 OUT= MAC=ff:ff:ff:ff:ff:ff:d4:ca:6d:0b:9b:05:08:00:45:00:00:8b SRC=192.168.30.2 DST=255.255.255.255 LEN=139 TOS=0x00 PREC=0x00 TTL=64 ID=0 PROTO=UDP SPT=5678 DPT=5678 LEN=119

streaming – Pricing when using CDN multicast

We’re looking into using a CDN for media streaming in m2tp format.
Most of the larger providers offer multicast solutions, which would be ideal.

Whether it’s the most cost-effective solution, however, depends on the pricing structures, which are not usually publicly available.

Is multicast data usually billed the same as unicast data?
(In other words, by the amount of data received rather than sent)

Is this something that is typically negotiated with a CDN?

networking – Ubuntu transmits TTL 0 multicast packets

IP packets with TTL 0 should never leave the host.

But when I start an application which multicasts UDP packets with TTL 0, I see packets with TTL 0 leaving the host for a few seconds before the normal TTL 0 behavior kicks in. This typically happens after a reboot, on the first start of the application.
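For reference, a minimal sender that reproduces my setup (a sketch, not the real application; I am assuming here that the application simply sets IP_MULTICAST_TTL to 0):

/* Minimal reproduction sketch: send UDP multicast with
 * IP_MULTICAST_TTL explicitly set to 0. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char ttl = 0; /* packets should never leave the host */
    if (setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl) < 0)
        perror("IP_MULTICAST_TTL");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(1234);
    inet_pton(AF_INET, "239.0.0.1", &dst.sin_addr);

    char payload[1316] = { 0 }; /* same payload size as in the capture */
    for (int i = 0; i < 1000; i++) {
        if (sendto(fd, payload, sizeof payload, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");
        usleep(2000); /* roughly the spacing seen in the capture */
    }
    close(fd);
    return 0;
}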

I confirmed with tcpdump that packets with TTL 0 leave the host:

05:31:39.048304 IP (tos 0x0, id 14487, offset 0, flags (DF), proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.049594 IP (tos 0x0, id 14488, offset 0, flags (DF), proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.051601 IP (tos 0x0, id 14489, offset 0, flags (DF), proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316
05:31:39.053584 IP (tos 0x0, id 14490, offset 0, flags (DF), proto UDP (17), length 1344)
    192.168.1.200.46968 > 239.0.0.1.1234: UDP, length 1316

As we can see, the ttl field is not displayed, which means TTL 0, as confirmed by the tcpdump man page: https://www.tcpdump.org/manpages/tcpdump.1.html (search for “ttl”; it clearly states: “ttl is the time-to-live; it is not reported if it is zero”).

There are no iptables rules in place.

uname -a: Linux mydevice 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:        18.04
Codename:       bionic

What can be the cause for this behavior, and how can I resolve this?

Creating a QEMU socket network with multicast on a Windows host fails with an unknown error

I am trying to create a simulated VLAN using socket networking, and the only way to connect multiple networks in QEMU using socket networking is to use the mcast option of the socket network backend.

However, when I try to use the following arguments in QEMU to create a multicast socket network:

-device e1000,netdev=sock-0 -netdev socket,id=sock-0,mcast=230.0.0.1:1234

it fails on my Windows host with:

can't bind ip=230.0.0.1 to socket: Unknown error

Is this a QEMU bug, or am I missing a prerequisite before running the QEMU command (for example, waiting for a multicast listener to run, etc.)?

By the way, I am using Windows 10 and I am running a cross-compiled QEMU 4.2.0. I printed the error just before the bind fails (net/socket.c, line 256, in the QEMU source), and WSAGetLastError() returns WSAEADDRNOTAVAIL.
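My current suspicion (an assumption on my part, not verified): Winsock refuses to bind() a UDP socket to a multicast group address and returns WSAEADDRNOTAVAIL, while Linux allows such a bind. A receiver on Windows would instead bind INADDR_ANY with the port and then join the group, roughly like this sketch (link with -lws2_32):

/* Sketch of the bind-then-join pattern I would expect to work on
 * Winsock, as opposed to binding the group address directly. */
#include <stdio.h>
#include <string.h>
#include <winsock2.h>
#include <ws2tcpip.h>

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    struct sockaddr_in local;
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_port = htons(1234);
    local.sin_addr.s_addr = htonl(INADDR_ANY); /* NOT 230.0.0.1 */
    if (bind(s, (struct sockaddr *)&local, sizeof local) == SOCKET_ERROR)
        printf("bind failed: %d\n", WSAGetLastError());

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("230.0.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (const char *)&mreq, sizeof mreq) == SOCKET_ERROR)
        printf("join failed: %d\n", WSAGetLastError());

    closesocket(s);
    WSACleanup();
    return 0;
}

Whether QEMU’s socket backend can be told to do this is part of my question.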

amazon web services – Why don't GCP, AWS, or Azure support IGMP multicast / broadcast?

It is well known that GCP, AWS, and Azure do not support IGMP multicast or broadcast. Some claim that it is because of security concerns, but do not mention what those concerns are.

Is there a reason why these cloud providers don't support such long-standing and well-specified routing paradigms?

nat – Use interface IP address to respond to incoming multicast packet

I have a multicast routing configuration with forwarding on the receiving side, as follows (all Linux):

+----------------+            +----------------+                  +-------------+
| openvpn-server |tun0    tun0| openvpn-client |  foward port 53  | application |
|    10.8.0.1    |============|    10.8.0.2    |------------------| 172.16.3.3  |
+----------------+            +----------------+                  +-------------+
                               joined 239.1.2.3
                               multicast group

In this configuration, the openvpn-server side sends UDP packets to multicast group 239.1.2.3 on port 53. Specifically, the packets are DNS NOTIFY messages, but I don't think that is relevant here. (There are several instances of openvpn-client; that's why multicast is used.)

openvpn-client then forwards the traffic to application, which acknowledges receipt of the packet by responding with another UDP packet.

The response packet is returned to openvpn-client, where Linux rewrites the source IP to the destination of the original packet (assuming that it will be the sender of the response), i.e. 239.1.2.3. This is the problem: because of this source IP address, the packet is not forwarded back to the original sender, and the sender concludes the packet was never delivered. This results in several unnecessary retries and a lot of logging.

The question: is it possible to teach openvpn-client to rewrite the source address of the response to 10.8.0.2 instead? I believe that if this were the case, the response packet would be delivered. Is it possible?

I have observed that when I ping from 10.8.0.1 to 239.1.2.3, the echo reply comes from 10.8.0.2 (and not from 239.1.2.3). (Note that the ping case does not involve the port forwarding.) How can I get the same behavior for my UDP case?
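For context on what I am asking for: at the socket level, Linux does allow a UDP responder to pick the source address of a reply explicitly, via an IP_PKTINFO ancillary message. The sketch below shows that mechanism, with 10.8.0.2 hard-coded as the desired source. Whether it can help here is part of my question, since in my topology the response originates on application and then passes through the port forward, so the rewrite would presumably have to happen on openvpn-client:

/* Sketch: send a UDP reply whose source address is forced to 10.8.0.2
 * via an IP_PKTINFO ancillary message (Linux-specific). */
#define _GNU_SOURCE
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int send_reply(int fd, const void *buf, size_t len,
               const struct sockaddr_in *dst)
{
    struct iovec iov = { (void *)buf, len };
    char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
    memset(cbuf, 0, sizeof cbuf);

    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_name = (void *)dst;
    msg.msg_namelen = sizeof *dst;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = IPPROTO_IP;
    cm->cmsg_type = IP_PKTINFO;
    cm->cmsg_len = CMSG_LEN(sizeof(struct in_pktinfo));

    struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(cm);
    memset(pi, 0, sizeof *pi);
    inet_pton(AF_INET, "10.8.0.2", &pi->ipi_spec_dst); /* forced source */

    return sendmsg(fd, &msg, 0) < 0 ? -1 : 0;
}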

routing – Configure a multicast route on an intermediate hop

I have a host with two Docker containers (both with the NET_ADMIN capability):

  • backend with an interface eth0 (172.16.7.3)
  • openvpn-server with interfaces eth0 (172.16.7.2) and tun0 (10.8.0.1), running an OpenVPN server (tun mode)

There is an OpenVPN client on another machine, openvpn-client, with interface tun0 (10.8.0.2). The VPN works.

Additional route configuration:

  • backend has routes 10.8.0.0/24 via 172.16.7.2 and 224.0.0.0/4 via eth0.
  • openvpn-server has routes 10.8.0.0/24 dev tun0 and 224.0.0.0/4 dev tun0.

backend can successfully ping openvpn-client (routed by openvpn-server): ping 10.8.0.2 works like a charm.

Observations:

When I run ping -t3 225.1.2.3 on openvpn-server, the packets go through the VPN tunnel, and I can see the ICMP packets arriving on openvpn-client (with tcpdump -i tun0 net 224.0.0.0/4 on openvpn-client).

Also, when I run ping -t3 225.1.2.3 on backend, the packets go out through that host's eth0 and come in on openvpn-server's eth0. I can see them on openvpn-server using tcpdump -i eth0 net 224.0.0.0/4.

Problem:

I wish I could run ping -t3 225.1.2.3 on backend and have the pings delivered to openvpn-client, as if 10.8.0.2 had been pinged directly. (The end goal is to multicast UDP packets from backend to all VPN clients.)

My attempt:

smcroute -d -n -j eth0 225.1.2.3 -a eth0 172.16.7.3 225.1.2.3 tun0

I thought this would establish the multicast route, but in reality it doesn't: I can't see any outgoing ICMP packets on openvpn-server's tun0. What's wrong?


I also tried setting up pimd, on each pair of the three hosts as well as on all three. With that, I could get an iperf test (as suggested here) working between backend and openvpn-server, and also between openvpn-server and openvpn-client, but not between backend and openvpn-client. It seems that forwarding/routing across the hop in the middle does not work. (I had set the TTL to 5, so that shouldn't be the problem.)
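Since the end goal is UDP rather than ICMP, a plain UDP test sender along these lines could replace ping -t3 on backend, in case ICMP multicast is treated differently somewhere (a sketch; port 5000 is an arbitrary choice of mine, the TTL of 5 matches my pimd tests):

/* Sketch: UDP multicast test sender with a raised TTL, as a stand-in
 * for ping -t3 225.1.2.3. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char ttl = 5; /* must exceed the number of hops */
    if (setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl) < 0)
        perror("IP_MULTICAST_TTL");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);
    inet_pton(AF_INET, "225.1.2.3", &dst.sin_addr);

    for (int i = 0; i < 10; i++) {
        if (sendto(fd, "test", 4, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");
        sleep(1);
    }
    close(fd);
    return 0;
}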

I am happy to provide more details if necessary (such as ip route list output), but I didn't want to clutter the question unnecessarily.

c – What method do you suggest for reading a multicast stream under Linux?

I wrote a program on Linux in C/C++ which reads multicast packets and tries to figure out, as quickly as possible, whether a specific event has occurred. Latency is the key point here.

In the protocol, the first two bytes represent the message type.
In my current implementation, I read the first two bytes and decide how many bytes to read for the payload depending on the message type. That is, I execute 2 read operations per packet: one read to determine the packet length and another for the payload. So there are 2 I/O operations.

Alternatively, I could do this: read as much as I can, check the first 2 bytes, derive the message length (say N) from them, advance N bytes, and form packet 1 and packet 2. If there are bytes remaining after packets 1 and 2 are formed, read more bytes and process the byte buffer again as above. In this method I need to iterate over the byte buffer.

Which is theoretically faster? I know I need to implement and measure both, but I just wanted to hear your suggestions.
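To make the alternative concrete, here is a sketch of what I mean. msg_len() and handle() are hypothetical placeholders for the protocol-specific parts, and the carry-over logic only matters if reads can span message boundaries; with a plain UDP multicast socket, each read already returns exactly one complete datagram:

/* Sketch of the second approach: one large read, then parse messages
 * out of the buffer in memory, carrying partial tails over. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE 65536

extern size_t msg_len(uint16_t type);               /* payload length for a type */
extern void handle(const uint8_t *msg, size_t len); /* act on one message */

void read_loop(int fd)
{
    uint8_t buf[BUF_SIZE];
    size_t have = 0;

    for (;;) {
        ssize_t n = read(fd, buf + have, sizeof buf - have);
        if (n <= 0)
            break;
        have += (size_t)n;

        size_t off = 0;
        while (have - off >= 2) {
            uint16_t type;
            memcpy(&type, buf + off, 2);  /* 2-byte message type */
            size_t need = 2 + msg_len(type);
            if (have - off < need)        /* partial message: need more data */
                break;
            handle(buf + off, need);
            off += need;
        }
        memmove(buf, buf + off, have - off); /* keep the partial tail */
        have -= off;
    }
}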

Thank you