[Udpcast] what is the speed bottleneck?

Alain Knaff alain at knaff.lu
Thu Sep 23 09:57:41 CEST 2004


On Thursday 23 September 2004 09:31, Ramon Bastiaans wrote:
> Hi all,
>
> I was wondering if anyone knows or could tell what the bottleneck is for
> udpcast multicast speeds?
>
> We have been using udpcast satisfactorily for a while now in combination
> with systemimager. In our previous situation the maximum speed seemed to
> be around 20 Mbps (MAX_BITRATE). Setting the speed any higher would
> result in dropped slices in the cast and some receivers not getting the
> image in the first cast. This was on a 100 Mbps network.

Wow, that's slow. In comparison, Udpcast can almost saturate a 100
Mbps network (80 Mbps or more is easily achievable). Often, the
bottleneck is not the network but the local hard disk, especially
when operating in compressed mode.
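
To get a rough idea of whether the disk or the decompression is the
limiting factor, it can help to time them locally, outside of udpcast.
A minimal check, assuming a gzip-compressed image (the file name below
is just a placeholder for whatever you actually cast):

  # raw read speed of the image from the sender's disk
  dd if=/path/to/image.gz of=/dev/null bs=1M

  # decompression speed alone, no disk write or network involved
  gzip -dc /path/to/image.gz > /dev/null

If either of these comes out well below the bitrate you are aiming
for, the network is not your bottleneck.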

> We now have a new setup where we image machines over a 1000 Mbps
> network, and the maximum speed seems to be 40 Mbps for udpcast? If we
> cast any faster, the number of machines failing to successfully receive
> the cast dramatically increases (and even in the first tests, at 40 Mbps,
> 3 out of 40 machines were still dropping out).

I have no personal experience with 1000 Mbps networks, but from the
reports I have received it seems that:
 * udpcast cannot saturate the network;
 * but 500 Mbps is achievable;
 * paradoxically, the "optimum" speed is achieved by limiting the bitrate (!),
   i.e. by adding --max-bitrate 600m on the sender (see the example below).
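
For example, on the sending side this could look like the following
(the file name is just a placeholder, and 600m is only a starting
point to be tuned for your network):

  udp-sender --file /path/to/image.img --max-bitrate 600m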

The reason for this seems to be that if data is sent too fast, 1000
Mbps equipment randomly drops frames, rather than using flow control,
which seems to affect speed in a much worse way than if the bitrate is
already limited at the source. Fortunately, no such phenomenon exists
on most 100 Mbps switches.

To find the optimal limit on a 1000 Mbps network, start with 400m, and
then increase the limit (in increments of 100m, for instance) until
the speed you get out of it no longer increases (or even drops).
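
In practice this means doing one full test cast per setting (again,
the image file name is just a placeholder), e.g.:

  udp-sender --file /path/to/image.img --max-bitrate 400m
  udp-sender --file /path/to/image.img --max-bitrate 500m
  udp-sender --file /path/to/image.img --max-bitrate 600m
  ...

and keeping the last value at which the measured throughput still
went up.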

> Now this is on machines with 15,000 rpm SCSI disks. The hd and network
> both should be able to go as fast as 100 MByte/sec in theory. However we
> can't even get a stable cast at a tenth of this speed.

A tenth of the speed? This is very suspicious, as it puts the speed at
80 Mbps, i.e. a speed you could get on a 100 Mbps network. Make sure
that there are no 100 Mbps devices directly or indirectly connected to
the switch (even if they are not actually participating in the cast).
Because if flow control _is_ enabled, the switch might slow down
communication so that even these "slow" ports can follow. Yes, if IGMP
snooping is enabled this should theoretically not be an issue, but
there are a scary number of switches out there with buggy or missing
IGMP support (i.e. even if a menu item "IGMP snooping" is present in
the switch's management interface, it might be a no-op!). To find out
whether this is the case, try disconnecting any equipment from the
switch which is not directly participating in the multicast. If this
is not feasible, try to switch off flow control just on the "slow"
ports (many switches support a per-port flow control setting).
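
On the Linux hosts themselves (sender and receivers), the NIC's flow
control settings can usually be inspected and changed with ethtool,
assuming the driver supports it (the interface name eth0 is just an
example):

  ethtool -a eth0                 # show current flow control (pause) settings
  ethtool -A eth0 rx off tx off   # switch flow control off on this NIC

The switch's own per-port setting can of course only be changed
through its management interface.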

> It's not a very
> big issue since we save a lot of time by imaging all machines at the
> same time, but I was still wondering what exactly the bottleneck for the
> multicast's speed is.

If the issue is not 100 Mbps devices slowing down the transfer, try
adding --max-bitrate 600m to the sender, and it should work.

> Is it the udp protocol, or the multicast technique, or could it still be
> a hardware issue?

If it is not the 100 Mbps device problem described above, it is
probably some kind of hardware issue, but I have not yet been able to
pinpoint exactly what it is. As said, the problem mostly seems to
arise on Gbps networks. On 100 Mbps networks, udpcast can saturate the
network just fine, without any special tweaks.

[the "slow device" issue, OTOH, also happens on 100 Mbps networks,
mostly in connection with network printers that only support 10 Mbps]

> Any opinions on the subject are appreciated, perhaps some of the authors
> of udpcast could give some insight?
>
> Kind regards,
>

Regards,

Alain



