[Udpcast] Reliable multifile transfer over slow links, downed hosts
chris.curtis at riptidesoftware.com
Sat Oct 18 00:39:11 CEST 2003
Alain Knaff wrote:
> On Friday 17 October 2003 02:37, Christopher Curtis wrote:
>>scenario is basically this: I need to reliably transfer large files
>>(large as in >2GB) to multiple hosts over a slow WAN.
>>What I would ideally want is something like the following:
>> 'N' clients, each running a daemon (or inetd...)
>> Server Process connects via TCP to each client
>> Server tells client "Listen on Multicast port <foo>"
>> "I am going to send you XXXX bytes"
>> "Save this in a file called <bar>"
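The handshake sketched above could look roughly like this. This is a hypothetical line-oriented control protocol for illustration only; the message verbs (LISTEN/SIZE/FILE) and framing are my own invention, not part of udpcast:

```python
# Hypothetical control messages for the TCP handshake described above.
# The verbs and the line-oriented framing are illustrative assumptions.

def build_control_messages(mcast_port, nbytes, filename):
    """Server side: the three commands sent over the TCP control link."""
    return [
        f"LISTEN {mcast_port}\n",   # "Listen on Multicast port <foo>"
        f"SIZE {nbytes}\n",         # "I am going to send you XXXX bytes"
        f"FILE {filename}\n",       # "Save this in a file called <bar>"
    ]

def parse_control_message(line):
    """Client side: split one command line into (verb, argument)."""
    verb, _, arg = line.strip().partition(" ")
    return verb, arg
```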
> You could set up something like this with ssh starting udp-receivers
> at the various "clients":
> ssh udp-receiver -p 9002 -f bar ... >stdout.log 2>stderr.log
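In other words, the suggestion replaces the TCP control step with ssh: the server starts udp-receiver remotely on each client before running udp-sender. A minimal sketch of building those command lines (the -p and -f flags come from the quoted invocation; the host name and log file paths are assumptions):

```python
import shlex

# Build the ssh command that starts udp-receiver on one client, mirroring
# the quoted invocation: ssh <host> udp-receiver -p 9002 -f bar
# Host names and log paths here are assumptions for illustration.
def receiver_command(host, port, outfile):
    remote = f"udp-receiver -p {port} -f {shlex.quote(outfile)}"
    return ["ssh", host, f"{remote} >stdout.log 2>stderr.log"]
```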
I didn't understand this response at all, so I reread the documentation
and dug around in the sources. Am I correct in stating that udpcast
does not guarantee delivery by requesting resends of missed packets
over the return communication channel? The comment you made about
not binding to the multicast address would suggest further problems
with guaranteed delivery.
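For the kind of guaranteed delivery being asked about, a receiver would have to track which packets it missed and request resends over the return channel (a NACK scheme). A minimal sketch of the receiver-side bookkeeping, under the assumption of simple per-packet sequence numbers:

```python
def missing_packets(received, total):
    """Return the sequence numbers not yet received, i.e. the set a
    receiver would NACK back to the sender over the return channel.
    Assumes packets are numbered 0..total-1."""
    return sorted(set(range(total)) - set(received))
```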
I decided that it was harder to find an answer than to make one myself,
so I wrote a couple of basic Perl scripts to implement the functionality
I described above (except for the listen-on-address/port step). I'll be
performing regular updates of the system, so it has to be automated.
And to clear up my previous message: the scenario is one central server
regularly distributing data either to a single client at each of several
remote locations, or to multiple clients at multiple remote locations.
I'm having (what I hope are) some network problems; the code runs
beautifully on my local machine. However, I'm still looking for
something a bit more off-the-shelf, if anyone has any pointers...