Solaris Timeouts on large file transfers

By cp409sd
I seem to be having a problem with a Solaris 8 machine dropping a connection before a large file transfer (about 10 gig) is complete. Where would I look for timeout settings regarding telnet and ftp sessions? I am not certain that Solaris is to blame, but I am going to take a look at this first. Thanks.

This conversation is currently closed to new comments.

by cpfeiffe In reply to Solaris Timeouts on large ...

I've experienced these problems before too. In my case it wasn't a timeout setting; it was the fact that the server and the switch were both set to auto-negotiate and didn't negotiate to the same speed. After forcing both to 100/FD the problem went away. You might want to look into that. That said, most timeout settings are declared in the network portion of the kernel configuration, and you can view and set them with 'ndd'. There are a lot of 'ndd' parameters (too many to list here), so run 'ndd /dev/tcp \?' to see everything the TCP driver exposes. A Google search for 'ndd' will turn up more detail; the man page isn't all that comprehensive, but it's a good start.
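For what it's worth, a few example commands (untested on your box, and assuming the NIC is an hme interface - substitute qfe, eri, etc. for your hardware). The first group locks the card to 100/FD instead of auto-negotiating, the second lists and reads the TCP tunables:

    # with more than one hme, pick the instance first: ndd -set /dev/hme instance 0
    ndd -set /dev/hme adv_100fdx_cap 1
    ndd -set /dev/hme adv_100hdx_cap 0
    ndd -set /dev/hme adv_10fdx_cap 0
    ndd -set /dev/hme adv_10hdx_cap 0
    ndd -set /dev/hme adv_autoneg_cap 0       # turn auto-negotiation off last

    ndd /dev/tcp \?                           # list every TCP parameter ndd knows about
    ndd -get /dev/tcp tcp_keepalive_interval  # read a single parameter

Keep in mind that ndd settings don't survive a reboot, so anything you settle on has to go into an rc script (or the matching driver variables in /etc/system).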

by Nico Baggus In reply to Solaris Timeouts on large ...

My assumption is that you use FTP as your protocol.

The first thing to look at is your network topology: which parts are involved in the transfer?

1 Your Sun
2 Switch/hub
...
? (Firewall? / NAT router?)

N-1 Remote switch/hub
N Remote system

You need to check your local topology first: is your switch matched to your system, same speed, same full or half duplex? When in doubt, set a fixed speed and duplex. Then check the same thing for the whole path, switch -> switch, etc. (A hub is always 10 or 100 Mbps half duplex; on a switch YMMV.)
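On the Sun itself you can read back what the interface actually negotiated, again with ndd; a rough check along these lines (assuming an hme interface, as above):

    ndd -get /dev/hme link_status   # 1 = link up, 0 = down
    ndd -get /dev/hme link_speed    # 1 = 100 Mbps, 0 = 10 Mbps
    ndd -get /dev/hme link_mode     # 1 = full duplex, 0 = half duplex

If link_mode comes back 0 while the switch port claims full duplex, you have the classic duplex mismatch, and big transfers are exactly what it kills.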

Then the firewall or NAT router. Telnet won't be a problem there as long as there is traffic, but FTP is a different bird: it uses one connection on port 21 for commands and another on port 20 for data delivery. With a large file the port 20 connection stays busy, but on port 21 nothing happens. A NAT device or firewall will drop its knowledge of such a connection once a certain time passes without traffic (30 minutes on Checkpoint, AFAIK); here too YMMV.
The solution might be to use SCP (which runs over a single connection), or another protocol that uses a single connection, or to build a special FTP client that sets SO_KEEPALIVE on the command-link socket to generate artificial traffic on the otherwise idle command link.
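A quick way to test the single-connection idea, with made-up file and host names:

    scp /export/data/bigfile.tar user@remotehost:/var/tmp/

(scp encrypts everything, so expect some CPU overhead on a 10 GB transfer.) If you stay with FTP and the SO_KEEPALIVE route, remember that Solaris only sends keepalive probes every two hours by default, which is longer than most firewall idle timeouts; the interval has to drop below the firewall's timeout to do any good:

    ndd -get /dev/tcp tcp_keepalive_interval         # default 7200000 ms = 2 hours
    ndd -set /dev/tcp tcp_keepalive_interval 900000  # 15 minutes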

Success,
Nico
