* Remove unused hstrerror(), bad nanosleep() message in configure.ac (#503)
* Remove dead code involving h_errno and hstrerror()
h_errno was formerly set as a side effect of a failed
gethostbyname(3) call, but this function has been
deprecated.
This change is preparatory to removing the known issues from the
README file. In place of duplicated text, we'll put a pointer to a
single source of truth for this information.
This fixes a problem observed on FreeBSD and macOS where the MTU on
the loopback interface is larger than the default socket buffer size.
We adjusted the socket buffer size upwards to match the UDP payload
size, but that's apparently not enough and we ended up dropping packets.
This is bad. Add a 1KB fudge factor, which seems to avoid this problem.
Affects UDP tests only, not TCP or SCTP.
Part of #496.
(cherry picked from commit d76198944d210e8a575747d3ddbee41a886a10c9)
Signed-off-by: Bruce A. Mah <bmah@es.net>
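As a rough illustration of the change described above (the names and
the exact fudge-factor handling here are assumptions, not the actual
iperf3 code):

```c
#include <sys/socket.h>

/* Hypothetical constant; the commit adds roughly 1 KB of headroom. */
#define UDP_SNDBUF_SLACK 1024

/* Sketch: size the UDP send buffer to the payload size plus some
 * slack, so a full-sized datagram still fits even on loopback
 * interfaces whose MTU exceeds the default buffer size. */
static int size_udp_sndbuf(int sockfd, int blksize)
{
    int bufsize = blksize + UDP_SNDBUF_SLACK;
    return setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                      &bufsize, sizeof(bufsize));
}
```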
* Dynamically determine an appropriate default UDP send size.
We use the TCP MSS for the control connection as the default UDP
sending length, if the --length parameter is not specified for a
UDP test. This computation replaces the former hard-coded 8K
default, which was way too large for non-jumbo-frame Ethernet
networks.
The concept for this solution was adapted from nuttcp. The
iperf3 implementation is pretty easy since we already were
getting the MSS for the control connection anyway (although we
needed to get it slightly earlier in the setup process to be
useful).
Towards issue #496.
While here, s/int/socklen_t/ in one place to fix a compile warning,
and bump a few copyright dates.
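As an illustration of the approach (not the exact iperf3 code; the
fallback value below is an assumption), the MSS of the control
connection can be read with TCP_MAXSEG and used as the UDP send size
when --length isn't given:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: derive a default UDP send size from the control
 * connection's TCP MSS, falling back to a conservative value
 * if the query fails. Names here are illustrative only. */
static int default_udp_send_size(int ctrl_sck)
{
    int mss = 0;
    socklen_t len = sizeof(mss);

    if (getsockopt(ctrl_sck, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) < 0 ||
        mss <= 0)
        return 1460;    /* assumed fallback, not necessarily iperf3's */
    return mss;
}
```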
* Warn if doing a UDP test and the socket buffer isn't big enough.
This is surprisingly an issue on FreeBSD and macOS, where the MTU
over the loopback interface is actually larger than the default
UDP socket buffer size. In these cases, doing a UDP test over the
loopback interface (with the new UDP defaults) will fail unless a
smaller --length or a larger --window size is set explicitly.
Linux has larger UDP socket buffers by default (much larger than the
largest possible MTU), but even in the case that the socket buffers
are too small to hold an MTU-sized send, the kernel seems to do the
send correctly anyway.
Still working towards a good solution for issue #496.
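A minimal sketch of that check, with hypothetical names and warning
text:

```c
#include <stdio.h>
#include <sys/socket.h>

/* Sketch: warn when the UDP send buffer cannot hold one payload.
 * 'blksize' is the per-write UDP payload size. */
static void warn_if_sndbuf_too_small(int sockfd, int blksize)
{
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);

    if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0 &&
        sndbuf < blksize)
        fprintf(stderr,
                "warning: UDP block size %d exceeds socket buffer size %d\n",
                blksize, sndbuf);
}
```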
* Further refinement on UDP buffer size settings.
If the default buffer size on a UDP test can't hold a packet,
then increase the buffer size to be large enough to hold one
packet payload. (If the buffer size was explicitly set, but too
small to hold a packet payload, then warn but don't change the
buffer size.)
Minor code refactoring to...factor out some common code into
a new iperf_udp_buffercheck() function.
Still working towards issue #496.
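The function name iperf_udp_buffercheck() comes from the commit
message; the body below is only an illustrative reconstruction of the
growth path described above, not the actual iperf3 source:

```c
#include <sys/socket.h>

/* Sketch: if the (default) send buffer can't hold one payload, grow
 * it to exactly one packet payload; when the user set --window
 * explicitly, the caller warns instead of resizing. */
static int iperf_udp_buffercheck(int sockfd, int blksize)
{
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);

    if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
        return -1;
    if (sndbuf < blksize &&
        setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                   &blksize, sizeof(blksize)) < 0)
        return -1;
    return 0;
}
```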
* First try to fix pacing issues. Code compiles, lightly run-tested.
Make --bandwidth only control application-level pacing, reflecting
behavior of iperf 3.1.2 and earlier.
Add a new --fq-rate that controls only FQ-based per-socket pacing.
A given test can use application-level pacing, FQ pacing, both,
or neither.
Deprecate the --no-fq-socket-pacing option; specifying this generates
a warning and is equivalent to --fq-rate=0.
Towards issue #467 and related to issue #325.
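On Linux, FQ-based per-socket pacing is the SO_MAX_PACING_RATE socket
option (effective with the fq qdisc); a hedged sketch of how --fq-rate
might map onto it, with illustrative names and unit handling:

```c
#include <stdint.h>
#include <sys/socket.h>

/* Sketch: apply an FQ pacing rate to one test-stream socket.
 * 'fq_rate_bps' stands in for the --fq-rate value in bits per
 * second; SO_MAX_PACING_RATE takes bytes per second. */
static int set_fq_pacing(int sockfd, uint64_t fq_rate_bps)
{
#if defined(SO_MAX_PACING_RATE)
    unsigned int rate = fq_rate_bps / 8;    /* bytes per second */
    return setsockopt(sockfd, SOL_SOCKET, SO_MAX_PACING_RATE,
                      &rate, sizeof(rate));
#else
    (void) sockfd; (void) fq_rate_bps;
    return -1;                              /* pacing not supported here */
#endif
}
```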
* Move --fq-rate in the help text to be just below --bandwidth, tweak wording.
* Sigh. One more tweak on help text.
Some day I probably need to review and rewrite the whole thing.
Still working towards #467.
This reverts commit f1e62c8d48.
Right idea, but it turns out to be a pretty high-impact change.
Need to rethink this, maybe with a more intelligent implementation
that checks the interface (or path?) MTU.
The default UDP payload size is now 1452 bytes, so that a full-sized
datagram fits in a standard 1500-byte Ethernet MTU with default
parameters.
A UDP payload of 1452 bytes, plus an 8-byte UDP header, plus a 40-byte
IPv6 header, results in a 1500-byte IP packet. The IPv4 header is
smaller, at 20 bytes.
It was previously possible to have multiple levels of pacing
(application-level and FQ-based) applied at once, which resulted in
very nicely smoothed traffic, but wasn't really the original design.
Do pacing correctly in iperf_check_throttle() and remove a hack in
iperf_send() where we were explicitly checking for the type of
pacing, but didn't really need to.
It turns out that with UDP and FQ-only pacing, iperf3 sends packets as
fast as it can, throwing them on the floor. This isn't really
desirable, and probably not what was wanted in a test anyway, so if
we're not doing TCP tests, force the use of application-level pacing.
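A simplified sketch of the application-level pacing check (modeled
loosely on iperf_check_throttle(); the names and the green-light
return value here are illustrative, not the actual code):

```c
#include <stdint.h>
#include <sys/time.h>

/* Sketch: a stream may keep sending only while its average rate so
 * far is at or below the target bandwidth. */
static int pacing_green_light(uint64_t bytes_sent,
                              const struct timeval *start,
                              const struct timeval *now,
                              uint64_t target_bps)
{
    double seconds = (now->tv_sec - start->tv_sec) +
                     (now->tv_usec - start->tv_usec) / 1000000.0;
    if (seconds <= 0.0)
        return 1;
    double bps = bytes_sent * 8.0 / seconds;
    return bps <= (double) target_bps;
}
```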
On Linux it's possible to set the socket buffer to one size but
(correctly it seems) get back some larger size up to 2x what you
asked for (see tcp(7)).
While here, make related debugging output more useful.
Fixes (again) #356.
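A small self-contained demonstration of that Linux behavior (socket(7)
documents the doubling), which is why a strict equality check against
the requested size misfires:

```c
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int req = 65536, got = 0;
    socklen_t len = sizeof(got);

    if (s < 0)
        return 1;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &req, sizeof(req));
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &got, &len);

    /* On Linux, 'got' is typically 2 * req (bookkeeping overhead),
     * so comparing it for equality with the requested size is wrong. */
    printf("requested %d, kernel reports %d\n", req, got);
    return 0;
}
```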
First, realize that we've been setting the congestion control (CC)
algorithm unnecessarily; rather than doing it for all listening or
connecting sockets, do it just for those sockets that are being used
for TCP test streams.
Record the CC algorithm in use (this handles the case where a CC algorithm
hasn't been specified), and have the client and server exchange this
information.
Report the CC algorithms that were used (note that it's theoretically
possible for the two ends of the test to be using different algorithms,
if no algorithm was explicitly specified and the two end hosts have
different defaults, or if one side allows setting the CC algorithm and
the other doesn't).
Committing to a branch to make it easier to test this code on a
wider variety of systems.
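Where it's supported, recording the congestion control algorithm
actually in use is a getsockopt() call with TCP_CONGESTION on the
stream socket; a hedged sketch (the surrounding exchange and reporting
plumbing is omitted):

```c
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: read back the CC algorithm a stream socket is actually
 * using, whether or not one was requested explicitly. The 16-byte
 * buffer mirrors Linux's TCP_CA_NAME_MAX. */
static int get_cc_algo(int sockfd, char *name, size_t namelen)
{
#if defined(TCP_CONGESTION)
    char buf[16] = "";
    socklen_t len = sizeof(buf);

    if (getsockopt(sockfd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) < 0)
        return -1;
    strncpy(name, buf, namelen - 1);
    name[namelen - 1] = '\0';
    return 0;
#else
    (void) sockfd; (void) name; (void) namelen;
    return -1;
#endif
}
```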
This enhancement automatically builds and tests the changes that are
made. It improves the testing of the component, because the build can
be tested on every pull request before it is committed.
To enable this feature, log in at https://travis-ci.org/ using a
GitHub account and enable Travis for the iperf repository.
Many other GitHub projects use this feature; hopefully it will help
iperf too.
When a test is in progress and the client completely disappears, both
the control socket and the stream sockets are left around forever.
This patch, modified from another patch submitted by mkall, adds
closing of the data stream sockets.
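A bare-bones sketch of the cleanup idea (iperf3 actually keeps its
streams in a list; the array here is just for illustration):

```c
#include <unistd.h>

/* Sketch: when the control connection is deemed dead, close every
 * data-stream socket too, so the server does not leak descriptors. */
static void close_all_streams(int *fds, int nfds)
{
    for (int i = 0; i < nfds; i++) {
        if (fds[i] >= 0) {
            close(fds[i]);
            fds[i] = -1;
        }
    }
}
```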