Add a timeout to the UDP socket. Without it, the client would block
indefinitely if creating the control connection succeeds but UDP
packets are dropped by a firewall.
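(Illustrative sketch only, not the committed code: one way to get this
behavior is a receive timeout on the UDP socket, so a read that never
sees a packet eventually fails instead of hanging forever. The function
name and timeout value below are made up.)

    #include <sys/socket.h>
    #include <sys/time.h>

    /* Sketch: give the UDP socket a receive timeout so a blocked
     * recv() returns with an error instead of waiting forever. */
    static int set_udp_rcv_timeout(int sock, int seconds)
    {
        struct timeval tv;

        tv.tv_sec = seconds;
        tv.tv_usec = 0;
        return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    }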
Allow setting the socket buffer size.
This appears to be necessary on some long, high-bandwidth paths
to get sane results, either by reducing packet loss or by somehow
allowing the sending host of a test to go faster.
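(As an illustration, not the code actually committed: enlarging the
send and receive socket buffers looks roughly like the sketch below;
the function name is made up, and the kernel may clamp the requested
value to a system-wide maximum.)

    #include <sys/socket.h>

    /* Sketch: request larger socket buffers for a test socket. */
    static int set_socket_bufsize(int sock, int bytes)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) < 0)
            return -1;
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
            return -1;
        return 0;
    }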
Fixes #219.
The RTT value is available on the sender side, expressed in
microseconds, and is included in the JSON output.
In the JSON output we also output the maximum observed RTT
per-stream. Note that since the observation interval is many times
the RTT, it's not clear how good this value would be at capturing the
largest computed RTT value over the lifetime of each stream.
While here, also determine the maximum observed snd_cwnd value over
the lifetime of each stream.
This all works pretty well on Linux, but on FreeBSD (which should
theoretically be supported) we don't do a good job of supporting the
tcp_info structure. We need to make this code a lot more portable,
rather than just assuming the world of platforms is "Linux"
vs. "everything else". Fixing this requires some rearchitecting of
the way that we retrieve, compute, and print statistics.
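(For reference, a Linux-only sketch of where these numbers come from;
field names are those of the Linux struct tcp_info, and the helper
itself is illustrative rather than the committed code.)

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Sketch (Linux): read the kernel's tcp_info for a connected TCP
     * socket.  tcpi_rtt is the smoothed RTT in microseconds;
     * tcpi_snd_cwnd is the congestion window in segments. */
    static void print_tcp_stats(int sock)
    {
        struct tcp_info info;
        socklen_t len = sizeof(info);

        if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
            printf("rtt=%u us, snd_cwnd=%u segments\n",
                   info.tcpi_rtt, info.tcpi_snd_cwnd);
    }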
Part of a fix for #215.
For UDP over IPv4, this is the maximum IPv4 packet size (65535) minus
the size of the IPv4 and UDP headers, arriving at 65507.
In theory, for a system implementing IPv6 jumbogram support, there is
no maximum packet size for UDP. In practice we've observed on CentOS 5
a limit of 65535 - 8 = 65527, dictated by the length field in the UDP
header (its maximum value is 65535, but it must count both payload and
header bytes, so the 8 bytes of UDP header are subtracted).
We take the most conservative approach and use the 65507 value
for UDP / IPv4.
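(The IPv4 arithmetic, written out as a sketch; the macro name below is
illustrative, not necessarily the identifier used in the source.)

    /* Largest UDP payload that fits in a single IPv4 datagram:
     * 65535 (max IPv4 total length) - 20 (IPv4 header) - 8 (UDP header)
     * = 65507 */
    #define MAX_UDP_PAYLOAD_V4 (65535 - 20 - 8)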
This is (I believe) the last part of issue #212.
We need non-blocking sockets to permit a UDP-receiving iperf3 server
to listen on its control channel.
The fix for non-blocking sockets makes sure that if we do a read on a
non-blocking socket but there's no data, the UDP processing code does
*not* try to do the normal protocol packet processing on the
non-existent packet.
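(The shape of that check, sketched rather than quoted from the source;
the helper and its return convention are made up.)

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Sketch: read one packet from a non-blocking socket.  Returns 1 if
     * a packet was read, 0 if there was simply no data yet, -1 on a real
     * error.  Only a return of 1 should trigger the normal per-packet
     * protocol processing. */
    static int try_read_packet(int sock, void *buf, size_t buflen, ssize_t *nread)
    {
        ssize_t n = recv(sock, buf, buflen, 0);

        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;
            return -1;
        }
        *nread = n;
        return 1;
    }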
This is part of an ongoing fix for issue #212 but also should have been
a part of the fix for issue #125.
We need to do a couple of checks to make sure we're not dereferencing
NULL pointers (yay C); this can happen if the server gets into a weird
state (see the test cases for reproducing issue #212).
While here, also fix up a couple of related output glitches where we
could emit invalid JSON (NaN values, such as those produced by a
division by zero, are not valid JSON).
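(For example, a guard of roughly this form keeps a zero denominator
from ever producing NaN in the JSON output; the function and parameter
names are illustrative.)

    /* Sketch: compute a loss percentage without risking 0/0, which
     * would produce NaN -- a value that cannot be represented in JSON. */
    static double loss_percent(long lost, long total)
    {
        if (total <= 0)
            return 0.0;
        return 100.0 * (double) lost / (double) total;
    }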
Part of a fix in progress for #212.
Also if we try to compile on an unsupported platform, emit some code
in portable_endian.h that at least has a chance of compiling, rather
than erroring out right away.
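(A "last resort" fallback of that kind might look like the sketch
below; the name my_htobe64 is made up and this is not the shim's actual
contents.  Building the big-endian byte sequence explicitly is correct
on both little- and big-endian hosts, just slower than a native macro.)

    #include <stdint.h>

    /* Sketch: portable 64-bit host-to-big-endian conversion. */
    static uint64_t my_htobe64(uint64_t x)
    {
        uint64_t r;
        unsigned char *p = (unsigned char *) &r;
        int i;

        for (i = 0; i < 8; i++)
            p[i] = (unsigned char) (x >> (56 - 8 * i));
        return r;
    }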
For #191.
Fixed a compilation error in src/cjson.c observed with Visual Studio
2013. This problem didn't cause breakage on any other platform, but the
change should have been present anyway.
(cherry picked from commit dd2968f21e641945026db4bbdf02b3c13f833d74)
Signed-off-by: Bruce A. Mah <bmah@es.net>
When running multiple parallel streams, the specified port number
is incremented for each successive stream.
Signed-off-by: Kevin Constantine <kevin.constantine@gmail.com>
Fix a conflict with the system header <locale.h>.
This apparently fixes problems on an ARM build, but this was generally
broken anyway. It's slightly amazing this didn't cause problems before;
perhaps we never used <locale.h> before?
Addresses #203.
UDP tests store a packet sequence number in the packets to detect loss
and ordering issues. This sequence number is a 32-bit signed integer,
which can wrap during very long-running UDP tests. This change adds
an option (defaulting to off) which uses a 64-bit unsigned integer to
store this quantity in the packet. The option is specified on the
client side; the server must support this feature for proper
functioning (older servers will interoperate with newer clients, as
long as --udp-counters-64-bit is not used).
The default might be changed in a future version of iperf3.
As a part of this change, the client sends its version string to the
server in the parameter block.
Uses a public-domain compatibility shim for 64-bit byte order
conversions. There are probably some additional platforms that need
to be supported, in particular Solaris. We might add some
configure-time checks to only enable this feature on platforms where
we can support the byte-order conversions.
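(A sketch of how a 64-bit counter might be placed in the packet using
such a conversion; the helper name and payload layout are illustrative,
not the wire format actually used.)

    #include <stdint.h>
    #include <string.h>
    #include <endian.h>   /* htobe64(); elsewhere, a shim such as portable_endian.h */

    /* Sketch: store a 64-bit packet sequence number in network byte
     * order at the start of the UDP payload. */
    static void put_seq64(unsigned char *payload, uint64_t seq)
    {
        uint64_t wire = htobe64(seq);

        memcpy(payload, &wire, sizeof(wire));
    }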
This change is not well-tested.
Towards issue #191.
By design, an iperf3 server only runs one test at a time. New
connections from other clients (during an existing test) are
rejected. A problem is that the server code that rejects the new
connection tries (for some reason) to read the cookie from the client,
even though it's going to reject the connection anyway.
A way to break an existing test: with a test running, make a TCP
connection to the server's control port (this can easily be done with
a telnet client). The server will hang in a blocking read call, waiting
for a cookie that will never arrive, while the running test is
essentially frozen.
The fix is to remove the attempted read of the cookie.
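(The resulting accept path, roughly sketched; the "access denied"
marker value and the function name are made up, not the server's actual
state constants.)

    #include <unistd.h>

    /* Sketch: turn away an extra control connection while a test is
     * already running.  Note there is deliberately no read of the
     * client's cookie here. */
    static void reject_new_client(int ctrl_sock)
    {
        signed char busy = -1;    /* hypothetical "access denied" marker */

        (void) write(ctrl_sock, &busy, sizeof(busy));
        close(ctrl_sock);
    }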
Fixes #202.