When running multiple parallel streams, the specified port number
is incremented for each successive stream.
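A minimal sketch of the numbering scheme (names and the default port are illustrative, not the actual stream-setup code):

    int i, num_streams = 4, base_port = 5201;

    for (i = 0; i < num_streams; i++) {
        int stream_port = base_port + i;   /* port, port+1, port+2, ... */
        /* ... create stream i and connect it using stream_port ... */
    }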
Signed-off-by: Kevin Constantine <kevin.constantine@gmail.com>
Avoid a name clash with the system header <locale.h>.
This apparently fixes problems on an ARM build, but this was generally
broken anyway. It's slightly amazing this didn't cause problems before;
perhaps we never used <locale.h> before?
Addresses #203.
UDP tests store a packet sequence number in the packets to detect loss
and ordering issues. This sequence number is a 32-bit signed integer,
which can wrap during very long-running UDP tests. This change adds
an option (defaulting to off) which uses a 64-bit unsigned integer to
store this quantity in the packet. The option is specified on the
client side; the server must support this feature for proper
functioning (older servers will interoperate with newer clients, as
long as --udp-counters-64-bit is not used).
The default might be changed in a future version of iperf3.
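A sketch of the packet layout change (the field layout here is illustrative; see the UDP test code for the real format):

    #include <stdint.h>

    struct udp_payload_32 {     /* default: 32-bit sequence counter */
        uint32_t sec;           /* send time, seconds */
        uint32_t usec;          /* send time, microseconds */
        int32_t  pcount;        /* sequence number; can wrap */
    };

    struct udp_payload_64 {     /* with --udp-counters-64-bit */
        uint32_t sec;
        uint32_t usec;
        uint64_t pcount;        /* effectively never wraps */
    };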
As a part of this change, the client sends its version string to the
server in the parameter block.
Uses a public-domain compatibility shim for 64-bit byte order
conversions. There are probably some additional platforms that need
to be supported, in particular Solaris. We might add some
configure-time checks to only enable this feature on platforms where
we can support the byte-order conversions.
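The shim reduces to something like the following sketch, built only on the standard 32-bit htonl(3) (the function name is illustrative):

    #include <stdint.h>
    #include <arpa/inet.h>

    /* Convert a 64-bit value to network (big-endian) byte order by
     * swapping the two 32-bit halves; on a big-endian host the value
     * is already in network order and passes through unchanged. */
    static uint64_t
    my_htonll(uint64_t x)
    {
        if (htonl(1) == 1)
            return x;           /* big-endian host */
        return ((uint64_t) htonl((uint32_t) x) << 32)
               | htonl((uint32_t) (x >> 32));
    }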
This change is not well-tested.
Towards issue #191.
By design, an iperf3 server only runs one test at a time. New
connections from other clients (during an existing test) are
rejected. The problem is that the server code that rejects a new
connection tries (for some reason) to read the cookie from the client,
even though it's going to reject the connection anyway.
This makes it easy to break an existing test: with a test running, make
a TCP connection to the server's control port (easily done with a
telnet client). The server then hangs in a blocking read, waiting for
a cookie that the new client will never send, and the running test is
essentially frozen.
The fix is to remove the attempted read of the cookie.
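In outline, the rejection path becomes something like this sketch (identifiers are illustrative, not the actual server code):

    int s = accept(listen_fd, NULL, NULL);
    if (s >= 0) {
        /* Before the fix, a blocking read of the client's cookie sat
         * here; a silent peer (e.g. a telnet session) stalled it
         * forever.  Now we just say "busy" and hang up. */
        signed char state = ACCESS_DENIED;
        (void) write(s, &state, sizeof state);
        close(s);
    }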
Fixes #202.
We support using k, m, and g as suffixes on input values. In most
cases these are 2-based suffixes (i.e. K == 1024) because they denote
sizes of objects. For rates, however, we need 10-based suffixes
(i.e. K == 1000).
We do this by implementing (using copy-and-paste) a unit_atof_rate()
subroutine that parses strings like unit_atof() does, but with
10-based suffixes instead.
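A sketch of the new subroutine (simplified from the real units code):

    #include <ctype.h>
    #include <stdlib.h>

    /* Like unit_atof(), but rates take 10-based suffixes:
     * K = 1e3, M = 1e6, G = 1e9. */
    static double
    unit_atof_rate(const char *s)
    {
        char *end;
        double v = strtod(s, &end);

        switch (tolower((unsigned char) *end)) {
        case 'g': return v * 1e9;
        case 'm': return v * 1e6;
        case 'k': return v * 1e3;
        default:  return v;
        }
    }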
Fixes #173.
When we do TCP tests and specify the socket buffer size, MSS, or
TCP no delay option, the iperf3 server destroys the socket it was
using to listen for the control connection and opens up a new
listening socket for the test's data connections. This is (I think)
to make sure that the data connections all have the correct TCP
parameters.
When we re-create the listening socket, we also need to go through
the binding logic again (with all of the address-family selection,
etc. goop). The bug fixes that were a part of issue #193 need to
be ported to this code as well.
This problem only affects TCP tests because, for the other protocols,
the listening socket for data cannot be the same listening socket as
the one used for the control connection.
While here, add some comments so anybody trying to understand this
code will have an easier time.
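The code in question is shaped roughly like this sketch (simplified; netannounce() is the internal helper that repeats the same family-selection and binding logic used at startup):

    /* The control listener can't carry the requested TCP parameters,
     * so tear it down and announce a fresh listening socket for the
     * test's data connections. */
    if (test->settings->socket_bufsize || test->settings->mss ||
        test->no_delay) {
        close(test->listener);
        test->listener = netannounce(test->settings->domain, Ptcp,
                                     test->bind_address,
                                     test->server_port);
        if (test->listener < 0)
            return -1;
    }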
Based on patch by: @i2aaron
setsockopt(2) returns an error if we pass 0 for the IPV6_V6ONLY
option (which we do if no address family is specified when we bind to
the wildcard address, say by invoking "iperf3 -s" with no other
options). This is because OpenBSD explicitly does not support
IPv4-mapped addresses; even though the IPV6_V6ONLY socket option
exists, it only works with a non-zero argument.
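One way to sidestep the failure, in sketch form (variable names are illustrative): only issue the setsockopt(2) call when we'd pass a non-zero value.

    /* Only request v6-only behavior when the user forced AF_INET6;
     * OpenBSD hard-wires v6-only and rejects setting the option to 0. */
    if (domain == AF_INET6) {
        int on = 1;
        if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY,
                       (char *) &on, sizeof on) < 0)
            /* handle error */ ;
    }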
Fixes #196.
Should fix #177, in which compilation failed on older Solaris systems
that didn't have it. This takes a different approach from the patch
suggested in that issue.
Weakly regression-tested on other platforms (to test, run the server
bound to the wildcard address with -6, -4, or neither, and see whether
a client can connect with each of -6, -4, or neither).
On CentOS 6 and MacOS, if no address family was specified, we'd
get back an IPv4 address from getaddrinfo(3), with the result that
we couldn't accept IPv6 connections in the default server configuration.
There was an earlier attempt at fixing this problem that caused
Issue #193. This change is a follow-up fix to that issue.
While here, put lots of comments around the fix so we remember
why we're doing these shenanigans.
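The heart of the workaround looks roughly like this sketch (simplified from the announce path; domain, local, and portstr are illustrative):

    struct addrinfo hints, *res;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = domain;       /* AF_INET, AF_INET6, or AF_UNSPEC */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;
    /* Wildcard bind with no explicit family: force AF_INET6 so we get
     * a v6 socket that also accepts IPv4 connections via IPv4-mapped
     * addresses.  Otherwise getaddrinfo(3) on some systems (e.g.
     * CentOS 6, MacOS) returns an IPv4 address first, and IPv6
     * clients can't connect. */
    if (domain == AF_UNSPEC && local == NULL)
        hints.ai_family = AF_INET6;
    if (getaddrinfo(local, portstr, &hints, &res) != 0)
        /* handle error */ ;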
If specifying -B with an IPv4 literal address, or with an FQDN that
resolved to an IPv4 address, without having explicitly specified an
address family with -4, we failed to set up the socket correctly:
we assumed we were binding to an IPv6 address and instead (after some
error spewage) wound up binding to the wildcard address.
The fix in this commit has multiple parts. First, if the address
family hasn't been explicitly specified, don't force AF_INET6 in the
hints to getaddrinfo(3); AF_UNSPEC should generate the correct
behavior according to RFC 6724.
Second, iperf_reset_test() should not discard members that were
passed in from command-line parameters, because doing so alters the
behavior of iperf3 when it tries to re-create the listening socket.
In the failure described in this issue (and possibly others as well),
the value of -B got discarded, so subsequent attempts to set up the
listening socket just bound to the wildcard address.
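In sketch form, the rule the fix enforces (field names are illustrative):

    void
    iperf_reset_test(struct iperf_test *test)
    {
        test->num_streams = 0;      /* per-test state: clear */
        test->bytes_sent = 0;       /* per-test state: clear */
        /* Command-line parameters: leave alone.  Before the fix,
         * something equivalent to the next line discarded -B, so a
         * re-created listener silently bound to the wildcard address:
         *
         *     test->bind_address = NULL;
         */
    }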
While here, fix on-line help related to the -B option to match
reality.
Note that we're not completely in compliance with RFC 6724, which
states that we should actually try all of the addresses returned by
getaddrinfo(3), rather than just the first one.
Fixes Issue #193.
The various "connected" structures were just dumped into the "start"
structure. This caused problems when there were multiple connections
(i.e. multiple parallel streams), because the "connected" structures
would overwrite one another. Instead, make these structures members
of a "connected" array.
This is technically an incompatible API change, but the prior behavior
was unusable.
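With cJSON (which iperf3 uses for its JSON output), the change amounts to something like this sketch (json_start, streams, and num_streams are illustrative):

    cJSON *connected = cJSON_CreateArray();
    int i;

    cJSON_AddItemToObject(json_start, "connected", connected);
    for (i = 0; i < num_streams; i++) {
        /* One entry per stream, instead of overwriting one object. */
        cJSON *c = cJSON_CreateObject();
        cJSON_AddNumberToObject(c, "socket", streams[i]->socket);
        cJSON_AddItemToArray(connected, c);
    }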
Discovered and fix suggested by: @i2aaron
An open(2) call had two arguments instead of the required three.
While here, replace a hard-coded mode in a different open(2) call
with symbolic constants for readability.
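The change is essentially this (path and flags are illustrative):

    #include <fcntl.h>
    #include <sys/stat.h>

    /* Broken: O_CREAT requires a third (mode) argument; without it
     * the new file's permission bits are garbage. */
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC);

    /* Fixed, with symbolic constants instead of a hard-coded 0644: */
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC,
              S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);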
Fixes #183.
Submitted by @ssahani.
Allow the client to retrieve (most of) the output emitted by the server.
If the server was invoked with the --json flag, the output will be in
JSON, otherwise it will be in the human-readable format.
If the client was invoked with the --json flag, the output will be
contained within the JSON output structure, otherwise it will be
appended (in whatever format) to the bottom of the human-readable
output.
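A sketch of the client-side handling (the JSON key and variable names are illustrative, not necessarily the actual ones):

    if (client_wants_json) {
        /* Server output rides inside the client's JSON structure. */
        cJSON_AddStringToObject(json_top, "server_output_text",
                                server_output_buf);
    } else {
        /* Otherwise, append it verbatim below the client's output. */
        printf("\nServer output:\n%s\n", server_output_buf);
    }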
Because of the sequencing of the output generation and display, the
server-side output includes only the starting output, interval
statistics and summaries, but not the overall summaries. (The overall
summaries were already displayed in the client's output.)
Towards issue #160.
Only do -Wall by default if on GCC (or something that looks like
GCC, such as clang/llvm).
Turn on -Werror so we can get some better error-checking, but
we also need -Wno-deprecated-declarations at least for MacOS,
because daemon(3) is deprecated starting with MacOS 10.5.
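In configure.ac terms the logic is roughly this (a sketch; $GCC is set by AC_PROG_CC, and the case on $host_os assumes AC_CANONICAL_HOST):

    if test "x$GCC" = "xyes"; then
        CFLAGS="$CFLAGS -Wall -Werror"
        case $host_os in
            darwin*)
                # daemon(3) is deprecated starting with MacOS 10.5
                CFLAGS="$CFLAGS -Wno-deprecated-declarations"
                ;;
        esac
    fi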
Fixes #174 (I think).
Submitted by: @marksolaris