commit 2dc03630a736be2ae9f64823aabb5776e7074c2a
Merge: 61e325c 0da552c
Author: Bruce A. Mah <bmah@es.net>
Date: Thu May 26 09:40:58 2016 -0700
Merge branch 'master' into issue-325
commit 61e325c5d0a4e7a9823221ce507db0f478fc98b5
Merge: 227992f ccbcee6
Author: Bruce A. Mah <bmah@es.net>
Date: Thu May 26 11:09:54 2016 -0400
Merge branch 'issue-325' of github.com:esnet/iperf into issue-325
Conflicts:
src/iperf3.1
commit 227992f366e7f4895b6762011576ba22a42a752e
Author: Bruce A. Mah <bmah@es.net>
Date: Thu May 26 11:07:01 2016 -0400
Don't set SO_MAX_PACING_RATE if the rate is 0. Also tweak some help text.
Towards #325, in response to feedback from @bltierney.
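A minimal sketch of the guard this describes, assuming a helper and rate variable that are illustrative rather than the actual iperf3 code:

    #include <sys/socket.h>
    #include <stdio.h>

    #if defined(SO_MAX_PACING_RATE)
    /* Only ask the kernel to pace when a nonzero rate was requested;
     * rate is in bytes per second. */
    static void
    set_fq_pacing(int sockfd, unsigned int rate)
    {
        if (rate > 0 &&
            setsockopt(sockfd, SOL_SOCKET, SO_MAX_PACING_RATE,
                       &rate, sizeof(rate)) < 0)
            perror("setsockopt(SO_MAX_PACING_RATE)");
    }
    #endif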
commit ccbcee6366d50ec632fc00eb11fde8a886f8febe
Author: Bruce A. Mah <bmah@es.net>
Date: Tue May 24 09:19:41 2016 -0700
Fix manpage formatting for consistency.
commit 90ac5a9ce09bd746ca5f943a8226ab864da3ebf8
Author: Bruce A. Mah <bmah@es.net>
Date: Tue May 24 12:14:16 2016 -0400
Add some documentation for fair-queueing per-socket pacing.
For #325.
commit 5571059870f7aefefb574816de70b6406848888f
Author: Bruce A. Mah <bmah@es.net>
Date: Tue May 24 11:55:44 2016 -0400
Change the fair-queueing socket pacing logic in response to feedback.
By default, on platforms where per-socket pacing is available, it
will be used. If not available, iperf3 will fall back to application-
level pacing.
The --no-fq-socket-pacing option can be used to forcibly disable
fair-queueing per-socket pacing. (The earlier --socket-pacing option
has been removed.)
Tested on CentOS 7; more testing on other platforms is required to
be sure it didn't break the old application-level pacing behavior.
For #325.
commit 3e3f506fe9f375a5771c9e3ddfe8677c1a7146e7
Merge: 50a379e 3b23112
Author: Bruce A. Mah <bmah@es.net>
Date: Tue May 24 09:54:39 2016 -0400
Merge branch 'master' into issue-325
commit 50a379eddfa89d1313d2aeeb62a6fbc82f00ea17
Author: Bruce A. Mah <bmah@es.net>
Date: Sat Apr 16 02:55:42 2016 -0400
Regen.
commit 200d3fe3917b3d298bdf52a0bde32c47cf2727b0
Author: Bruce A. Mah <bmah@es.net>
Date: Sat Apr 16 02:41:32 2016 -0400
Checkpoint for initial work on #325 to add socket pacing.
This works only on Linux and depends on the availability of
the SO_MAX_PACING_RATE socket option and the fq queue discipline.
Use --socket-pacing to use SO_MAX_PACING_RATE instead of the
default iperf3 user-level rate limiting; in either case, the
--bandwidth parameter controls the desired rate.
Lightly tested with both --tcp and --udp, normal and --reverse.
Real testing requires analysis of packet timestamps between
multiple hosts.
In file iperf_util.c:
'va_start' is called at line 327, but 'va_end' is not called before returning from 'iperf_json_printf()' at lines 352 and 355.
va_end performs cleanup for the va_list object (argp) initialized by the call to va_start. If va_end is not called before a function that calls va_start returns, the behavior is undefined.
Applied Fix: added va_end before each return from the function.
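A sketch of the pattern applied; the function body is illustrative, not the actual iperf_json_printf() code:

    #include <stdarg.h>
    #include <stddef.h>

    char *
    example_json_printf(const char *format, ...)
    {
        char *result = NULL;
        va_list argp;

        va_start(argp, format);
        if (format == NULL) {
            va_end(argp);   /* cleanup before the early return */
            return NULL;
        }
        /* ... build result from format and argp ... */
        va_end(argp);       /* cleanup on the normal path too */
        return result;
    }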
DEREF_AFTER_NULL: pointer ‘test’ at line 77 is passed as an argument to iperf_delete_pidfile(), in which it is dereferenced at iperf_api.c:2832.
Pointer ‘test’ can be NULL, and dereferencing a NULL pointer causes a segfault.
Applied Fix: pointer ‘test’ is checked for NULL before being passed to iperf_delete_pidfile().
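The shape of the fix, sketched with the surrounding context simplified:

    /* Guard the call site so a NULL test pointer is never
     * dereferenced inside iperf_delete_pidfile(). */
    if (test != NULL)
        iperf_delete_pidfile(test);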
* Add fix for #412
This prevents negative loss counters with UDP when omit is used
* Track the original start time and bytes omitted. This allows the
throttle function to work after the omit timer fires. This is
a fix for issue #419.
* Remove changes to switch the bandwidth to received instead of sent bandwidth
* Roll back bandwidth sent vs received changes
This was caused by a combination of the iperf3 build somehow using
the system queue.h on FreeBSD 11 (possibly only on this platform)
and TAILQ_END not being defined in the system queue.h.
Expanding the TAILQ_END macro to NULL seems to solve the problem.
Submitted by: @rbgarga
This was causing some headaches for code trying to parse JSON.
Also revise a prior partial fix that hard-coded 100% loss for the
case of zero packets.
Partially fixes #278.
Merge candidate for 3.0 and 3.1 bugfix branches.
Exit with non-zero exit code if server mode has too many errors.
Properly detect and complain about non-numeric arguments to -A, -L, and -S.
Implement range checks for argument to -S.
Fixes #316.
Fix segfault in signal handler on the server if a signal arrives at the "wrong" time.
The change causes the signal handler to use a stack context whose lifetime should be valid the entire time the signal handler is active.
Fixes#257, #258.
The case where we should have been binding the client sockets to
ephemeral ports at a specific address for parallel tests was broken.
Fixes #239
Submitted by: @jfitzgibbon
Solaris implements an (older?) version of the API for SCTP_MAXSEG,
which takes an integer argument rather than a struct sctp_assoc_value.
We need to test for that and handle it appropriately. There are some
signs it doesn't even work correctly if we do this, so quietly ignore
errors that happen if the OS complains it's unsupported.
Also, Solaris doesn't support SCTP_DISABLE_FRAGMENTS even though it
defines the preprocessor symbol for this. Rather than aborting when
we try to unsuccessfully unset this option, just ignore the error.
Lightly tested with SCTP over IPv6 on localhost.
Contains an alternate implementation of previously-submitted patches
to set the maximum segment size and no-delay options.
As a result of this change, SCTP functionality on Linux will generally
require the libsctp library (on CentOS and similar distributions this
is provided by the lksctp-tools RPM).
Part of #131.
Submitted by: Bruce Simpson <bs48@st-andrews.ac.uk>
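A hedged sketch of the SCTP_MAXSEG fallback described above; the real code selects the variant at configure time, and the names and error handling here are illustrative:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>

    /* Newer API takes a struct sctp_assoc_value; older stacks (e.g. some
     * Solaris releases) take a plain int.  Quietly ignore "unsupported". */
    static void
    set_sctp_maxseg(int s, int mss)
    {
        struct sctp_assoc_value av;
        av.assoc_id = 0;
        av.assoc_value = mss;
        if (setsockopt(s, IPPROTO_SCTP, SCTP_MAXSEG, &av, sizeof(av)) == 0)
            return;
        if (setsockopt(s, IPPROTO_SCTP, SCTP_MAXSEG, &mss, sizeof(mss)) < 0 &&
            errno != EOPNOTSUPP)
            perror("setsockopt(SCTP_MAXSEG)");
    }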
The problem is that the new byte-ordering macros adopted on master
don't support CentOS 5 because they assumed that any Linux system had
endian(3) support. CentOS 6 (and presumably newer) do, but CentOS 5
doesn't.
So instead we only do glibc endian(3) support if we're on a system
with glibc 2.9 or higher (which is when this functionality was
introduced).
For any other platform that we don't detect (which now includes older
glibc such as CentOS 5), bring back our homebrewed htonll and ntohll
implementation from iperf 3.0.x.
Fixes #224.
Primarily useful for bwctl integration, this is enabled with the -1
and/or --one-off flags.
Fixes#230, based on a patch by @i2aaron.
Signed-off-by: Bruce A. Mah <bmah@es.net>
Add a timeout to the UDP socket. Without it, the client would block
indefinitely if creating a control connection succeeds but UDP packets
are dropped by a firewall.
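A minimal sketch of such a receive timeout; the 30-second value and socket variable are illustrative:

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <stdio.h>

    struct timeval tv;
    tv.tv_sec = 30;
    tv.tv_usec = 0;
    if (setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
        perror("setsockopt(SO_RCVTIMEO)");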
size.
This appears to be necessary on some long, high-bandwidth paths
to get sane results, either by reducing packet loss or by somehow
allowing the sending host of a test to go faster.
Fixes#219.
This value is available on the sender side, expressed in
microseconds. It's available in the JSON output.
In the JSON output we also output the maximum observed RTT
per-stream. Note that since the observation interval is many times
the RTT, it's not clear how good this value would be at capturing the
largest computed RTT value over the lifetime of each stream.
While here, also determine the maximum observed snd_cwnd value over
the lifetime of each stream.
This all works pretty well on Linux, but on FreeBSD (which should
theoretically be supported) we don't do a good job of supporting the
tcp_info structure. We need to make this code a lot more portable,
rather than just assuming the world of platforms is "Linux"
vs. "everything else". Fixing this requires some rearchitecting of
the way that we retrieve, compute, and print statistics.
Part of a fix for #215.
For UDP over IPv4, this is the maximum IPv4 packet size (65535) minus
the size of the IPv4 and UDP headers, arriving at 65507.
In theory for a system implementing IPv6 jumbogram support, there is
no maximum packet size for UDP. In practice we've observed with
CentOS 5 a limitation of 65535 - 8, which is dictated by the size
field in the UDP header (it has a maximum value of 65535, but needs
to count both payload and header bytes, thus subtracting off the 8
bytes for the UDP header).
We take the most conservative approach and use the 65507 value
for UDP / IPv4.
This is (I believe) the last part of issue #212.
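The arithmetic, spelled out (macro names illustrative; header sizes per RFC 791 and RFC 768):

    #define IPV4_HDR_SIZE  20
    #define UDP_HDR_SIZE    8
    #define MAX_UDP_BLOCKSIZE (65535 - IPV4_HDR_SIZE - UDP_HDR_SIZE)  /* 65507 */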
We need this to permit a UDP receiving iperf3 server to listen on its
control channel.
The fix for non-blocking sockets basically makes sure that if we do a
read on a non-blocking sockets, but there's no data, the UDP processing
code does *not* try to do the normal protocol packet processing on the
non-existent packet.
This is part of an ongoing fix for issue #212 but also should have been
a part of the fix for issue #125.
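A sketch of the non-blocking read handling described, simplified:

    #include <errno.h>
    #include <unistd.h>

    char buf[65536];
    ssize_t n = read(s, buf, sizeof(buf));
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* No datagram waiting: skip protocol processing entirely. */
    } else if (n > 0) {
        /* Process a real packet. */
    }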
This can happen if the server gets into a weird state (see the test
cases for reproducing issue #212). We need to do a couple of checks
to make sure we're not dereferencing NULL pointers (yay C).
While here, also fix up a couple of related output glitches, where
in this case we can emit some invalid JSON (NaN values, such as what
you get if there's a division by zero, are not valid JSON).
Part of a fix in progress for #212.
Also if we try to compile on an unsupported platform, emit some code
in portable_endian.h that at least has a chance of compiling, rather
than erroring out right away.
For #191.
Fixed compilation error in src/cjson.c observed in Visual Studio 2013.
This problem didn't cause breakage on any other platform, but this change should have been present anyway.
(cherry picked from commit dd2968f21e641945026db4bbdf02b3c13f833d74)
Signed-off-by: Bruce A. Mah <bmah@es.net>
When running multiple parallel streams, the specified port number
is incremented for each successive stream.
Signed-off-by: Kevin Constantine <kevin.constantine@gmail.com>
with system header <locale.h>.
This apparently fixes problems on an ARM build, but this was generally
broken anyway. It's slightly amazing this didn't cause problems before;
perhaps we never used <locale.h> before?
Addresses #203.
UDP tests store a packet sequence number in the packets to detect loss
and ordering issues. This sequence number is a 32-bit signed integer,
which can wrap during very long-running UDP tests. This change adds
an option (defaulting to off) which uses a 64-bit unsigned integer to
store this quantity in the packet. The option is specified on the
client side; the server must support this feature for proper
functioning (older servers will interoperate with newer clients, as
long as --udp-counters-64-bit is not used).
The default might be changed in a future version of iperf3.
As a part of this change, the client sends its version string to the
server in the parameter block.
Uses a public-domain compatibility shim for 64-bit byte order
conversions. There are probably some additional platforms that need
to be supported, in particular Solaris. We might add some
configure-time checks to only enable this feature on platforms where
we can support the byte-order conversions.
This change is not well-tested.
Towards issue #191.
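A hedged sketch of such a 64-bit byte-order shim; this version assumes the GCC/Clang predefined endianness macros, whereas the shim actually used dispatches on platform headers:

    #include <stdint.h>
    #include <arpa/inet.h>

    static uint64_t
    my_htonll(uint64_t x)
    {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        /* Swap the two 32-bit halves and byte-swap each with htonl(). */
        return ((uint64_t)htonl((uint32_t)(x & 0xffffffffULL)) << 32)
               | htonl((uint32_t)(x >> 32));
    #else
        return x;
    #endif
    }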
By design, an iperf3 server only runs one test at a time. New
connections from other clients (during an existing test) are
rejected. A problem is that the server code that rejects the test
tries (for some reason) to read the cookie from the client, even
though it's going to reject the connection anyway.
A way to break an existing test is: With a test running, make a TCP
connection to the server's control port (this can easily be done with
a telnet client). The server will hang in a blocking read call trying
to read the cookie from a non-existent client, while the test is
essentially frozen.
The fix is to remove the attempted read of the cookie.
Fixes #202.
We support using k, m, and g as suffixes on input values. In most
cases these are 2-based suffixes (i.e. K == 1024) because they are
sizes of objects. In the case of rates, we need to use 10-based
suffixes (i.e. K == 1000).
We do this by implementing (using copy-and-paste) a unit_atof_rate()
subroutine that parses strings similarly to unit_atof but using
10-based suffixes instead.
Fixes #173.
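A sketch of a 10-based parser of this shape (not the actual unit.c code):

    #include <ctype.h>
    #include <stdlib.h>

    double
    rate_atof(const char *s)
    {
        char *end;
        double v = strtod(s, &end);
        switch (tolower((unsigned char)*end)) {
            case 'g': v *= 1e9; break;   /* 10-based, not 2^30 */
            case 'm': v *= 1e6; break;
            case 'k': v *= 1e3; break;
        }
        return v;
    }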
When we do TCP tests and specify the socket buffer size, MSS, or
TCP no delay option, the iperf3 server destroys the socket it was
using to listen for the control connection and opens up a new
listening socket for the test's data connections. This is (I think)
to make sure that the data connections all have the correct TCP
parameters.
When we re-create the listening socket, we also need to go through
the binding logic again (with all of the address family selection,
etc. goop). The bug fixes that were a part of issue #193 need to
be ported to this code as well.
This problem only affects TCP tests, because for other protocols,
the listening socket for data cannot be the same listening sock as
for the control connection.
While here, add some comments so anybody trying to understand this
code will have an easier time.
Based on patch by: @i2aaron
setsockopt(3) returns an error if passing 0 to this option (which
we do if no address family is specified when we bind to the wildcard
address, say by invoking "iperf3 -s" with no other options). This
is because OpenBSD explicitly does not support IPv4-mapped addresses,
so even though the IPV6_V6ONLY socket option exists, it only works
with a non-zero argument.
Fixes #196.
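A minimal sketch of that guard ('domain' and 's' are illustrative names):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <stdio.h>

    /* Only make the call with a meaningful nonzero value; OpenBSD
     * rejects a zero argument for IPV6_V6ONLY. */
    if (domain == AF_INET6) {
        int v6only = 1;
        if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY,
                       &v6only, sizeof(v6only)) < 0)
            perror("setsockopt(IPV6_V6ONLY)");
    }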
Should fix #177, in which compilation failed on older Solaris systems
that didn't have it. This is a different approach than a patch
suggested in that issue.
Weakly regression-tested on other platforms (test this by specifying
-6, -4, or neither to the server when binding to the wildcard address,
and seeing if a client can connect with various of -6, -4, or neither).
On CentOS 6 and MacOS, if no address family was specified, we'd
get back an IPv4 address from getaddrinfo(3), with the result that
we couldn't accept IPv6 connections in the default server configuration.
There was an earlier attempt at fixing this problem that caused
Issue #193. This change is a follow-up fix to that issue.
While here, put lots of comments around the fix so we remember
why we're doing these shenanigans.
If specifying -B with an IPv4 literal address or with an FQDN that
resolved to an IPv4 address, but we had not explicitly specified an
address family with -4, we failed to set up the socket correctly
because we assumed binding to an IPv6 address, and instead (after some
error spewage) wound up binding to wildcard address.
The fix in this commit has multiple parts: First, if the address
family hasn't been explicitly specified, don't force AF_INET6 in the
hints to getaddrinfo(3). AF_UNSPEC should generate the correct
(according to RFC 6724) behavior.
Second, iperf_reset_test() should not discard members that were passed
from command-line parameters, because that alters the behavior of the
iperf3 when it tries to recreate the listening socket. In the failure
situation described in this issue (and possibly other as well), the
value of -B gets discarded, so on subsequent attempts to set up the
listening socket it just binds to the wildcard address.
While here, fix on-line help related to the -B option to match
reality.
Note that we're not completely in compliance with RFC 6724, which
states that we should actually try all of the addresses returned by
getaddrinfo(3), rather than just the first one.
Fixes Issue #193.
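A sketch of the first part of the fix (variable names illustrative):

    #include <netdb.h>
    #include <string.h>

    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* AF_INET/AF_INET6 only if -4/-6 given */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(bind_address, NULL, &hints, &res) != 0) {
        /* handle error */
    }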
The various "connected" structures were just dumped into the "start"
structure. This caused problems if there were multiple connections
(i.e. multiple parallel streams), because the "connected" structures
would overwrite themselves. Instead, make these structures members
of a "connected" array.
This is technically an incompatible API change, but the prior behavior
was unusable.
Discovered and fix suggested by: @i2aaron
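The corrected shape, illustratively (field names abridged and values invented; real entries also carry ports):

    "start": {
        "connected": [
            { "socket": 5, "local_host": "10.0.0.1", "remote_host": "10.0.0.2" },
            { "socket": 6, "local_host": "10.0.0.1", "remote_host": "10.0.0.2" }
        ]
    }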
An open(2) call had two arguments instead of the required three.
While here, replace a hard-coded mode in a different open(2) call
with symbolic constants for readability.
Fixes #183.
Submitted by @ssahani.
retrieve (most of) the output emitted by the server.
If the server was invoked with the --json flag, the output will be in
JSON, otherwise it will be in the human-readable format.
If the client was invoked with the --json flag, the output will be
contained within the JSON output structure, otherwise it will be
appended (in whatever format) to the bottom of the human-readable
output.
Because of the sequencing of the output generation and display, the
server-side output includes only the starting output, interval
statistics and summaries, but not the overall summaries. (The overall
summaries were already displayed in the client's output.)
Towards issue #160.
Only do -Wall by default if on GCC (or something that looks like
GCC, such as clang/llvm).
Turn on -Werror so we can get some better error-checking, but
we also need -Wno-deprecated-declarations at least for MacOS,
because daemon(3) is deprecated starting with MacOS 10.5.
Fixes#174 (I think).
Submitted by: @marksolaris
This definitely affected FreeBSD, which breaks POSIX.1 by not
setting CLOCKS_PER_SEC to 1000000 (see clock(3)). At this point
I can't tell if any other platforms were affected by this.
transfer.
Note that the sender can either be the client or the server depending
on whether --reverse is used.
This fixes some problems with UDP transfers getting severely confused
and (wrongly) complaining about packets arriving out of order.
Related to issue #125.
algorithm selection) option to work on FreeBSD for free, starting with
FreeBSD 9. Update various documentation places to note this. One
specifies the congestion algorithm in the same way as on Linux, although
the names of the algorithms are (at least in the general case) different.
"sysctl net.inet.tcp.cc" on FreeBSD provides a list of available
algorithms, which are implemented as loadable kernel modules.
Rename the --linux-congestion long option to --congestion (retaining
the old option as a deprecated synonym).
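A sketch of the underlying call, which is the same on both platforms (algorithm name illustrative):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <stdio.h>

    #if defined(TCP_CONGESTION)
    const char *algo = "cubic";   /* e.g. "newreno" on FreeBSD */
    if (setsockopt(s, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
        perror("setsockopt(TCP_CONGESTION)");
    #endif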
not including it.
To fix this required us to change config.h to iperf_config.h (to
avoid potential filename collisions with this generic name). Then
iperf.h could include this.
Adjust the existing header file inclusions to track this, and also
canonicalize their inclusion to be at the top of *.c files.
As with several other recent commits, don't check explicitly for an
OS platform, but rather detect the various API bits that are used
to implement CPU affinity setting.
We check at configure-time to see if IPV6_FLOWLABEL_MGR is defined
in <linux/in6.h>; if it is, we set a HAVE_FLOWLABEL CPP symbol to
turn on conditional compilation of the support for this feature.
Rather than checking for anything Linux-specific at configure-time,
see if TCP_CONGESTION is defined in <netinet/tcp.h> and if so define
a CPP variable HAVE_TCP_CONGESTION, which we then use to enable
conditional compilation of the code for this feature.
These macros were never used anywhere in iperf3 anyways, and
conflicted with macro definitions that were in FreeBSD's system
headers.
Bump copyright date and add a comment to inclusion guard while here.
Rather than doing checks for platforms that we believe support SCTP,
instead look for an indication (notably the presence of <netinet/sctp.h>)
that it's supported. This makes the conditionals for SCTP more obvious.
In addition, it opens up the possibility that SCTP might work on some
new OS that's not FreeBSD or Linux.
This change may force some additional build-time requirements on Linux,
such as lksctp-tools-devel on CentOS / Fedora or libsctp-dev on
Ubuntu.
Committing this first cut for review and to enable testing on multiple
platforms. So far this works correctly on Linux (SCTP support) and
MacOS (no SCTP support).
Squashed commit of the following:
commit 23ef0d047fb5396df671be9245f7872153fc299c
Author: Bruce A. Mah <bmah@es.net>
Date: Mon Apr 7 13:35:29 2014 -0700
Add a few API calls to the client-side example program so we can
exercise recently-added JSON-related functionality.
commit 5f8301e8d0380133d533da9b2e39ca4ac522e1c3
Author: Bruce A. Mah <bmah@es.net>
Date: Mon Apr 7 13:16:39 2014 -0700
Revert part of earlier change.
We still want to save the JSON for libiperf consumers that might want it,
but preserve the prior behavior of writing that JSON to stdout. This
maintains (roughly) the behavior of older libiperf, in which libiperf
consumers (such as the iperf3 executable) do not need to explicitly print
the JSON if that's all they're doing with it.
commit 173dcdb05867af00103205bfe39d1b71e18689e9
Author: Bruce A. Mah <bmah@es.net>
Date: Tue Mar 25 13:55:45 2014 -0700
Update manpage for newly-added library calls.
Bump document date while here.
Part of Issue #147.
commit 51a275de9463febc440d41cee9d971fcd381e01c
Author: Bruce A. Mah <bmah@es.net>
Date: Tue Mar 25 13:30:09 2014 -0700
Allow consumers of libiperf3 to get the JSON output for a just-completed test.
This changes the behavior of iperf_json_finish() so that it no longer
outputs JSON output, but saves the rendered output in a NUL-terminated
string buffer. After calling iperf_run_server() or iperf_run_client(),
the client application should check iperf_get_test_json_output() to see
if it returns a non-NULL pointer. If so, there is JSON data available
for it to print or otherwise consume. The buffer is automatically
deallocated when the containing iperf_test structure is deallocated
with iperf_free_test().
Also adds a new API call iperf_get_test_outfile() to find the output
FILE* structure.
Modifies the iperf3 application to use the new API. Users of iperf3
will not notice any functional change.
No effect in "normal" output mode (non-JSON).
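A hedged usage sketch of the new calls from a libiperf consumer (setup abridged; error handling omitted):

    #include <stdio.h>
    #include "iperf_api.h"

    struct iperf_test *t = iperf_new_test();
    iperf_defaults(t);
    iperf_set_test_role(t, 'c');
    iperf_set_test_server_hostname(t, "example.org");
    iperf_set_test_json_output(t, 1);
    if (iperf_run_client(t) == 0) {
        char *json = iperf_get_test_json_output(t);
        if (json != NULL)      /* non-NULL only when JSON output was enabled */
            fputs(json, stdout);  /* print or otherwise consume */
    }
    iperf_free_test(t);        /* also frees the JSON buffer */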
iperf_parse_arguments(). Basically we need to initialize the
output stream in the iperf_test structure regardless of whether
iperf_parse_arguments() gets called; some programs (in particular
the programs in the examples/ directory and bwctl) don't do this
(and indeed should not need to).
This problem was introduced in the solution for Issue #119; the fix
needs to be merged to any codeline where fixes for Issue #119 go.
This works for both client and server side (in the case of the server,
either for daemon or non-daemon mode).
Consistifies a few places that were using printf instead of iprintf.
Fixes Issue 119.
Use --disable-static or --disable-shared to build only one flavor
of libraries.
Tested on MacOS, FreeBSD, and CentOS 6 Linux.
Resolves #146.
Originally submitted by: @i2aaron
This can happen if the user forces a particular output format that leads
to many digits (6 or more) being printed. The new buffer size is probably
larger than it needs to be, but better safe than sorry.
Fixes Issue 142.
This makes SCTP with default parameters work on CentOS 6; formerly
it was just using the TCP default (128KB) and failing with
a "message too long" error. It might be possible to fix this with
some manipulation of other default values, so that TCP and SCTP
can use the same default message size, but I haven't figured out
what this would be.
This ties up one loose end from Issue 131.
Note this option only has a long option flag; we're running out of
letters for short options.
Based heavily on a patch submitted in Issue 131 (SCTP support for
iperf); I added support for FreeBSD and did some other packaging and
documentation improvements.
We probably shouldn't tie SCTP support to looking specifically for
Linux or FreeBSD; we should probably leave support enabled all the time if
possible, possibly with some configure-time checks.
We were computing and printing this in JSON output mode anyway; this
change just exposes this quantity in a human-friendly manner (better
than the first attempt at this) when doing normal output.
Resolves Issue 99 (Additional TCP_INFO items).
The bug and solution are very similar to Issue 126 (fixed in
d7e0c1445c0a). Basically a setsockopt(IPV6_V6ONLY) call had a bogus
level argument, but we never checked the return value so we never
noticed this.
As with the prior issue, the fix is to unbreak the setsockopt() call,
and add some error checking to detect future breakage.
This bug was on a codepath that only got called if non-default values
were set for the socket buffer size, the MSS, or the NODELAY parameter.
It might have affected some FreeBSD tests, but really only got noticed
when debugging some other code that was (probably) cut-and-pasted
from this code.
mapped_v4_to_regular_v4() committed the sin of doing strcpy(3) on
overlapping buffers. This caused an abort on MacOS X 10.9. Fix this
to use memmove(3) instead, which handles overlapping buffers correctly.
Issue: 135 (Crash on OS X when using IP address)
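A sketch of the overlapping-copy fix, assuming (from its name) that the function strips a mapped-address prefix in place:

    #include <string.h>

    /* memmove(3) is defined for overlapping source and destination. */
    const char *prefix = "::ffff:";
    size_t plen = strlen(prefix);
    if (strncmp(str, prefix, plen) == 0)
        memmove(str, str + plen, strlen(str + plen) + 1);  /* include NUL */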
Apparently older kernels don't support TCP_CONGESTION, so we can't
just test for defined(linux) to know if we can use this sockopt or not.
This change unbreaks the build on (notably) CentOS 5.
Mostly this change consists of adding FreeBSD-specific code to handle
this feature. The concepts and system calls are very similar to what's
already done for Linux. One difference is that on FreeBSD, the CPU
affinity mask is saved before -A processing and restored afterwards.
This causes a slight change to the function signatures for
iperf_setaffinity() and iperf_clearaffinity() (these functions
however are not documented as a part of the libiperf3 API).
Slightly improve some of the documentation for the -A command line
option, to hopefully stave off some of the questions about this
feature.
Mostly based on a submitted patch.
Issue: 128 (better error message for CPU affinity failure)
Submitted by: Susant Sahani <ssahani@redhat.com>
When the client process gets interrupted, both the client and server
dump out accumulated interval statistics, as well as a partial set of
summary statistics (basically each side dumps what it has, but without
the exchange of information that usually happens at the end of a
normal run).
If the server process gets interrupted, the server dumps out its
accumulated interval and summary statistics as above. The client does
this as well in the -R case, but exits with a "Broken pipe" in the non
-R case (this behavior was present all along; it was not introduced in
this change). More investigation will be needed to understand the
client behavior.
Bump copyright dates in a few places.
Issue: 132 (signal handler for API calls)
Discussed with: aaron@internet2.edu
In -R mode, the test consists of the server sending to the client
until the client tells it to stop by setting the test state to
TEST_END via the control socket. However once the client changes the
test state, it stopped reading the incoming test data from the server.
In many (but not all) scenarios this could result in the server
filling up its send window (thus blocking on writes) before the
TEST_END message arrived from the client. At this point the server
was hanging waiting for the client to drain its data connection(s),
and the client was waiting for the server to send a state change
message for EXCHANGE_RESULTS.
Bump copyright date while here.
This fix handles at least part of...
Issue: 129 (iperf3 hangs with -R and -Z flags)
Also remove a couple of places where we were saving and restoring errno
where we didn't need to.
Submitted by: Susant Sahani <ssahani@redhat.com>
Issue: 121 (TCP congestion control algorithm support for client)
(It remains in the JSON output.)
We have some issues we need to resolve about the formatting and
representation of this (and other future values that we might be
adding).
Rip out the tcpi_sacked support...it doesn't really keep a cumulative
total of SACKs received like we thought it did (it's instantaneous
state).
Convert tcpi_snd_cwnd (originally expressed in segments) to octets before
printing.
Re-work internal APIs for functions to get stuff out of tcp_info...rather
than doing a getsockopt() call per value, grab the values out of a
saved copy of the tcp_info structure (which we were getting in almost
every case anyway).
Issue: 99 (Additional TCP_INFO items)
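The conversion, sketched against the Linux struct tcp_info fields:

    #include <stdint.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    struct tcp_info ti;   /* filled by a single getsockopt(TCP_INFO) call */
    /* tcpi_snd_cwnd is in segments; multiply by the MSS to get octets. */
    uint64_t cwnd_bytes = (uint64_t)ti.tcpi_snd_cwnd * ti.tcpi_snd_mss;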
(Formerly it was just accepting IPv6.)
The problem here was that FreeBSD by default wasn't allowing IPv4
mapped addresses on IPv6 sockets, whereas other platforms
(specifically Linux and OS X) both do permit this. We tried to turn
on mapped addresses via a setsockopt(IPV6_V6ONLY) call, but this call
was broken because the level argument was incorrect. We didn't know
about this because we never checked the return value.
Fix this by providing the correct argument to setsockopt(). Add some
error checking to this and one other setsockopt() call, so we at least
don't fail silently in similar situations.
Issue: 126 (FreeBSD: iperf3 -s only accepts IPv6
connections)
This functionality uses some setsockopt(2) calls that unfortunately
don't seem to have an analog on other platforms.
Slightly tweaked version of a patch that was...
Submitted by: ssahani@redhat.com
Issue: 40 (Option for setting Flow Label field in IPv6
header)
This lets us check timers every tenth second instead of every second,
so we can switch out of the more expensive select() mode even with
the default reporting interval of a second.
Possible related work still under consideration:
o Use syslog in daemon mode for output that would normally go to
stdout / stderr.
o Write a PID file.
This is basically the gist of Issue 105.
Also bumped package id from 3.0a4 to 3.0a5.
This changeset consists of a one-line edit to configure.ac, plus
about fifty kilolines of diffs to a bunch of other config files
generated by bootstrap.sh.
having it there may cause the select to return immediately every
time. Which is bad, m'kay?
Also, changed the coding idiom used to keep track of the maximum fd
in the fd sets, to be clearer.
The error numbers sent for SERVER_ERROR state were declared
as ints, and therefore could be 32 or 64 bits depending on
architecture. I changed them to be explicitly 32 bits.
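The shape of the change, sketched; the Nwrite call is shown as a comment since its context is abridged:

    #include <stdint.h>
    #include <arpa/inet.h>

    /* Explicitly 32 bits on the wire, regardless of the host's int size. */
    int32_t err = htonl(i_errno);
    /* Nwrite(test->ctrl_sck, (char *)&err, sizeof(err), Ptcp); */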
This should be the last of these; I've checked every network
read/write call and they look ok.
And bumped the version to 3.0-RC5.
A couple more sizeof issues found and fixed. One of them is
actually another protocol change, but due to a fortuitous accident
it should remain compatible with older versions.
Detailed explanation: When a client attempts to connect to a server that
is already busy, the server is supposed to return ACCESS_DENIED as a
state value. It was doing so, but was writing it as an int, even though
state values are supposed to be signed chars. The client read the value
correctly as a signed char, getting one byte and throwing away the rest.
So why did this ever work? Because ACCESS_DENIED is the value -1, and
any byte of an int -1 equals a signed char -1. If ACCESS_DENIED had been
any other value, this would have been an obvious bug and would have long
since been fixed. As is, it stuck around working by accident until now.
test->state is an explicitly signed char, so the two routines that
manipulate it must use explicitly signed chars too.
One could argue that the two negative state values should have been
positive like the rest, but changing them now would be a protocol change.
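A sketch of the send side under that constraint (the receive side mirrors it with Nread):

    /* test->state is an explicitly signed char; move exactly one byte. */
    signed char state = test->state;
    if (Nwrite(test->ctrl_sck, (char *)&state, sizeof(state), Ptcp) < 0) {
        /* handle error */
    }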
On further reflection, last night's seemingly trivial change to
the JSON sending/receiving routines is actually a protocol change,
on some machines, and therefore merits a version number change.
and iperf_run_server, so that API users get it too. Also, call
iperf_errexit with an appropriate message, which in -J mode dumps
out any accumulated JSON data.
The user-visible parts are commented out or return a "not implemented
yet" error message. The other parts are harmless.
We'll come back to this once we figure out how exactly one sets
the Flow Label.
one is the new -Z flag.
- Fixed potential bug in net.c's Nread and Nwrite routines. If they
had ever needed to loop they would have read/written the wrong address,
due to incorrect pointer arithmetic - sizeof(void) is not 1. The fix
was to change the type of the buffer pointer to char*, which also
meant adding casts to some callers (see the sketch after this list).
- Better checking for conflicts between command-line flags - now they
should no longer be order-dependent.
- Added a new -Z / --zerocopy flag, to use a "zero copy" method of
sending data, such as sendfile(2) instead of the usual write(2).
- Renumbered error enum to make inserting new ones easier.
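The Nread sketch referenced above, with the corrected char* arithmetic (simplified, TCP-only):

    #include <unistd.h>

    ssize_t
    nread_sketch(int fd, char *buf, size_t count)
    {
        size_t nleft = count;
        while (nleft > 0) {
            ssize_t r = read(fd, buf, nleft);
            if (r < 0)
                return -1;
            if (r == 0)
                break;          /* EOF */
            nleft -= r;
            buf += r;           /* char* advances one byte per unit */
        }
        return count - nleft;
    }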
Previously, if you used -n to specify a test sending a specified number
of bytes and -P to send multiple streams in parallel, iperf3 would send
that many bytes on each stream. With this change it just sends the
specified total number of bytes regardless of how many streams are used.