When burst mode is configured for an unlimited rate (-b0) but with a
specific packet burst value (e.g. /1000), iperf sends packets only
once: after the first burst, the iperf_check_throttle function gets
called and sets green_light=0, because the rate value is 0 and the
calculated average rate is always higher than 0.
The iperf_check_throttle function is designed to be skipped when the
target rate is unlimited or a specific burst value was configured;
however, this skip is applied at only one of the places where the
function is called, leading to the situation above.
This can be fixed by moving the "skip throttling" condition directly
inside the iperf_check_throttle function.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
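A minimal sketch of the resulting logic, using simplified stand-ins for
the iperf structs (field and function names here are illustrative, not
the verbatim upstream diff):

```c
#include <stdint.h>

/* Simplified, hypothetical stand-ins for the structs involved. */
struct settings { uint64_t rate; int burst; };
struct test     { int done; struct settings *settings; };
struct stream   { struct test *test; int green_light; };

/* The "skip throttling" test now lives inside the throttle check
 * itself, so every call site inherits it. */
static void check_throttle(struct stream *sp, double avg_bps)
{
    /* -b0 (unlimited rate) or an explicit /burst: never throttle,
     * i.e. never clear green_light. */
    if (sp->test->done || sp->test->settings->rate == 0 ||
        sp->test->settings->burst != 0)
        return;

    sp->green_light = (avg_bps < (double)sp->test->settings->rate);
}
```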
If the --bidir option was passed before the --client option on the
command line, the latter would override the test's ipt->mode parameter
back to SENDER or RECEIVER, making the test hang during execution.
Fix this by checking whether ipt->bidirectional was set to true in the
iperf_set_test_role() function.
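A sketch of the guard, again with simplified stand-in types (the real
role/mode handling lives in iperf_api.c):

```c
/* Hypothetical, simplified stand-ins for the real definitions. */
enum test_mode { SENDER, RECEIVER, BIDIRECTIONAL };
struct iperf_test_s { char role; int bidirectional; enum test_mode mode; };

/* --client/--server must not clobber a mode already set by --bidir. */
static void set_test_role(struct iperf_test_s *ipt, char role)
{
    ipt->role = role;
    if (ipt->bidirectional)
        ipt->mode = BIDIRECTIONAL;   /* keep the earlier --bidir */
    else if (role == 'c')
        ipt->mode = SENDER;
    else
        ipt->mode = RECEIVER;
}
```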
The base64 decoder will crash on musl C-library builds for OpenWRT
because it writes a '\0' one byte past the end of the allocated buffer.
Also fix various other memory leaks on the authentication code paths.
Fix some incorrect memory-freeing calls into OpenSSL.
Based heavily on PR #881 originally submitted by @acmay,
with comments from @ralcini.
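A sketch of the buffer-sizing pattern behind the crash (a hypothetical
helper, not the verbatim iperf auth code):

```c
#include <stdlib.h>

/* A decoder that NUL-terminates its output must allocate one byte
 * beyond the decoded length; otherwise the final '\0' lands past the
 * end of the buffer, which musl's allocator on OpenWRT catches. */
static char *decode_to_cstring(const char *in, size_t in_len, size_t *out_len)
{
    size_t max_out = (in_len / 4) * 3;   /* base64 output upper bound */
    char *buf = malloc(max_out + 1);     /* +1: room for the NUL */
    size_t n = 0;

    if (buf == NULL)
        return NULL;
    /* ... base64-decode `in` into `buf`, leaving n bytes written ... */
    buf[n] = '\0';                       /* now safely in bounds */
    *out_len = n;
    return buf;
}
```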
The TEST_END message is racing with the server_timer_proc timer.
When the RTT is higher than a second, the timer wins the race
and closes the control socket before the results are exchanged.
This results in the client reporting:
"error - control socket has closed unexpectedly"
as reported in GH issue 751.
This change doesn't prevent the race, but it significantly increases
the grace period, sizing it for a maximum RTT of 4 seconds and
accounting for the ten transitions in the iperf3 state machine.
(cherry picked from commit 34bdddb75194e880e6dbc6dcaa5b5386975c11b3)
(originally submitted by @acooks in #859)
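A sketch of the grace-period arithmetic described above (the names and
the exact formula are illustrative):

```c
/* Assumed worst-case control-connection round trip, in seconds. */
#define MAX_CONTROL_RTT_S    4
/* Transitions in the iperf3 control-protocol state machine. */
#define STATE_TRANSITIONS   10

/* 4 s * 10 transitions = a 40 s grace period before the server
 * timer may tear down the control socket. */
#define GRACE_PERIOD_S  (MAX_CONTROL_RTT_S * STATE_TRANSITIONS)

/* Server timer fires at: test duration + omit period + grace period. */
static long server_timeout_s(long duration_s, long omit_s)
{
    return duration_s + omit_s + GRACE_PERIOD_S;
}
```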
Allow setting test->outfile to a file other than the default when
using the libiperf API.
Since the logfile is currently opened in iperf_parse_arguments(), and
this function may not be used when running iperf through the API,
define a dedicated function iperf_open_logfile() and move the opening
of the logfile into iperf_run_client() and iperf_run_server(), so the
logfile is opened even when iperf_parse_arguments() was never called.
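A minimal sketch of such a function, assuming a simplified stand-in
struct (the real one takes a struct iperf_test *):

```c
#include <stdio.h>

/* Simplified stand-in for the relevant fields of struct iperf_test. */
struct test { char *logfile; FILE *outfile; };

/* Called from both the client and server run loops, so API users who
 * never call the argument parser still get the logfile opened. */
static int open_logfile(struct test *t)
{
    if (t->logfile != NULL && t->outfile == NULL) {
        t->outfile = fopen(t->logfile, "a+");
        if (t->outfile == NULL)
            return -1;   /* caller reports the error */
    }
    return 0;
}
```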
In bidirectional mode, if the --get-server-output option is set and
both client and server have --json set to true, the client would still
print the server's JSON output to stdout as a separate piece instead
of including it in the client's JSON output.
This patch fixes the problem: the server's JSON output is now appended
to the client's JSON field 'server_output_json', as it should be.
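A sketch of the attachment step using cJSON, which iperf3 bundles (the
helper name is illustrative; the field name comes from the description
above):

```c
#include "cJSON.h"

/* Parse the server's JSON text and attach it to the client's output
 * object instead of printing it separately to stdout. */
static void attach_server_output(cJSON *client_json, const char *server_text)
{
    cJSON *server_json = cJSON_Parse(server_text);
    if (server_json != NULL)
        cJSON_AddItemToObject(client_json, "server_output_json", server_json);
}
```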
If the client was started with the --repeating-payload option, tell
the server to use repeating_payload as well.
Since repeating_payload is currently a client-specific option and the
server is not told whether repeating_payload was set or not, the
server always uses randomized patterns in reverse and bidirectional
modes, disregarding what patterns the client was told to generate.
So, if the client was started with both --repeating-payload and
--reverse, the server would still send randomized data to the client,
which doesn't seem right.
This commit fixes this issue.
Signed-off-by: Sergey Nemov <sergey.nemov@intel.com>
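A sketch of the flag's round trip through the cJSON-based parameter
exchange (helper names are illustrative, not the verbatim diff):

```c
#include "cJSON.h"

/* Minimal stand-in for the relevant field of struct iperf_test. */
struct test { int repeating_payload; };

/* Client side: advertise the flag in the parameter-exchange JSON. */
static void add_repeating_payload(struct test *t, cJSON *params)
{
    if (t->repeating_payload)
        cJSON_AddNumberToObject(params, "repeating_payload", 1);
}

/* Server side: honor the flag if the client sent it, so reverse and
 * bidirectional streams use the pattern the client asked for. */
static void get_repeating_payload(struct test *t, cJSON *params)
{
    if (cJSON_GetObjectItem(params, "repeating_payload") != NULL)
        t->repeating_payload = 1;
}
```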
The bug reported in #505 seems to not exist at this time, and
the text added in this manpage change caused some other problems,
a la perfsonar/pscheduler#819.
Fixes #860.
From author's notes (@ben-foxmore):
The current usage of gettimeofday causes issues for us when performing
tests shortly after restarting a system. In our setup, this occurs
often as we restart the system before each test to ensure reliable
results. We already maintain our own version of iperf for some subtle
changes, but this change feels like it might be useful to upstream.
(It's also a reasonably sized change, so we'd prefer not to maintain
it with each new version of iperf.)
It uses clock_gettime on systems that have it available, and falls
back to gettimeofday when it's not. These two options use different
structures for storing time: clock_gettime uses timespec, and
gettimeofday uses timeval. To abstract over whichever one is
available, a separate iperf_time struct is defined to store time.
timespec has nanosecond accuracy, while timeval only has microseconds.
For the purposes of iperf, I don't think nanosecond accuracy is
necessary, so iperf_time only uses microseconds, throwing away any
additional accuracy. Currently I have used the MONOTONIC clock, as I
think we only need a consistent time-interval measure.
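A sketch of the abstraction (field names follow the description above;
the HAVE_CLOCK_GETTIME guard is an assumed configure-time macro):

```c
#include <stdint.h>
#include <time.h>
#include <sys/time.h>

/* Both clock sources are reduced to second/microsecond fields. */
struct iperf_time {
    uint32_t secs;
    uint32_t usecs;
};

static int iperf_time_now(struct iperf_time *t)
{
#ifdef HAVE_CLOCK_GETTIME
    struct timespec ts;
    /* Monotonic clock: a consistent interval measure, immune to
     * wall-clock jumps after a system restart. */
    if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0)
        return -1;
    t->secs  = ts.tv_sec;
    t->usecs = ts.tv_nsec / 1000;   /* drop nanosecond precision */
#else
    struct timeval tv;
    if (gettimeofday(&tv, NULL) < 0)
        return -1;
    t->secs  = tv.tv_sec;
    t->usecs = tv.tv_usec;
#endif
    return 0;
}
```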