Stanley Soman, Senior Security Cloud Support Engineer, Team Lead

Troubleshoot speed and throughput issues with iPerf


Two of the most common network characteristics we look at when investigating network-related concerns are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data. To your shock, you see “Time Remaining: 10 Hours.”

“What’s wrong with the network?” you wonder. The traceroute and MTR look fine—but where’s the performance and bandwidth you’re paying for?

This issue is all too common and it has nothing to do with the network. In fact, the culprits are none other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery, it doesn’t send more until it receives an acknowledgment from the remote host that all of the data was received. The amount it will send before waiting is the “TCP Window.” Data travels at the speed of light, and typically, most hosts are fairly close together, so this windowing happens so fast that we don’t even notice it. But as the distance between two hosts increases, the speed of light remains constant; the further apart the two hosts, the longer the sender waits for each acknowledgment, and the lower the overall throughput. The amount of data a path can hold in flight (its bandwidth multiplied by the round-trip delay) is called the “Bandwidth Delay Product,” or BDP; whenever the TCP window is smaller than the BDP, the sender spends part of its time idle and throughput suffers.
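To put rough numbers on it (illustrative figures, not measurements): with a common default window of 64 KBytes and a 70 ms round trip, a single TCP stream tops out at about 64 KBytes / 0.070 sec, which works out to roughly 7.5 Mbits/sec, no matter how fast the underlying link is.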


We can overcome BDP to some degree by sending more data at a time. We do this by adjusting the “TCP Window,” telling TCP to send more data per flow than the default parameters allow. Each OS is different and the default values will vary, but almost all operating systems allow tweaking of the TCP stack and/or using parallel data streams.
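Using the same illustrative 70 ms round trip, raising the window to 1 MByte lifts that single-stream ceiling to roughly 1 MByte / 0.070 sec, or about 120 Mbits/sec, which is the kind of jump you’ll see in the test results later in this article.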

So what is iPerf, and how does it fit into all of this?

What is iPerf?

iPerf is a simple, open source, command-line, network diagnostic tool that you install on two endpoints which can run on Linux, BSD, or Windows platforms. One side runs in a “server” mode, listening for requests; the other end runs “client” mode, sending data. When activated, it tries to send as much data down your pipe as it can, spitting out transfer statistics as it does. What’s so cool about iPerf is that you can test in real time any number of TCP window settings—even using parallel streams. There’s even a Java-based GUI you can install that runs on top of it called JPerf (JPerf is beyond the scope of this article, but I recommend looking into it). What’s even cooler is that because iPerf resides in memory, there are no files to clean up.


How do I use iPerf?

You can quickly download iPerf here. It uses port 5001 by default, and the bandwidth it displays is from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both of the hosts and run these commands.

On the server side:
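A representative invocation, assuming the classic iperf 2 binary (iperf.exe on Windows) is on your path:

iperf -s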

On the client side:
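And a minimal client invocation, pointing at the server’s IP address (10.10.10.5 in the sample output below):

iperf -c 10.10.10.5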

The output on the client side will look like this:

------------------------------------------------------------

Client connecting to 10.10.10.5, TCP port 5001

TCP window size: 16.0 KByte (default)

------------------------------------------------------------

[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001

[ ID] Interval Transfer Bandwidth

[ 3] 0.0- 10.0 sec 10.0 MBytes 1.00 Mbits/sec

There are a lot of things we can do to make this output better, with more meaningful data. For example, let’s say we want the test to run for 20 seconds instead of 10 (-t 20), and we want to display transfer data every 2 seconds instead of the default of 10 (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let’s use those customizations as our baseline. This is what the command string would look like on both ends:
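With classic iperf 2 syntax, and assuming the same server address as before, the commands would look something like this:

On the server: iperf -s -p 8000 -i 2

On the client: iperf -c 10.10.10.5 -p 8000 -t 20 -i 2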

Client side:

------------------------------------------------------------

Client connecting to 10.10.10.5, TCP port 8000

TCP window size: 16.0 KByte (default)

------------------------------------------------------------

[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000

[ ID] Interval Transfer Bandwidth

[ 3] 0.0- 2.0 sec 6.00 MBytes 25.2 Mbits/sec

[ 3] 2.0- 4.0 sec 7.12 MBytes 29.9 Mbits/sec

[ 3] 4.0- 6.0 sec 7.00 MBytes 29.4 Mbits/sec

[ 3] 6.0- 8.0 sec 7.12 MBytes 29.9 Mbits/sec

[ 3] 8.0-10.0 sec 7.25 MBytes 30.4 Mbits/sec

[ 3] 10.0-12.0 sec 7.00 MBytes 29.4 Mbits/sec

[ 3] 12.0-14.0 sec 7.12 MBytes 29.9 Mbits/sec

[ 3] 14.0-16.0 sec 7.25 MBytes 30.4 Mbits/sec

[ 3] 16.0-18.0 sec 6.88 MBytes 28.8 Mbits/sec

[ 3] 18.0-20.0 sec 7.25 MBytes 30.4 Mbits/sec

[ 3] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec

Server side:

------------------------------------------------------------

Server listening on TCP port 8000

TCP window size: 8.00 KByte (default)

------------------------------------------------------------

[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316

[ ID] Interval Transfer Bandwidth

[ 4] 0.0- 2.0 sec 6.05 MBytes 25.4 Mbits/sec

[ 4] 2.0- 4.0 sec 7.19 MBytes 30.1 Mbits/sec

[ 4] 4.0- 6.0 sec 6.94 MBytes 29.1 Mbits/sec

[ 4] 6.0- 8.0 sec 7.19 MBytes 30.2 Mbits/sec

[ 4] 8.0-10.0 sec 7.19 MBytes 30.1 Mbits/sec

[ 4] 10.0-12.0 sec 6.95 MBytes 29.1 Mbits/sec

[ 4] 12.0-14.0 sec 7.19 MBytes 30.2 Mbits/sec

[ 4] 14.0-16.0 sec 7.19 MBytes 30.2 Mbits/sec

[ 4] 16.0-18.0 sec 6.95 MBytes 29.1 Mbits/sec

[ 4] 18.0-20.0 sec 7.19 MBytes 30.1 Mbits/sec

[ 4] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where we’ll use the TCP window options and parallel streams. To set a new TCP window, you use the -w switch, and you can set the parallel streams by using -P.

Increased TCP window commands:

Server side:
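A representative command, keeping the port and reporting interval from our baseline and requesting a 1 MByte window (iperf accepts 1024k or 1M here):

iperf -s -p 8000 -i 2 -w 1024k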

Client side:
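And the matching client command:

iperf -c 10.10.10.5 -p 8000 -t 20 -i 2 -w 1024k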

Here are the iPerf results from two IBM Cloud file servers: one in Washington, D.C., acting as client, the other in Seattle acting as server:

Client side:

------------------------------------------------------------

Client connecting to 10.10.10.5, TCP port 8000

TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)

------------------------------------------------------------

[ 3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000

[ ID] Interval Transfer Bandwidth

[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[ 3] 2.0- 4.0 sec 28.5 MBytes 120 Mbits/sec

[ 3] 4.0- 6.0 sec 28.4 MBytes 119 Mbits/sec

[ 3] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec

[ 3] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec

[ 3] 10.0-12.0 sec 29.0 MBytes 122 Mbits/sec

[ 3] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec

[ 3] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec

[ 3] 16.0-18.0 sec 27.9 MBytes 117 Mbits/sec

[ 3] 18.0-20.0 sec 29.0 MBytes 122 Mbits/sec

[ 3] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

Server side:

------------------------------------------------------------

Server listening on TCP port 8000

TCP window size: 1.00 MByte

------------------------------------------------------------

[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903

[ ID] Interval Transfer Bandwidth

[ 4] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[ 4] 2.0- 4.0 sec 28.6 MBytes 120 Mbits/sec

[ 4] 4.0- 6.0 sec 28.3 MBytes 119 Mbits/sec

[ 4] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec

[ 4] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec

[ 4] 10.0-12.0 sec 29.0 MBytes 121 Mbits/sec

[ 4] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec

[ 4] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec

[ 4] 16.0-18.0 sec 28.0 MBytes 117 Mbits/sec

[ 4] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec

[ 4] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

We see here that by increasing the TCP window from the default value to 1 MB (1024k), we achieved roughly a fourfold increase in throughput over our baseline (about 118 Mb/s versus 29 Mb/s). Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams, we can fill the pipe close to its maximum usable amount.

Parallel stream command:
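Keeping the same options as the window test and adding parallel streams with -P (seven streams here, matching the seven stream IDs in the output below), the client command would look something like this:

iperf -c 10.10.10.5 -p 8000 -t 20 -i 2 -w 1024k -P 7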

Client side:

------------------------------------------------------------

Client connecting to 10.10.10.5, TCP port 8000

TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)

------------------------------------------------------------

[ ID] Interval Transfer Bandwidth

[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 7] 0.0- 2.0 sec 25.6 MBytes 107 Mbits/sec

[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 5] 0.0- 2.0 sec 25.8 MBytes 108 Mbits/sec

[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[SUM] 0.0- 2.0 sec 178 MBytes 746 Mbits/sec

[ 7] 18.0-20.0 sec 28.2 MBytes 118 Mbits/sec

[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec

[ 5] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec

[ 4] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec

[ 3] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec

[ 9] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec

[ 6] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec

[SUM] 18.0-20.0 sec 200 MBytes 837 Mbits/sec

[SUM] 0.0-20.0 sec 1.93 GBytes 826 Mbits/sec

Server side:

------------------------------------------------------------

Server listening on TCP port 8000

TCP window size: 1.00 MByte

------------------------------------------------------------

[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903

[ ID] Interval Transfer Bandwidth

[ 5] 0.0- 2.0 sec 25.7 MBytes 108 Mbits/sec

[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec

[ 10] 0.0- 2.0 sec 25.9 MBytes 108 Mbits/sec

[ 7] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec

[SUM] 0.0- 2.0 sec 178 MBytes 747 Mbits/sec

[ 4] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec

[ 5] 18.0-20.0 sec 28.3 MBytes 119 Mbits/sec

[ 7] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec

[ 10] 18.0-20.0 sec 28.1 MBytes 118 Mbits/sec

[ 9] 18.0-20.0 sec 28.0 MBytes 118 Mbits/sec

[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec

[ 6] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec

[SUM] 18.0-20.0 sec 200 MBytes 838 Mbits/sec

[SUM] 0.0-20.1 sec 1.93 GBytes 825 Mbits/sec

As you can see from the tests above, we increased throughput from 29 Mb/s with a single stream and the default TCP window to roughly 825 Mb/s using a larger window and parallel streams. On a gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. We proved out the network and verified that bandwidth capacity was not the issue. From that conclusion, we focused on tweaking TCP to get the most out of the network.


You can also run UDP tests with iPerf when circumstances call for it. To use UDP instead of TCP, simply add the -u flag, paired with the -b flag, which sets the target UDP bandwidth in bits per second. To test a 1000 Mbps NIC, for example, you can pass -b 1000M to send at 1000 Mbits/sec (1 Gbit/sec); the default is only 1 Mbit/sec.

Here is an example:
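A representative pair of commands, reusing the hosts from the earlier tests:

Server side: iperf -s -u

Client side: iperf -c 10.10.10.5 -u -b 1000M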

We will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve; push for much more and you’ll begin to saturate the link and incur packet loss. IBM Cloud doesn’t directly support iPerf, so it’s up to you to install it and play around with it. It’s a versatile, easy-to-use little piece of software that we think is invaluable.

Original Article written by Andrew Tyler. Updated by Stanley Soman.
