New live-feed, new analysis options and CSV export #18

Merged · 7 commits · Oct 21, 2024
79 changes: 51 additions & 28 deletions README.md
hperf is a tool for active measurement of the maximum achievable bandwidth between N peers, measuring RX/TX bandwidth for each peer.

## What is hperf for
Hperf was made to test networks in large infrastructure. It's highly scalable and capable of running parallel tests over a long period of time.

## Common use cases
- Debugging link/nic MTU issues
Expand All @@ -19,18 +18,19 @@ a long period of time.
The binary can act as both client and server.

### Client
The client part of hperf is responsible for orchestrating the servers. Its only job is to send commands to the servers and receive incremental stats updates. It can be executed from any machine that can talk to the servers.
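For example, a client on any workstation that can reach the servers could kick off a short latency test like the sketch below (the hosts, test ID, and duration are illustrative):

```bash
# orchestrate a quick 10 second latency test from the client machine
$ ./hperf latency --hosts 10.10.2.{10...20} --id quick-check --duration 10 --concurrency 1
```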

### Servers
Servers are the machines we are testing. To launch the hperf command in server mode, use the `server` command:

```bash
$ ./hperf server --help
```

This command will start an API and websocket on the given `--address` and save test results to `--storage-path`.

NOTE: `server` is the only command you can execute on the servers; all other commands are executed from the client.

WARNING: Do not expose `--address` to the internet.

NOTE: If `--address` is not the external IP address used for communication between the servers, you must also set `--real-ip`. Otherwise the server will report internal IPs in the stats and run the test against itself, causing invalid results.
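For example, a server that binds to an internal address but is reached by its peers on an external address could be started like this (the addresses below are illustrative):

```bash
# bind to the internal interface, report the external IP to peers
$ ./hperf server --address 10.10.2.10:5000 --real-ip 150.150.20.2 --storage-path /tmp/hperf/
```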

### The listen command
Hperf can run tests without a specific `client` needing to be constantly connected. Once the `client` has started a test, the `client` can disconnect and later re-attach to the live feed of the running test with the `listen` command, or simply collect the results once the test has finished.
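For example, re-attaching to the live feed of a test that is already running (the hosts and test ID are illustrative):

```bash
# listen in on a test started earlier by any client
$ ./hperf listen --hosts 1.1.1.{1...100} --id my_test_id
```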
To install hperf: `go install github.com/minio/hperf/cmd/hperf@latest`

### Server
Run server with default settings:
NOTE: This will place all test result files in the same directory and bind to 0.0.0.0. We do not recommend this for larger tests.
```bash
$ ./hperf server
```

Run the server with a custom `--address`, `--real-ip`, and `--storage-path`:
```bash
$ ./hperf server --address 10.10.2.10:5000 --real-ip 150.150.20.2 --storage-path /tmp/hperf/
```

### Client
Hosts can be provided using the ellipsis pattern (e.g. `1.1.1.{1...100}`) or from a file with `file:./hosts`, one host per file line.
NOTE: Be careful not to re-use test IDs if you care about fetching the results at a later date.

```bash
# get test results
./hperf stat --hosts 1.1.1.{1...100} --id [my_test_id]
# save test results
./hperf stat --hosts 1.1.1.{1...100} --id [my_test_id] --output /tmp/test.out

# listen in on a running test
./hperf listen --hosts 1.1.1.{1...100} --id [my_test_id]

# stop a running test
./hperf stop --hosts 1.1.1.{1...100} --id [my_test_id]

# download test results
./hperf download --hosts 1.1.1.{1...100} --id [my_test_id] --file /tmp/test.out

# analyze test results
./hperf analyze --file /tmp/test.out
# analyze test results with full print output
./hperf analyze --file /tmp/test.out --print-stats --print-errors

# Generate a .csv file from a .json test file
./hperf csv --file /tmp/test.out
```

## Analysis
The analyze command will print statistics for the 10th and 90th percentiles and all datapoints in between. Additionally, you can use the `--print-stats` and `--print-errors` flags for more verbose output.

The analysis will show:
- 10th percentile: total, low, average, high
- in between: total, low, average, high
- 90th percentile: total, low, average, high

## Statistics
- Payload Roundtrip (RMS high/low):
- Payload transfer time (Microseconds)
- Time to first byte (TTFB high/low):
- The amount of time (in microseconds) between a request being made and the first byte being read by the receiver
- Transferred bytes (TX high/low):
- Bandwidth throughput in KB/s, MB/s, GB/s, etc.
- Transferred bytes (TX total):
- Total transferred bytes (not per second)
- Request count (#TX):
- The number of HTTP/s requests made
- Error Count (#ERR):
- Number of encountered errors
- Dropped Packets (#Dropped):
- Total dropped packets on the server (total for all time)
- Memory (Mem high/low/used):
- Memory in use, with the high and low readings seen during the test
- CPU (CPU high/low/used):
- CPU in use, with the high and low readings seen during the test

## Example: 20 second HTTP payload transfer test using multiple sockets
This test will use 12 concurrent workers to send HTTP requests carrying a payload, with no delay between requests. It performs a 20 second bandwidth test with 12 concurrent HTTP streams:
$ ./hperf bandwidth --hosts file:./hosts --id http-test-2 --duration 20 --concurrency 12
```

## Example: 5 Minute latency test using a 1000 Byte buffer, with a delay of 50ms between requests
This test will send a single round trip request between servers to test base latency and reachability:
```
$ ./hperf latency --hosts file:./hosts --id http-test-2 --duration 360 --concurrency 1 --requestDelay 50
--bufferSize 1000 --payloadSize 1000
```

## Full test scenario with analysis and CSV export
### On the server
```bash
$ ./hperf server --address 10.10.2.10:5000 --real-ip 150.150.20.2 --storage-path /tmp/hperf/
```

### The client
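A minimal sketch of the client-side flow using the commands documented above; the host range, test ID, and file paths are illustrative:

```bash
# run a 20 second bandwidth test against the servers
$ ./hperf bandwidth --hosts 150.150.20.{2...10} --id full-test-1 --duration 20 --concurrency 12

# save the test results to a file
$ ./hperf stat --hosts 150.150.20.{2...10} --id full-test-1 --output /tmp/full-test-1.json

# analyze the results with full print output
$ ./hperf analyze --file /tmp/full-test-1.json --print-stats --print-errors

# generate a .csv file from the test results
$ ./hperf csv --file /tmp/full-test-1.json
```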