Have you ever needed to examine the current utilization of the FC HBA I/O channels on Linux?
Is your environment more or less the following:
Your Linux boxes are attached to some high-end storage infrastructure to which you have no access?
The organization you are in keeps a clear separation between the database, OS and storage admins?
The last beer you had with a storage admin was much too long ago, and you never managed to establish that level of relationship..?
You have no root access on the Linux box (and even if you had, it wouldn't help much)?
If so, then this post may be for you.
To be clear, this post does not describe the standard Linux iostat tool, which reports on the local disks.
Of course, the usual word of disclaimer:
This script comes with no guarantee. Use it at your own risk.
A word of reassurance, though: it reaches the HBA statistics in read-only mode – it just runs "cat" on a few sysfs files.
Technically, it reports the read (rd) and write (wr) throughput for a pair of HBAs, e.g. host1 & host2, in a 1-second cycle.
It samples the statistics files in a loop with a frequency of 1 second, reads the tx and rx (transmit and receive) counters, and subtracts the previously read value from the current one (the difference being whatever activity occurred on the interface in the meantime).
It then reports the result at the end of each loop iteration; the gap between two iterations is 1 second.
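The core of it boils down to the following minimal sketch for a single HBA and a single counter (host1 and rx_frames are just examples; the full script at the end of the post does the same for tx and rx on a pair of HBAs and converts frames to MB):
prev=$((16#$(awk -F x '{print $2}' /sys/class/fc_host/host1/statistics/rx_frames)))
while sleep 1; do
curr=$((16#$(awk -F x '{print $2}' /sys/class/fc_host/host1/statistics/rx_frames)))
echo "rx_frames delta over the last second: $((curr - prev))"
prev=$curr
done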
I wrote it years ago, so I won't even quote the kernel version or the Linux flavour; hence no guarantee that the paths, files, etc. are still where I assume them to be.
If they have moved on your system, just find their new location and re-edit the script accordingly.
In this version it is assumed that you have read access to the statistics populated under /sys/class/fc_host/<hostX>/statistics/ – so for example:
tx_frames
tx_words
and their rx_* counterparts.
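A quick way to see which counters your driver exposes (host1 is just an example here; the exact set of files varies with the HBA driver):
ls /sys/class/fc_host/host1/statistics/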
The script does not handle an arbitrary number of HBAs (I rebuilt it from another script of mine which reports on two [bonded] Ethernet interfaces, which is why it takes exactly a pair of HBAs for now).
I will also publish another post with the script that reports Ethernet throughput, arguably more useful than this one (e.g. to examine the level of utilization of the Ethernet link between the primary and the standby database(s) when running in SYNC mode).
As for this version, if there are more than two active HBAs on the Linux node, you may want to run it for e.g. host1 and host2 and then (or in parallel, from another window) for host3 and host4 – or simply rewrite the script (no copyrights :).
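With four active ports that could look like this (two terminal windows):
./fcstat.sh host1 host2    # window 1
./fcstat.sh host3 host4    # window 2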
On top of that, do not look at the PktSize column – it made sense in the Ethernet script, but this "HBA version" does not handle it properly and just reports 0.0, although for a full table scan it should arguably show 1MB (with db_file_multiblock_read_count = 128 and an 8KB block size, one multiblock read is 128 x 8KB = 1MB).
It definitely reports the read/write I/O throughput properly, and that is basically what it is for.
The script itself is at the end of this post.
How to run it (checks)
First, check which HBAs are present, e.g. like this:
[oracle @ TESTDB1 @ testhost01 ]$ # cd /sys/class/fc_host/
[oracle @ TESTDB1 @ testhost01 ]$ # cd host1
[oracle @ TESTDB1 @ testhost01 ]$ #
[oracle @ TESTDB1 @ testhost01 ]$ #
[oracle @ TESTDB1 @ testhost01 ]$ # ls -al
total 0
-r--r--r-- 1 root root 4096 Oct 5 01:31 active_fc4s
-rw-r--r-- 1 root root 4096 Oct 5 01:31 dev_loss_tmo
lrwxrwxrwx 1 root root 0 Oct 5 01:31 device -> ../../../host1/
-r--r--r-- 1 root root 4096 Oct 5 01:31 fabric_name
--w------- 1 root root 4096 Oct 5 01:31 issue_lip
-r--r--r-- 1 root root 4096 Oct 5 01:31 max_npiv_vports
-r--r--r-- 1 root root 4096 Oct 5 01:31 maxframe_size
-r--r--r-- 1 root root 4096 Oct 5 01:31 node_name
-r--r--r-- 1 root root 4096 Oct 5 01:31 npiv_vports_inuse
-r--r--r-- 1 root root 4096 Oct 5 01:31 port_id
-r--r--r-- 1 root root 4096 Oct 5 01:31 port_name
-r--r--r-- 1 root root 4096 Oct 5 01:31 port_state
-r--r--r-- 1 root root 4096 Oct 5 01:31 port_type
drwxr-xr-x 2 root root 0 Oct 5 01:31 power/
-r--r--r-- 1 root root 4096 Oct 5 01:31 speed
drwxr-xr-x 2 root root 0 Oct 5 01:31 statistics/
lrwxrwxrwx 1 root root 0 Oct 5 01:31 subsystem -> ../../../../../../../../../../class/fc_host/
-r--r--r-- 1 root root 4096 Oct 5 01:31 supported_classes
-r--r--r-- 1 root root 4096 Oct 5 01:31 supported_fc4s
-r--r--r-- 1 root root 4096 Oct 5 01:31 supported_speeds
-r--r--r-- 1 root root 4096 Oct 5 01:31 symbolic_name
-rw-r--r-- 1 root root 4096 Oct 5 01:31 tgtid_bind_type
-rw-r--r-- 1 root root 4096 Oct 5 01:31 uevent
--w------- 1 root root 4096 Oct 5 01:31 vport_create
--w------- 1 root root 4096 Oct 5 01:31 vport_delete
[oracle @ TESTDB1 @ testhost01 ]$ #
[oracle @ TESTDB1 @ testhost01 ]$ # cat supported_speeds
1 Gbit, 10 Gbit
[oracle @ TESTDB1 @ testhost01 ]$ #
[oracle @ TESTDB1 @ testhost01 ]$ # cat port_state
Online
A quick check whether the port is active (i.e. whether it receives any data): I sampled rx_frames a few times (note that rx_frames under host1 is clearly being populated, as the hex values keep changing):
[oracle @ TESTDB1 @ testhost01 ]$ # cd /sys/class/fc_host/host1/statistics
[oracle @ TESTDB1 @ testhost01 ]$ # cat rx_frames
0x15b03ac2
[oracle @ TESTDB1 @ testhost01 ]$ # cat rx_frames
0x15b03bfe
[oracle @ TESTDB1 @ testhost01 ]$ # cat rx_frames
0x15b03d0b
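Instead of re-running cat by hand, watch (present on most distributions) does the sampling for you:
watch -n 1 cat /sys/class/fc_host/host1/statistics/rx_frames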
Example of execution, with a test scenario:
- Produce a decent load on your database. Ideally, run a heavy SQL query in parallel, doing a full table scan on a large segment.
Make sure the SQL does not read the blocks from the buffer cache (SGA).
If you chose not to run it in parallel, I suggest forcing serial direct path reads first:
SQL> alter session set "_serial_direct_read" = always;
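Any big full table scan will do as the heavy SQL; a hypothetical example (big_table stands for whatever sufficiently large table you have at hand):
SQL> select /*+ full(t) */ count(*) from big_table t;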
SQL> <run your SQL, e.g. the full scan above>
- While (or shortly before) running the SQL, execute the script, passing the HBAs which you have verified are active (as checked above), e.g.:
[oracle @ TESTDB1 @ testhost01 ]$ # ./fcstat.sh host1 host2
Ctrl+c: STOP and Summary
1 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
1 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
2 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
2 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
3 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
3 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
4 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
4 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
5 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
5 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
6 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
6 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
7 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
7 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
8 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
8 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
9 secs wr_host1 MB: 0 rd_host1 MB: 0 Total MB: 0 PktSize[MB]: 0.0
9 secs wr_host2 MB: 0 rd_host2 MB: 0 Total MB: 0 PktSize[MB]: 0.0
--------------
10 secs wr_host1 MB: 0 rd_host1 MB: 44 Total MB: 44 PktSize[MB]: 0.0
10 secs wr_host2 MB: 0 rd_host2 MB: 48 Total MB: 48 PktSize[MB]: 0.0 -> this is where I started my SQL
--------------
11 secs wr_host1 MB: 0 rd_host1 MB: 174 Total MB: 174 PktSize[MB]: 0.0
11 secs wr_host2 MB: 0 rd_host2 MB: 171 Total MB: 171 PktSize[MB]: 0.0
--------------
12 secs wr_host1 MB: 0 rd_host1 MB: 135 Total MB: 135 PktSize[MB]: 0.0
12 secs wr_host2 MB: 0 rd_host2 MB: 139 Total MB: 139 PktSize[MB]: 0.0
--------------
13 secs wr_host1 MB: 0 rd_host1 MB: 137 Total MB: 137 PktSize[MB]: 0.0
13 secs wr_host2 MB: 0 rd_host2 MB: 134 Total MB: 134 PktSize[MB]: 0.0
--------------
14 secs wr_host1 MB: 1 rd_host1 MB: 207 Total MB: 208 PktSize[MB]: 0.0
14 secs wr_host2 MB: 1 rd_host2 MB: 209 Total MB: 210 PktSize[MB]: 0.0
--------------
15 secs wr_host1 MB: 1 rd_host1 MB: 364 Total MB: 365 PktSize[MB]: 0.0
15 secs wr_host2 MB: 1 rd_host2 MB: 365 Total MB: 366 PktSize[MB]: 0.0
--------------
16 secs wr_host1 MB: 1 rd_host1 MB: 367 Total MB: 368 PktSize[MB]: 0.0
16 secs wr_host2 MB: 1 rd_host2 MB: 369 Total MB: 370 PktSize[MB]: 0.0
--------------
17 secs wr_host1 MB: 1 rd_host1 MB: 397 Total MB: 398 PktSize[MB]: 0.0
17 secs wr_host2 MB: 1 rd_host2 MB: 397 Total MB: 398 PktSize[MB]: 0.0
--------------
18 secs wr_host1 MB: 1 rd_host1 MB: 399 Total MB: 400 PktSize[MB]: 0.0
18 secs wr_host2 MB: 1 rd_host2 MB: 406 Total MB: 407 PktSize[MB]: 0.0
--------------
19 secs wr_host1 MB: 1 rd_host1 MB: 396 Total MB: 397 PktSize[MB]: 0.0
19 secs wr_host2 MB: 1 rd_host2 MB: 391 Total MB: 392 PktSize[MB]: 0.0
--------------
20 secs wr_host1 MB: 1 rd_host1 MB: 265 Total MB: 266 PktSize[MB]: 0.0
20 secs wr_host2 MB: 1 rd_host2 MB: 260 Total MB: 261 PktSize[MB]: 0.0
--------------
21 secs wr_host1 MB: 1 rd_host1 MB: 335 Total MB: 336 PktSize[MB]: 0.0
21 secs wr_host2 MB: 1 rd_host2 MB: 335 Total MB: 336 PktSize[MB]: 0.0
--------------
22 secs wr_host1 MB: 1 rd_host1 MB: 285 Total MB: 286 PktSize[MB]: 0.0
22 secs wr_host2 MB: 1 rd_host2 MB: 288 Total MB: 289 PktSize[MB]: 0.0
--------------
23 secs wr_host1 MB: 1 rd_host1 MB: 421 Total MB: 422 PktSize[MB]: 0.0
23 secs wr_host2 MB: 1 rd_host2 MB: 423 Total MB: 424 PktSize[MB]: 0.0
--------------
24 secs wr_host1 MB: 2 rd_host1 MB: 507 Total MB: 509 PktSize[MB]: 0.0
24 secs wr_host2 MB: 2 rd_host2 MB: 501 Total MB: 503 PktSize[MB]: 0.0
--------------
25 secs wr_host1 MB: 2 rd_host1 MB: 540 Total MB: 542 PktSize[MB]: 0.0
25 secs wr_host2 MB: 2 rd_host2 MB: 542 Total MB: 544 PktSize[MB]: 0.0 -> This is where the I/O maxed-out
--------------
26 secs wr_host1 MB: 2 rd_host1 MB: 459 Total MB: 461 PktSize[MB]: 0.0
26 secs wr_host2 MB: 2 rd_host2 MB: 464 Total MB: 466 PktSize[MB]: 0.0
--------------
27 secs wr_host1 MB: 2 rd_host1 MB: 537 Total MB: 539 PktSize[MB]: 0.0
27 secs wr_host2 MB: 2 rd_host2 MB: 535 Total MB: 537 PktSize[MB]: 0.0
--------------
28 secs wr_host1 MB: 3 rd_host1 MB: 446 Total MB: 449 PktSize[MB]: 0.0
28 secs wr_host2 MB: 3 rd_host2 MB: 438 Total MB: 441 PktSize[MB]: 0.0
--------------
29 secs wr_host1 MB: 1 rd_host1 MB: 349 Total MB: 350 PktSize[MB]: 0.0
29 secs wr_host2 MB: 1 rd_host2 MB: 351 Total MB: 352 PktSize[MB]: 0.0
--------------
30 secs wr_host1 MB: 1 rd_host1 MB: 336 Total MB: 337 PktSize[MB]: 0.0
30 secs wr_host2 MB: 1 rd_host2 MB: 335 Total MB: 336 PktSize[MB]: 0.0
--------------
31 secs wr_host1 MB: 1 rd_host1 MB: 355 Total MB: 356 PktSize[MB]: 0.0
31 secs wr_host2 MB: 1 rd_host2 MB: 358 Total MB: 359 PktSize[MB]: 0.0
--------------
^C <-- stopped it here.
********** Summary :
------ AVGs --------
Avg writeIO_host1 : 0 [MB]/sec
Avg readIO_host1 : 120 [MB]/sec
Avg writeIO_host2 : 0 [MB]/sec
Avg readIO_host2 : 120 [MB]/sec
------- MAXs -------
MAX writeIO_host1 : 3 [MB]/sec
MAX readIO_host1 : 540 [MB]/sec
MAX writeIO_host2 : 3 [MB]/sec
MAX readIO_host2 : 358 [MB]/sec
********** End of Summary
How to understand the output?
In my test environment I had two HBAs connected, and the reported throughput from both is at roughly the same level.
The total max. throughput noted was ~1 GByte/sec – see the 25th second (542 + 544 MB across the two HBAs in that second).
Note that only the read side (rd, i.e. the rx counters) was populated, since I was exclusively performing read operations (I executed a SELECT statement).
Worth mentioning: it was a SELECT without any TEMP tablespace access. Had there been any operation involving TEMP – a hash join, an order by, any kind of aggregation – I would surely have seen non-zero write (wr, i.e. tx) figures as well.
So, under "wr_hostx" I would see values greater than zero while my TEMP tablespace was being written to (e.g. for a hash join, while it is building the hash table).
The total throughput must be calculated as the sum of both, i.e. the data from host1 plus the data from host2.
The "Total MB:" column in turn sums up tx and rx (so, for a database specialist: the sum of the read and write activity).
Important: I was alone on that Linux box – this is why the first lines report zeros for wr_host1 MB and rd_host1 MB.
In a live, active environment you will obviously notice at least some "noise" and perhaps some occasional "spikes".
All in all, that is what this script is for – it reports the utilization of the FC HBA I/O channels of the entire Linux system, not of a single database.
So, with this script wrapped up and left running for a longer time (tee'ing its output to a text file), it can help with quite accurate capacity planning.
If executed over ssh on every node, it can also be used to measure the utilization of the entire I/O channel of a RAC system.
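A sketch of both ideas (the log file names, the script path and the node name node2 are just placeholders):
./fcstat.sh host1 host2 | tee /tmp/fcstat_$(hostname -s).log
ssh -t node2 '~/fcstat.sh host1 host2' | tee /tmp/fcstat_node2.log
Stop each with Ctrl+C; the -t flag should make ssh forward the interrupt to the remote script, so you still get its summary.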
The script itself :
#!/bin/bash
#####
#
# fcstat.sh
# Example of exec: ./fcstat.sh host1 host2
# Piotr Sajda (PS5)
# Script reports rx/tx rate [MB/sec] for two selected HBA interfaces
# It uses the OS statistics accessible under /sys/class/fc_host/<hostX>/statistics
# Run ./fcstat.sh <HBA1> <HBA2>, e.g. ./fcstat.sh host1 host2.
# Stop: Ctrl+C
#
####
FC1=$1
FC2=$2
if [ "x${1}" = "x" ] || [ "x${2}" = "x" ]; then
printf "\n"
echo "Parameters missing !"
echo "Give two parameters naming the HBA interfaces, e.g.: ./fcstat.sh host1 host2"
echo "How to find them? ls -l /sys/class/fc_host/ "
printf "\n"
exit 1;
fi
maxframe_size1=$(cat /sys/class/fc_host/${FC1}/maxframe_size | awk '{print $1}')
maxframe_size2=$(cat /sys/class/fc_host/${FC2}/maxframe_size | awk '{print $1}')
function ctrl_c() {
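# Summary printed on Ctrl+C: overall averages (total counter delta divided by the number of 1-second samples) and the per-second maxima observed.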
echo -e "\n ********** Summary : \n"
let myavg_tx_FC1=($curr_value_tx_FC1 - $first_value_tx_FC1)/$mycounter/1024
let myavg_rx_FC1=($curr_value_rx_FC1 - $first_value_rx_FC1)/$mycounter/1024
let myavg_tx_FC2=($curr_value_tx_FC2 - $first_value_tx_FC2)/$mycounter/1024
let myavg_rx_FC2=($curr_value_rx_FC2 - $first_value_rx_FC2)/$mycounter/1024
echo -e " ------ AVGs -------- "
echo "Avg writeIO_${FC1} : $myavg_tx_FC1 [MB]/sec"
echo "Avg readIO_${FC1} : $myavg_rx_FC1 [MB]/sec"
echo "Avg writeIO_${FC2} : $myavg_tx_FC2 [MB]/sec"
echo "Avg readIO_${FC2} : $myavg_rx_FC2 [MB]/sec"
echo -e " "
echo -e " ------- MAXs ------- "
echo "MAX writeIO_${FC1} : $max_valueMB_tx_FC1 [MB]/sec"
echo "MAX readIO_${FC1} : $max_valueMB_rx_FC1 [MB]/sec"
echo "MAX writeIO_${FC2} : $max_valueMB_tx_FC2 [MB]/sec"
echo "MAX readIO_${FC2} : $max_valueMB_rx_FC2 [MB]/sec"
echo -e "\n ********** End of Summary \n"
exit 0
}
let mycounter=0
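# Main loop: one sample per second; Ctrl+C triggers ctrl_c() via the trap set at the bottom of the loop.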
while true; do
if [ $mycounter -eq 0 ]; then
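# First pass: only seed the baseline counters; the sysfs values are hex (0x...), hence the 16# conversion.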
mycounter=`expr $mycounter + 1`
prev_value_tx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/tx_frames | awk -F "x" '{print $2}'`))
prev_value_rx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/rx_frames | awk -F "x" '{print $2}'`))
prev_value_packets_tx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/tx_words | awk -F "x" '{print $2}'`))
prev_value_packets_rx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/rx_words | awk -F "x" '{print $2}'`))
prev_value_tx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/tx_frames | awk -F "x" '{print $2}'`))
prev_value_rx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/rx_frames | awk -F "x" '{print $2}'`))
prev_value_packets_tx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/tx_words | awk -F "x" '{print $2}'`))
prev_value_packets_rx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/rx_words | awk -F "x" '{print $2}'`))
first_value_tx_FC1=$prev_value_tx_FC1
first_value_rx_FC1=$prev_value_rx_FC1
first_value_packets_tx_FC1=$prev_value_packets_tx_FC1
first_value_packets_rx_FC1=$prev_value_packets_rx_FC1
first_value_tx_FC2=$prev_value_tx_FC2
first_value_rx_FC2=$prev_value_rx_FC2
first_value_packets_tx_FC2=$prev_value_packets_tx_FC2
first_value_packets_rx_FC2=$prev_value_packets_rx_FC2
max_valueMB_tx_FC1=0
max_valueMB_rx_FC1=0
max_valueMB_tx_FC2=0
max_valueMB_rx_FC2=0
echo -e "\n Ctrl+c: STOP and Summary \n"
sleep 1
else
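# Every subsequent pass: read the current counters and compute the deltas accumulated over the last second.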
curr_value_tx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/tx_frames | awk -F "x" '{print $2}'`))
curr_value_rx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/rx_frames | awk -F "x" '{print $2}'`))
curr_value_packets_tx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/tx_words | awk -F "x" '{print $2}'`))
curr_value_packets_rx_FC1=$((16#`cat /sys/class/fc_host/${FC1}/statistics/rx_words | awk -F "x" '{print $2}'`))
curr_value_tx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/tx_frames | awk -F "x" '{print $2}'`))
curr_value_rx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/rx_frames | awk -F "x" '{print $2}'`))
curr_value_packets_tx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/tx_words | awk -F "x" '{print $2}'`))
curr_value_packets_rx_FC2=$((16#`cat /sys/class/fc_host/${FC2}/statistics/rx_words | awk -F "x" '{print $2}'`))
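# Frame-count delta multiplied by the max frame size approximates the bytes moved during the last second; /1024/1024 converts to MB.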
let myincreaseMB_tx_FC1=($curr_value_tx_FC1 - $prev_value_tx_FC1)*${maxframe_size1}/1024/1024
let myincreaseMB_rx_FC1=($curr_value_rx_FC1 - $prev_value_rx_FC1)*${maxframe_size1}/1024/1024
let mytotal_increaseMB_FC1=($myincreaseMB_tx_FC1 + $myincreaseMB_rx_FC1)
let myincrease_packets_tx_FC1=($curr_value_packets_tx_FC1 - $prev_value_packets_tx_FC1)
let myincrease_packets_rx_FC1=($curr_value_packets_rx_FC1 - $prev_value_packets_rx_FC1)
let mytotal_packets_increase_FC1=($myincrease_packets_tx_FC1 + $myincrease_packets_rx_FC1)
let myincreaseMB_tx_FC2=($curr_value_tx_FC2 - $prev_value_tx_FC2)*${maxframe_size2}/1024/1024
let myincreaseMB_rx_FC2=($curr_value_rx_FC2 - $prev_value_rx_FC2)*${maxframe_size2}/1024/1024
let mytotal_increaseMB_FC2=($myincreaseMB_tx_FC2 + $myincreaseMB_rx_FC2)
let myincrease_packets_tx_FC2=($curr_value_packets_tx_FC2 - $prev_value_packets_tx_FC2)
let myincrease_packets_rx_FC2=($curr_value_packets_rx_FC2 - $prev_value_packets_rx_FC2)
let mytotal_packets_increase_FC2=($myincrease_packets_tx_FC2 + $myincrease_packets_rx_FC2)
prev_value_tx_FC1=$curr_value_tx_FC1
prev_value_rx_FC1=$curr_value_rx_FC1
prev_value_packets_tx_FC1=$curr_value_packets_tx_FC1
prev_value_packets_rx_FC1=$curr_value_packets_rx_FC1
prev_value_tx_FC2=$curr_value_tx_FC2
prev_value_rx_FC2=$curr_value_rx_FC2
prev_value_packets_tx_FC2=$curr_value_packets_tx_FC2
prev_value_packets_rx_FC2=$curr_value_packets_rx_FC2
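# PktSize = MB moved per FC word transferred; a leftover from the Ethernet version of this script, not meaningful here (prints 0.0).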
if [ $mytotal_packets_increase_FC1 -eq 0 ]; then mypacket_size_FC1=0
else
mypacket_size_FC1=$( echo $mytotal_increaseMB_FC1 $mytotal_packets_increase_FC1 | awk '{printf "%f", $1/$2}' )
fi
if [ $mytotal_packets_increase_FC2 -eq 0 ]; then mypacket_size_FC2=0
else
mypacket_size_FC2=$( echo $mytotal_increaseMB_FC2 $mytotal_packets_increase_FC2 | awk '{printf "%f", $1/$2}' )
fi
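# Track the per-second maxima for the Ctrl+C summary.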
if [ $max_valueMB_tx_FC1 -lt $myincreaseMB_tx_FC1 ]; then
max_valueMB_tx_FC1=$myincreaseMB_tx_FC1
fi
if [ $max_valueMB_rx_FC1 -lt $myincreaseMB_rx_FC1 ]; then
max_valueMB_rx_FC1=$myincreaseMB_rx_FC1
fi
if [ $max_valueMB_tx_FC2 -lt $myincreaseMB_tx_FC2 ]; then
max_valueMB_tx_FC2=$myincreaseMB_tx_FC2
fi
if [ $max_valueMB_rx_FC2 -lt $myincreaseMB_rx_FC2 ]; then
max_valueMB_rx_FC2=$myincreaseMB_rx_FC2
fi
printf "%3d %5s %6s %6d %10s %6d %10s %6d %15s %4.1f\n" \
"$mycounter" "secs" \
"wr_${FC1} MB:" "$myincreaseMB_tx_FC1" \
"rd_${FC1} MB:" "$myincreaseMB_rx_FC1" \
"Total MB:" "$mytotal_increaseMB_FC1" \
"PktSize[MB]:" "$mypacket_size_FC1"
printf "%3d %5s %6s %6d %10s %6d %10s %6d %15s %4.1f\n" \
"$mycounter" "secs" \
"wr_${FC2} MB:" "$myincreaseMB_tx_FC2" \
"rd_${FC2} MB:" "$myincreaseMB_rx_FC2" \
"Total MB:" "$mytotal_increaseMB_FC2" \
"PktSize[MB]:" "$mypacket_size_FC2"
echo -e "--------------"
sleep 1
mycounter=`expr $mycounter + 1`
fi
trap ctrl_c INT
done
I have just noticed there was a bug in calculating the max. values: for the second HBA, the read maximum was compared against the write maximum (max_valueMB_tx_FC2 instead of max_valueMB_rx_FC2), so the "max" effectively just tracked the most recent per-second value. Clearly, it should report 542 [MB]/sec. The comparison is corrected in the listing above; the rest is fine.
------- MAXs -------
MAX writeIO_host1 : 3 [MB]/sec
MAX readIO_host1 : 540 [MB]/sec
MAX writeIO_host2 : 3 [MB]/sec
MAX readIO_host2 : 358 [MB]/sec --> wrong.