get_physical_os_interface()[‘ifname’] failed
July 14, 2022 at 12:00 pm #2282
I’m kind of new to network adapter tuning. I want to test the 100 Gbps WAN link, and I found a useful Jupyter notebook in the testing_and_debugging folder: ‘test-wan-networks’. However, when I run the notebook I find that one function is deprecated: get_physical_os_interface(). Is this setting necessary, or do we only need to set os_interface?
July 14, 2022 at 1:52 pm #2283
I think some of those debugging notebooks are old and may not work anymore.
You can use the 100G networks by just creating a WAN link that connects VMs using 100G NICs. Any of the regular networking notebooks should work for this. The only thing to think about is that, for now, dedicated quality of service guarantees are not available. However, very little bandwidth is currently being used and you should not be limited by other users.
That said, we have only begun testing most of the links and have not confirmed the bandwidth we can achieve. In theory, most of them should be able to get 100G but I suspect most of them will need some tuning. Please try this and let us know what you can achieve.
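As a quick sanity check once the slice is up, an iperf3 run between the two VMs over the new link is usually enough to see what you are getting (the address below is a placeholder for whatever you assigned on the far VM’s 100G interface, and the stream count is just an example):
$ iperf3 -s                           # on the first VM
$ iperf3 -c <far-vm-ip> -P 8 -t 30    # on the second VM: 8 parallel streams for 30 seconds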
thanks,
Paul
July 15, 2022 at 9:29 am #2300
According to the tuning instructions, for now, with the sites STAR and SALT (between which I assume there is a 100 Gbps link) and a Basic 100G NIC, I can achieve the following with TCP:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  86.8 GBytes  12.4 Gbits/sec  806071   sender
[  5]   0.00-60.04  sec  86.8 GBytes  12.4 Gbits/sec           receiver

This is not as good as I expected. With the UDP test, I can achieve:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  39.7 GBytes  5.68 Gbits/sec  0.000 ms  0/4746870 (0%)        sender
[  5]   0.00-60.03  sec   438 MBytes  61.3 Mbits/sec  0.003 ms  2483897/2535131 (98%) receiver

which is also not much.
Is there any way I could increase the bandwidth? Thank you so much!
July 15, 2022 at 9:35 am #2303
BTW, the VM configuration is:
Cores  RAM (GB)  Disk (GB)
8      32        100

July 15, 2022 at 10:50 am #2322
We are still working on tuning all the links and trying to figure out best practices for achieving very high bandwidths. There are no artificial limitations on that link and, in theory, 100G is possible. This is just going to require a bunch of tuning, both at the edge and probably in the core.
I know some of our students were looking at this and achieved ~100G between pairs of sites that are close to each other. I’m not sure what the current best bandwidth on the longer spans is, but I remember seeing them get at least 30G in some tests. We would be interested in hearing about any success you have achieving higher bandwidths.
What tuning did you perform on your nodes?
In general, there are a lot of variables that can prevent bandwidths at these rates. You might reduce some of those variables by starting with a pair of sites that are close to each other (maybe UTAH/SALT) and using dedicated ConnectX-6 cards.
Your UDP test has 98% loss. Given that the card is a 100G card, it can easily overwhelm an intermediary switch, which can result in huge packet losses like that. You might try the UDP test with a lower bandwidth and slowly increase it until your packet loss starts to grow, then try different tuning parameters to see if you can push it higher.
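In practice that sweep can be a simple loop, for example (the target address is a placeholder for the far VM, and the 8972-byte datagram assumes a 9000 MTU):
$ for bw in 1G 5G 10G 20G 40G; do iperf3 -u -b $bw -l 8972 -t 30 -c <far-vm-ip>; done   # watch the reported loss at each step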
I’m going to see if one of our students who was working on this can add more here…
July 15, 2022 at 11:06 am #2323
Thank you for your advice, Paul! I basically used the instructions here: https://srcc.stanford.edu/100g-network-adapter-tuning and here: https://fasterdata.es.net/host-tuning/linux/ for TCP, and https://fasterdata.es.net/host-tuning/linux/udp-tuning/ for UDP. Specifically:
——-TCP
/etc/sysctl.conf
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.ipv4.tcp_congestion_control=bbr
net.ipv4.tcp_mtu_probing=1
net.core.default_qdisc = fq
net.core.netdev_max_backlog = 250000
net.ipv4.tcp_no_metrics_save = 1

$ ethtool -K <eth1> lro on
$ ifconfig <eth1> txqueuelen 20000
$ systemctl stop irqbalance

———– UDP
$ iperf3 -s
$ iperf3 -l8972 -u -w4m -b0 -A 4,4 -c 192.168.1.1 -t 60

I can try i) closer sites and ii) ConnectX-6 cards later to see whether I get better results.
I will also follow your advice on UDP tuning and try to find a good bandwidth.
Looking forward to useful examples of making full use of the FABRIC network link capacities.
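One note on applying these: the sysctl values above can be loaded without a reboot, for example:
$ sudo sysctl -p                              # reload /etc/sysctl.conf
$ sysctl net.ipv4.tcp_congestion_control      # spot-check that a value took effect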
July 15, 2022 at 11:59 am #2330
Those are all great resources.
MTU = 9000 (jumbo frames) is important too.
With 100G NICs, this advice from fasterdata.es.net also matters:
We also strongly recommend reducing the maximum flow rate to avoid bursts of packets that could overflow switch and receive-host buffers. For example, for a 10G host, add this to a boot script:
/sbin/tc qdisc add dev ethN root fq maxrate 8gbit
For a host running data transfer tools that use 4 parallel streams, do this:
/sbin/tc qdisc add dev ethN root fq maxrate 2gbit
Where 'ethN' is the name of the ethernet device on your system.
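On a FABRIC VM the jumbo-frame and pacing settings might look something like this (the interface name and the 20gbit cap are placeholders to adjust for your setup):
$ sudo ip link set dev <eth1> mtu 9000                      # enable jumbo frames on the experiment interface
$ sudo tc qdisc replace dev <eth1> root fq maxrate 20gbit   # pace the sender so it does not overflow switch buffers
$ ip link show <eth1> | grep mtu                            # confirm the MTU took effect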
July 19, 2022 at 10:39 am #2379
Hi Paul,
After a few tests, I still cannot reach 30G. (I can only get at most 20G between UTAH and SALT with a ConnectX-6 NIC.) Would you be willing to share your settings so that I can duplicate them and compare results? For example: how many cores and how much RAM you use for the test, how many parallel tasks you start for the iperf3 test, between which two sites you run the test, which NIC you use, and how the network is set up.
Thank you so much for your help.
July 20, 2022 at 9:36 am #2385
Hello.
I was able to get 70Gbps on a same-site experiment (first notebook attached), and 19Gbps on different-site experiments (second and third notebooks attached). The fourth notebook just has a simple bit of UI code that might help you modify the parameters and plot the results on a graph.
Below are the tuning parameters that I use.
net.core.rmem_max = 2147483647
net.core.wmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 87380 2147483647
net.ipv4.tcp_wmem = 4096 65536 2147483647
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_mtu_probing = 1
net.core.default_qdisc = fq
I use 32 parallel streams and the largest window size I can get, like so: -P 32 -w 999M. I use iperf, not iperf3; I tried to get the 70Gbps result with iperf3 before, but had no luck.
July 20, 2022 at 9:57 am #2386
It seems that there’s an issue with file upload on the forum, so here are the files: https://github.com/842Mono/Bandwidth_Test_Notebooks
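For reference, the full iperf command line I describe above would look roughly like this (the server address is a placeholder):
$ iperf -s -w 999M                            # server side: large TCP window
$ iperf -c <server-ip> -P 32 -w 999M -t 60    # client side: 32 parallel streams, 60-second test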