Interconnection Details Between Hosts at the Same Site
June 25, 2025 at 5:30 pm #8654
Dear FABRIC team,
I have two nodes (sender and receiver) running on separate hosts within the same site. In some of my high-capacity experiments (a few Gbps), I am observing packet losses, and after some analysis, I suspect that these losses may be due to switches or other infrastructure between the sender and receiver.
Could you please provide some information about how the hosts are interconnected within each site? Specifically:
- What switch model is used between hosts?
- What is the port rate?
- What are the buffer sizes per output port, and per queue if multiple queues are supported?
- Is it possible to know the input buffer size before packets enter the processing pipeline?
Thank you for your help and assistance.
Kind regards,
Fatih Berkay Sarpkaya
June 26, 2025 at 3:24 pm #8659
Hi Fatih,
Could you please share your Slice ID and also let us know what kind of NICs you are using in your slice?
Thanks,
Komal
July 1, 2025 at 8:51 am #8661
Hi Komal,
Sorry for the late reply. My slice ID is “a3fbba6f-7a9a-40af-a16e-cd23e1a78b04”.
I am using SmartNICs (ConnectX-5 and ConnectX-6).
Thank you.
Best regards,
Fatih Berkay Sarpkaya
July 1, 2025 at 12:00 pm #8662
Hi Fatih,
I see you have a Layer 2 Bridge between two VMs, on different worker nodes, at site KANS. On one VM you have a 100G NIC; on the other, a 25G NIC. Are you not able to get around 25G between the two VMs?
The only network element between your VMs is a Cisco NCS5700, which should allow running at line rate.
It should work better if you send from the 25G NIC into the 100G NIC.
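A quick way to see what rate the path actually sustains is an iperf3 run between the two VMs; this is just a sketch, and the address below is a placeholder for your receiver's dataplane IP:
# On the receiver VM:
iperf3 -s
# On the sender VM (10.0.0.2 is a placeholder for the receiver's dataplane IP):
iperf3 -c 10.0.0.2 -t 30 -P 4
# The "Retr" column in the client output counts TCP retransmissions,
# a quick indicator of loss along the path.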
Have you tuned the VMs? I typically run a tuning script like this:
#!/bin/bash
# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 512MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 512MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control=bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL
sysctl --system
# Turn on jumbo frames
for dev in $(basename -a /sys/class/net/*); do
  ip link set dev $dev mtu 9000
done
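To confirm the settings took effect, a quick sanity check with standard tools (not strictly part of the tuning itself) is:
# Check the active congestion control, default qdisc, and receive buffer limit
sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc net.core.rmem_max
# Each interface line should now show "mtu 9000"
ip link show | grep mtu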
Tom