Forum Replies Created
Hi Fatih,
I see you have a Layer 2 bridge between two VMs on different worker nodes at site KANS. On one VM you have a 100G NIC, on the other a 25G NIC. Are you not able to get around 25G between the two VMs?
The only network element in between your VMs is a Cisco NCS5700, which should allow running at line rate.
It should work better if you send from the 25G NIC into the 100G NIC.
Have you tuned the VMs? I typically run a tuning script like this:
#!/bin/bash
# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 512MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 512MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control = bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing = 1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL
sysctl --system

# Turn on jumbo frames
for dev in $(basename -a /sys/class/net/*); do
    ip link set dev "$dev" mtu 9000
done

Tom
The FABRIC INDI dataplane is back in service. FABNet services should be working again.
FABRIC INDI dataplane is back in service. This topic will be closed.
The dataplane connection for the FABRIC INDI site is currently down.
This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.
Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.
The Indiana GigaPoP Network Operations Center is reporting a network outage for the fiber path from Indiana University to StarLight.
Updates will be provided to the FABRIC Announcements Forum.
The FABRIC MASS dataplane is back in service. This topic will be closed.
The FABRIC FIU dataplane is back in service. This topic will be closed.
April 9, 2025 at 9:52 pm in reply to: FABRIC SALT Dataplane Switch/Router Maintenance – April 9, 8pm ET #8425
The FABRIC SALT dataplane switch/router maintenance is complete. This topic will be closed.
The FABRIC INDI dataplane is back in service. This topic will be closed.
The FABRIC INDI dataplane connection is down due to a fiber cut. Updates will be provided on the dedicated posting “FABRIC INDI Dataplane is Down” in the FABRIC Announcements forum.
The dataplane connection to FABRIC TACC is back in service. This topic will be closed.
The FABRIC INDI dataplane link maintenance is complete, and link is back in service. This topic will be closed.
February 5, 2025 at 12:52 pm in reply to: FABRIC GPN dataplane scheduled downtime for maintenance #8184
The GPN dataplane is back in service. This topic will be closed.
Closing this topic, because the title includes two sites, and only the FABRIC TACC dataplane link is still down. A new topic will be opened for the FABRIC TACC dataplane.
I think the main consideration is that the HC objects are based on 64-bit counters, while the non-HC objects are based on 32-bit counters. On a high-speed interface a 32-bit counter may roll over quite often, so a 64-bit counter is more convenient in that you don't have to monitor for counter rollovers. The data, in terms of packet counts, should be equivalent.
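To put rough numbers on the rollover concern, here is a back-of-the-envelope sketch (not from the original post; the function name is mine) of how quickly a 32-bit octet counter such as ifInOctets wraps at line rate, versus its 64-bit ifHCInOctets counterpart:

```python
# Sketch: time until an SNMP octet counter wraps at full line rate.
# Assumes the counter counts octets, like ifInOctets (32-bit) or
# ifHCInOctets (64-bit).

def wrap_seconds(counter_bits: int, link_bps: float) -> float:
    """Seconds until a counter of `counter_bits` bits wraps at line rate."""
    bytes_per_second = link_bps / 8
    return (2 ** counter_bits) / bytes_per_second

# A 32-bit octet counter at 10 Gb/s wraps in a few seconds:
print(f"{wrap_seconds(32, 10e9):.1f} s")        # ~3.4 s
# A 64-bit octet counter at 100 Gb/s takes decades to wrap:
years = wrap_seconds(64, 100e9) / (365.25 * 24 * 3600)
print(f"{years:.1f} years")                      # ~46.8 years
```

So at the speeds discussed in this thread, a poller would have to sample a 32-bit counter more often than every few seconds to catch every wrap, while a 64-bit counter effectively never wraps.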
The fiber has been repaired in Los Angeles, and the FABRIC UCSD dataplane link is now back in service.
This topic will be closed.