Forum Replies Created
The FABRIC RUTG Dataplane connection is back in service. This topic will be closed.
Hello, I think you will want to use FABlib to create a Layer 2 network connecting your VM interfaces. Then you should be able to use IP multicast between VMs on that common Layer 2 broadcast domain. I have not tested this, so let us know if it works for you. Tom
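To illustrate the idea, here is a minimal, hedged sketch of plain IP multicast between hosts on a shared Layer 2 segment. It is not FABRIC-specific and uses only the standard socket API; the group address and port are arbitrary choices for illustration, and the sender/receiver would normally run on different VMs:

```python
import socket
import struct

GROUP = "224.1.1.1"   # example group in 224.0.0.0/4; pick one for your experiment
PORT = 5007           # arbitrary port for illustration

# Receiver: bind the port and join the multicast group on all interfaces.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Sender: TTL 1 keeps the traffic on the local broadcast domain.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)  # allows single-host testing
tx.sendto(b"hello from the L2 segment", (GROUP, PORT))

data, addr = rx.recvfrom(1024)
print(data.decode())
```

On FABRIC you would run the receiver on one VM and the sender on another, both attached to the same Layer 2 network created with FABlib.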
The dataplane connection for the FABRIC TACC site is currently down.
This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.
Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.
Updates will be provided on this forum post.
The FABRIC FIU dataplane link is back in service. This topic will be closed.
The regional fiber break has been repaired, and the FABRIC MICH dataplane link is back in service.
The FABRIC MICH dataplane link is back in service. This topic will be closed.
Hi Fatih,
I see you have a Layer 2 Bridge between two VMs, on different worker nodes, at site KANS. On one VM you have a 100G NIC; the other has a 25G NIC. Are you not able to get around 25 Gbps between the two VMs?
The only network element in between your VMs is a Cisco NCS5700, which should allow running at line rate.
It should work better if you send from the 25G NIC into the 100G NIC.
Have you tuned the VMs? I typically run a tuning script like this:
#!/bin/bash
# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 128MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 64MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control=bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL
sysctl --system
# Turn on jumbo frames
for dev in $(basename -a /sys/class/net/*); do
    ip link set dev $dev mtu 9000
done

Tom
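If it helps, here is a small illustrative snippet (an assumption on my part, not part of the tuning script) that reads the same kernel parameters back through procfs on Linux, to confirm the settings took effect:

```python
from pathlib import Path

def read_sysctl(name: str) -> str:
    """Read a sysctl value via /proc/sys (dots become path separators)."""
    return Path("/proc/sys", *name.split(".")).read_text().strip()

# Parameters the tuning script above sets
for key in ("net.core.rmem_max",
            "net.core.wmem_max",
            "net.ipv4.tcp_congestion_control",
            "net.core.default_qdisc"):
    print(f"{key} = {read_sysctl(key)}")
```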
The FABRIC INDI dataplane is back in service. FABNet services should be working again.
FABRIC INDI dataplane is back in service. This topic will be closed.
The dataplane connection for the FABRIC INDI site is currently down.
This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.
Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.
The Indiana GigaPoP Network Operations Center is reporting a network outage for the fiber path from Indiana University to StarLight.
Updates will be provided to the FABRIC Announcements Forum.
The FABRIC MASS dataplane is back in service. This topic will be closed.
The FABRIC FIU dataplane is back in service. This topic will be closed.
April 9, 2025 at 9:52 pm, in reply to: FABRIC SALT Dataplane Switch/Router Maintenance – April 9, 8pm ET (#8425)
The FABRIC SALT dataplane switch/router maintenance is complete. This topic will be closed.
The FABRIC INDI dataplane is back in service. This topic will be closed.
The FABRIC INDI dataplane connection is down due to a fiber cut. Updates will be provided on the dedicated posting “FABRIC INDI Dataplane is Down” in the FABRIC Announcements forum.