Forum Replies Created
I think the out-of-sync issue should be cleared. Please try again and let us know if all looks OK. Thanks, Tom
December 23, 2025 at 12:13 pm in reply to: FABNetv4 Connectivity Stopped Working Between Storage VM and Sites #9316
The FABRIC MICH dataplane link is back in service. FABNetv4 services are working now.
December 23, 2025 at 11:27 am in reply to: FABNetv4 Connectivity Stopped Working Between Storage VM and Sites #9311
There was some maintenance last night on the underlying optical infrastructure. I will check on the status and report back here.
Tom
December 12, 2025 at 8:38 am in reply to: FABNetv4 Connectivity Stopped Working Between Storage VM and Multi-Sites #9259
There was some fiber maintenance last night, which may have caused this outage. It looks like the FABNetv4 service to FABRIC MICH is working again. Please test and verify. Thanks, Tom
The FABRIC RUTG Dataplane connection is back in service. This topic will be closed.
Hello, I think you will want to use FABlib to create a Layer 2 network to connect your VM interfaces. You should then be able to use IP multicast between VMs on that common Layer 2 broadcast domain. I have not tested this, so let us know if it works for you.
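Once both VMs are on the same Layer 2 network, a quick multicast check could look something like the sketch below. I have not run this myself; it assumes iperf version 2 (which supports multicast UDP, unlike iperf3) is installed on both VMs, the group address 239.1.1.1 and the rate/duration values are just placeholders, and you may need to adjust routing so the multicast traffic goes out the experiment interface rather than the management interface.

# On the receiving VM: join the multicast group and listen for UDP traffic
# (-B binds to / joins the group, -u selects UDP, -i 1 prints stats every second)
iperf -s -u -B 239.1.1.1 -i 1

# On the sending VM: send UDP traffic to the multicast group
# (-T 3 sets the multicast TTL, -b 10M caps the rate, -t 10 runs for 10 seconds)
iperf -c 239.1.1.1 -u -T 3 -b 10M -t 10

Tom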
The dataplane connection for the FABRIC TACC site is currently down.
This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.
Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.
Updates will be provided on this forum post.
The FABRIC FIU dataplane link is back in service. This topic will be closed.
The regional fiber break was repaired, and the FABRIC MICH dataplane link is back in service.
The FABRIC MICH dataplane link is back in service. This topic will be closed.
Hi Fatih,
I see you have a Layer 2 Bridge between two VMs, on different worker nodes, at site KANS. On one VM you have a 100G NIC, and on the other a 25G NIC. Are you not able to get around 25G between the two VMs?
The only network element in between your VMs is a Cisco NCS5700, which should allow running at line rate.
It should work better if you send from the 25G NIC into the 100G NIC.
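As a quick sanity check of the path, an iperf3 run like the sketch below would show what you are actually getting. The address 192.168.1.2 is just a placeholder for the dataplane IP of the VM with the 100G NIC, and the stream count and duration are arbitrary.

# On the VM with the 100G NIC (receiver):
iperf3 -s

# On the VM with the 25G NIC (sender), pointed at the receiver's dataplane IP:
# -P 4 runs four parallel TCP streams, -t 30 runs the test for 30 seconds
iperf3 -c 192.168.1.2 -P 4 -t 30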
Have you tuned the VMs? I typically run a tuning script like this:
#!/bin/bash
# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 128MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 64MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control=bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL
sysctl --system
# Turn on jumbo frames
for dev in $(basename -a /sys/class/net/*); do
ip link set dev $dev mtu 9000
done

Tom
The FABRIC INDI dataplane is back in service. FABNet services should be working again.
FABRIC INDI dataplane is back in service. This topic will be closed.
The dataplane connection for the FABRIC INDI site is currently down.
This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.
Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.
The Indiana GigaPoP Network Operations Center is reporting a network outage for the fiber path from Indiana University to StarLight.
Updates will be provided to the FABRIC Announcements Forum.
The FABRIC MASS dataplane is back in service. This topic will be closed.