Tom Lehman

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 35 total)
  • in reply to: Fabric/Cloudlab facility port unresponsive #9328
    Tom Lehman
    Participant

      I think the out-of-sync issue should be cleared; please try again and let us know if everything looks OK.  Thanks, Tom

      Tom Lehman
      Participant

        The FABRIC MICH dataplane link is back in service.  FABNetv4 services are working now.

        Tom Lehman
        Participant

          There was some maintenance last night on the underlying optical infrastructure.  I will check on the status and report back here.

          Tom

          Tom Lehman
          Participant

            There was some fiber maintenance last night, which may have caused this outage.  It looks like the FABNetv4 service to FABRIC MICH is working again.  Please test and verify.  Thanks, Tom

            in reply to: FABRIC RUTG dataplane connection is currently down #9088
            Tom Lehman
            Participant

              The FABRIC RUTG Dataplane connection is back in service.  This topic will be closed.

              in reply to: Using multicast in FABRIC #9065
              Tom Lehman
              Participant

                Hello, I think you will want to use FABlib to create a Layer 2 network to connect your VM interfaces. You should then be able to use IP multicast between VMs on that common Layer 2 broadcast domain.  I have not tested this, so let us know if that works for you.  Tom
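
                If you want a quick way to check multicast once the Layer 2 network is up, something like the following should work. This is only a rough sketch (not tested here): it assumes iperf version 2 is installed on both VMs, since iperf3 does not support multicast, and the 239.1.1.1 group address is just an example.

# Both VMs need an IP address configured on their Layer 2 network interface first.

# On the receiving VM: join the example multicast group and listen for UDP traffic
iperf -s -u -B 239.1.1.1 -i 1

# On the sending VM: send UDP traffic to the same multicast group
# (-T sets the multicast TTL; -b sets the offered UDP bandwidth)
iperf -c 239.1.1.1 -u -T 4 -t 10 -i 1 -b 100M

# If the traffic leaves via the wrong interface, also bind the client to the VM's
# dataplane address with -B <local-dataplane-IP> (placeholder).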

                in reply to: FABRIC TACC – Power Outage #9057
                Tom Lehman
                Participant

                  The dataplane connection for the FABRIC TACC site is currently down.

                  This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.

                  Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.

                  Updates will be provided on this forum post.

                  in reply to: FABRIC FIU Dataplane connection is down #8917
                  Tom Lehman
                  Participant

                    The FABRIC FIU dataplane link is back in service.  This topic will be closed.

                    in reply to: FABRIC MICH dataplane connection is currently down #8705
                    Tom Lehman
                    Participant

                      The regional fiber break has been repaired, and the FABRIC MICH dataplane link is back in service.

                      in reply to: FABRIC MICH dataplane connection is currently down #8704
                      Tom Lehman
                      Participant

                        The FABRIC MICH dataplane link is back in service. This topic will be closed.

                        in reply to: Interconnection Details Between Hosts at the Same Site #8662
                        Tom Lehman
                        Participant

                          Hi Fatih,

                          I see you have a Layer 2 bridge between two VMs, on different worker nodes, at the KANS site.  On one VM you have a 100G NIC, and on the other a 25G NIC.  Are you not able to get around 25G between the two VMs?

                          The only network element in between your VMs is a Cisco NCS5700, which should allow running at line rate.

                          It should work better if you send from the 25G NIC into the 100G NIC.

                          Have you tuned the VMs?  I typically run a tuning script like this:

#!/bin/bash

# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 512MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 512MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control=bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL

sysctl --system

# Turn on jumbo frames on all interfaces
for dev in $(basename -a /sys/class/net/*); do
    ip link set dev "$dev" mtu 9000
done
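
                          After applying the tuning, a rough way to check throughput between the two VMs is a parallel-stream TCP test with iperf3 (a sketch only; the receiver address below is a placeholder for that VM's dataplane IP):

# On the receiving VM: start an iperf3 server
iperf3 -s

# On the sending VM (sending from the 25G NIC toward the 100G NIC):
# -P 4 runs four parallel TCP streams, -t 30 runs the test for 30 seconds
iperf3 -c <receiver-dataplane-IP> -P 4 -t 30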

                          Tom

                          in reply to: INDI site, can’t route IPv4 (again) #8645
                          Tom Lehman
                          Participant

                            The FABRIC INDI dataplane is back in service.  FABNet services should be working again.

                            in reply to: FABRIC INDI dataplane connection currently down #8644
                            Tom Lehman
                            Participant

                              The FABRIC INDI dataplane is back in service.  This topic will be closed.

                              in reply to: INDI site, can’t route IPv4 (again) #8637
                              Tom Lehman
                              Participant

                                The dataplane connection for the FABRIC INDI site is currently down.

                                This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.

                                Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.

                                The Indiana GigaPoP Network Operations Center is reporting a network outage for the fiber path from Indiana University to StarLight.

                                Updates will be provided to the FABRIC Announcements Forum.

                                in reply to: FABRIC MASS dataplane connection is currently down #8626
                                Tom Lehman
                                Participant

                                  The FABRIC MASS dataplane is back in service.  This topic will be closed.

                                Viewing 15 posts - 1 through 15 (of 35 total)