Tom Lehman

Forum Replies Created

Viewing 15 posts - 16 through 30 (of 41 total)
  • in reply to: FABRIC MICH dataplane connection is currently down #8704
    Tom Lehman
    Participant

      The FABRIC MICH dataplane link is back in service. This topic will be closed.

      in reply to: Interconnection Details Between Hosts at the Same Site #8662
      Tom Lehman
      Participant

        Hi Fatih,

I see you have a Layer 2 Bridge between two VMs, on different worker nodes, at site KANS.  On one VM you have a 100G NIC; on the other, a 25G NIC.  Are you not able to get around 25G between the two VMs?

        The only network element in between your VMs is a Cisco NCS5700, which should allow running at line rate.

It should work better if you send from the 25G NIC into the 100G NIC.

Have you tuned the VMs?  I typically run a tuning script like this:

#!/bin/bash

# Linux host tuning from https://fasterdata.es.net/host-tuning/linux/
cat >> /etc/sysctl.conf <<EOL
# allow testing with buffers up to 512MB
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# increase Linux autotuning TCP buffer limit to 512MB
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# recommended default congestion control is htcp or bbr
net.ipv4.tcp_congestion_control = bbr
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing = 1
# recommended to enable 'fair queueing'
net.core.default_qdisc = fq
#net.core.default_qdisc = fq_codel
EOL

sysctl --system

# Turn on jumbo frames on every interface except loopback
for dev in $(basename -a /sys/class/net/*); do
    [ "$dev" = "lo" ] && continue
    ip link set dev "$dev" mtu 9000
done
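As a side note, the buffer ceilings in a script like this come from bandwidth-delay-product math: to keep a path full, the TCP window must cover rate times round-trip time. A rough sketch (the 100 Gb/s rate and 40 ms RTT here are just example numbers, not measurements from your slice):

```shell
# bandwidth-delay product: bytes of buffer needed to keep a path full
bdp_bytes() {  # $1 = line rate in bits/s, $2 = round-trip time in seconds
    awk -v r="$1" -v t="$2" 'BEGIN { printf "%d\n", r / 8 * t }'
}

bdp_bytes 100000000000 0.04   # 100 Gb/s at 40 ms RTT -> 500000000 (~500 MB)
```

A cross-country 100G path can therefore need buffers in the hundreds of megabytes, which is why the sysctl values above go well past the Linux defaults.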

        Tom

        in reply to: INDI site, can’t route IPv4 (again) #8645
        Tom Lehman
        Participant

          The FABRIC INDI dataplane is back in service.  FABNet services should be working again.

          in reply to: FABRIC INDI dataplane connection currently down #8644
          Tom Lehman
          Participant

The FABRIC INDI dataplane is back in service.  This topic will be closed.

            in reply to: INDI site, can’t route IPv4 (again) #8637
            Tom Lehman
            Participant

              The dataplane connection for the FABRIC INDI site is currently down.

              This only impacts slices which are using FABNet services that go over the dataplane connection to other FABRIC sites.

              Access to slice Virtual Machines, and FABRIC services within this FABRIC site, should work normally.

              The Indiana GigaPoP Network Operations Center is reporting a network outage for the fiber path from Indiana University to StarLight.


              Updates will be provided to the FABRIC Announcements Forum.

              in reply to: FABRIC MASS dataplane connection is currently down #8626
              Tom Lehman
              Participant

The FABRIC MASS dataplane is back in service.  This topic will be closed.

                in reply to: FABRIC FIU Dataplane connection is down #8447
                Tom Lehman
                Participant

                  The FABRIC FIU dataplane is back in service.  This topic will be closed.

                  Tom Lehman
                  Participant

                    The FABRIC SALT dataplane switch/router maintenance is complete.  This topic will be closed.

                    in reply to: FABRIC INDI Dataplane connection is down #8379
                    Tom Lehman
                    Participant

                      The FABRIC INDI dataplane is back in service.  This topic will be closed.

                      in reply to: INDI site, can’t route IPv4 #8374
                      Tom Lehman
                      Participant

The FABRIC INDI dataplane connection is down due to a fiber cut.  Updates will be provided on the dedicated posting "FABRIC INDI Dataplane is Down" in the FABRIC Announcements forum.

                        in reply to: FABRIC TACC dataplane down #8342
                        Tom Lehman
                        Participant

                          The dataplane connection to FABRIC TACC is back in service.  This topic will be closed.

                          in reply to: FABRIC INDI Dataplane link – scheduled maintenance #8277
                          Tom Lehman
                          Participant

                            The FABRIC INDI dataplane link maintenance is complete, and link is back in service.  This topic will be closed.

                            in reply to: FABRIC GPN dataplane scheduled downtime for maintenance #8184
                            Tom Lehman
                            Participant

The GPN dataplane is back in service.  This topic will be closed.

                              in reply to: FABRIC TACC and FABRIC INDI dataplane links are down #8178
                              Tom Lehman
                              Participant

Closing this topic because the title covers two sites, and only the FABRIC TACC dataplane link is still down.  A new topic will be opened for the FABRIC TACC dataplane.

                                in reply to: Infrastructure-metrics queries #7867
                                Tom Lehman
                                Participant

I think the main consideration is that the HC variants are based on 64-bit counters, while the non-HC variants are based on 32-bit counters.  On a high-speed interface a 32-bit counter may roll over quite often, so a 64-bit counter is more convenient in that you don't have to watch for counter rollovers.  The data, in terms of packet and octet counts, should be equivalent.
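As a back-of-the-envelope check on why rollovers matter (the line rates below are just examples), the wrap time of a 32-bit octet counter is 2^32 bytes divided by the link's byte rate:

```shell
# seconds until a 32-bit octet counter wraps at a given line rate
wrap_seconds() {  # $1 = line rate in bits per second
    awk -v rate="$1" 'BEGIN { printf "%.1f\n", (2^32 * 8) / rate }'
}

wrap_seconds 100000000000   # 100 Gb/s: wraps in well under a second
wrap_seconds 1000000000     # 1 Gb/s: wraps in roughly half a minute
```

At 100 Gb/s the 32-bit counter wraps faster than any practical polling interval, so the 64-bit HC counters are effectively required there.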
