yoursunny

Forum Replies Created

Viewing 15 posts - 31 through 45 (of 60 total)
  • in reply to: Why is NDN packets not going through my network #4596
    yoursunny
    Participant

      You are using a persistent face in NFD:

      face-created id=264 local=dev://ens7 remote=ether://[e8:eb:d3:81:b7:fe] persistency=persistent reliability=off congestion-marking=off congestion-marking-interval=100ms default-congestion-threshold=65536B mtu=1500

      This kind of face auto-closes upon a socket error.

      My guess is that the face experienced a socket error and was thus automatically closed. You can confirm or reject this hypothesis by looking at the output of the nfdc face list command and checking whether the face has disappeared.
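To script this check, you can parse the key=value fields that nfdc face list prints for each face. A minimal sketch; note the field names are assumptions based on nfdc's usual output, where the face list reports faceid= rather than the id= seen in the log line above:

```python
def parse_face_fields(line):
    """Split one nfdc output line into a dict of its key=value fields."""
    fields = {}
    for token in line.split():
        key, sep, value = token.partition("=")
        if sep:
            fields[key] = value
    return fields

def face_exists(face_list_output, face_id):
    """Return True if the given face ID appears in `nfdc face list` output."""
    return any(
        parse_face_fields(line).get("faceid") == str(face_id)
        for line in face_list_output.splitlines()
    )
```

You would feed it the captured command output, e.g. `face_exists(subprocess.check_output(["nfdc", "face", "list"], text=True), 264)`; a False result supports the socket-error hypothesis.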

      in reply to: Why is NDN packets not going through my network #4593
      yoursunny
      Participant

        Please upload your experiment notebook.

        If you typed commands into the SSH terminals to set up NDN software, please also describe exactly which commands were typed, and paste the output of each command.

        in reply to: FABNetv4Ext in non-Jupyter script #4267
        yoursunny
        Participant

          The infrastructure problem seems to be resolved.
          The second error (“actual result 2”) is no longer occurring.

          The first error (“actual result 1”) appears to be a fablib bug and still occurs.

          in reply to: Authentication failure while enabling public IPv4 #4210
          yoursunny
          Participant

            paramiko.ssh_exception.AuthenticationException: Authentication failed.

            This suggests that fablib cannot connect to either the bastion or the node via SSH.
            It has nothing to do with FABNetv4Ext.

            ssh ${Username}@${Management IP}

            This suggests that your fabric_rc file is outdated.
            You need to rerun the configure.ipynb notebook.

            See also: https://learn.fabric-testbed.net/forums/topic/broken-get_ssh_command/#post-3693
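A quick way to catch this symptom in a script is to check the generated SSH command for unexpanded ${...} template variables, which indicate a stale or unconfigured fabric_rc. A sketch; the regex is mine, not part of fablib:

```python
import re

# Unexpanded ${...} placeholders mean the template was never filled in.
PLACEHOLDER = re.compile(r"\$\{[^}]+\}")

def has_unexpanded_placeholders(command):
    """True if the command still contains ${...} template variables."""
    return PLACEHOLDER.search(command) is not None
```

If this returns True for the output of get_ssh_command(), rerun configure.ipynb before debugging anything else.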

            in reply to: Authentication failure while enabling public IPv4 #4195
            yoursunny
            Participant

              I tried my usual script for acquiring a public IPv4 address, operating on behalf of a project that has the Net.FABNetv4Ext permission.
              https://github.com/yoursunny/fabric/tree/5d434c3117314730a9ab38ffd4eefcab70f13779/ipv4 , see v4pub.py and demo-v4pub.py.
              It works correctly and can acquire public IPv4 addresses for nodes that need it.

              However, I’m having trouble with FABRIC’s jupyter-examples.
              https://github.com/fabric-testbed/jupyter-examples/blob/rel1.4.5/fabric_examples/beta_functionality/rel1.4/create_l3network_fabnet_ext.ipynb
              (I commented out the UKY line)

              For both networks defined in the notebook, get_subnet() returns None.
              Consequently, “Update Network Service – Enable/Disable Public IP Addresses” failed with error:

              TypeError: 'NoneType' object is not subscriptable
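The failure mode can be illustrated and guarded against without fablib. Below, StubNetwork is a hypothetical stand-in for a fablib network service object (not the real API), used only to show the None check that the notebook is missing, and pick_public_ips is a name I invented for the sketch:

```python
import ipaddress

class StubNetwork:
    """Hypothetical stand-in for a fablib network service (not the real API)."""
    def __init__(self, subnet):
        self._subnet = subnet
    def get_subnet(self):
        return self._subnet

def pick_public_ips(network, count=1):
    """Guard against get_subnet() returning None before indexing its hosts."""
    subnet = network.get_subnet()
    if subnet is None:
        raise RuntimeError("subnet not assigned yet; re-fetch the slice or retry")
    return list(subnet.hosts())[:count]
```

With a subnet assigned this yields usable addresses; with None it raises a descriptive error instead of the opaque TypeError above.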
              in reply to: IPv6 on FABRIC: A hop with a low MTU #4183
              yoursunny
              Participant

                We need to do some more testing for all the links in the network to see if we can find a single value that works everywhere.

                Use my script:

                https://github.com/yoursunny/fabric/blob/5d434c3117314730a9ab38ffd4eefcab70f13779/util/mtu.py

                in reply to: Enable DPDK on Fabric Nodes #4182
                yoursunny
                Participant

                  On nodes created some time ago using Debian OS 10 it was possible to check the active DPDK service (sudo service dpdk status). Which is currently not possible.

                  This just means that DPDK isn’t preinstalled, which is arguably a good thing, as there are many compile-time options that can be tuned for performance. You can install it yourself from DPDK source code.

                  in reply to: Authentication failure while enabling public IPv4 #4181
                  yoursunny
                  Participant

                    The FABNetv4Ext network service requires the Net.FABNetv4Ext permission.

                    If your project doesn’t have this permission, you’ll need to request it via ticket.

                    in reply to: IPv6 on FABRIC: A hop with a low MTU #4112
                    yoursunny
                    Participant

                      MTU is good now (except MASS).
                      I made a slice in every available location with the FABNetv4 network service and tested ping with a few MTUs (256, 1280, 1420, 1500, 8900, 8948, 9000).
                      They can all support MTU 8948 (IPv4 ping -s 8920), but not MTU 9000 (IPv4 ping -s 8972).

                      IPv4 ping MTU and RTT
                      src\dst  |   CERN   |   UCSD   |   DALL   |   NCSA   |   CLEM   |   TACC   |   MAX    |   WASH   |   GPN    |   INDI   |   FIU    |   MICH   |   MASS   |   UTAH   |   SALT   |   STAR  
                      ---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
                      CERN     | 9000   0 | 8948 148 | 8948 122 | 8948 105 | 8948 105 | 8948 128 | 8948  91 | 8948  88 | 8948 156 | 8948 107 | 8948 115 | 8948 108 | 1420 101 | 8948 133 | 8948 133 | 8948 102
                      UCSD     | 8948 148 | 9000   0 | 8948  43 | 8948  48 | 8948  76 | 8948  49 | 8948  62 | 8948  59 | 8948  37 | 8948  50 | 8948  86 | 8948  50 | 1420  72 | 8948  14 | 8948  14 | 8948  45
                      DALL     | 8948 122 | 8948  43 | 9000   0 | 8948  22 | 8948  50 | 8948   5 | 8948  36 | 8948  34 | 8948  51 | 8948  24 | 8948  60 | 8948  25 | 1420  46 | 8948  28 | 8948  28 | 8948  19
                      NCSA     | 8948 105 | 8948  48 | 8948  22 | 9000   0 | 8948  32 | 8948  28 | 8948  19 | 8948  16 | 8948  56 | 8948   7 | 8948  43 | 8948   7 | 1420  29 | 8948  33 | 8948  33 | 8948   2
                      CLEM     | 8948 105 | 8948  76 | 8948  50 | 8948  32 | 9000   0 | 8948  56 | 8948  19 | 8948  16 | 8948  84 | 8948  35 | 8948  43 | 8948  35 | 1420  28 | 8948  61 | 8948  61 | 8948  30
                      TACC     | 8948 128 | 8948  49 | 8948   5 | 8948  28 | 8948  56 | 9000   0 | 8948  42 | 8948  39 | 8948  57 | 8948  30 | 8948  66 | 8948  30 | 1420  52 | 8948  34 | 8948  34 | 8948  25
                      MAX      | 8948  91 | 8948  62 | 8948  36 | 8948  19 | 8948  19 | 8948  42 | 9000   0 | 8948   2 | 8948  70 | 8948  21 | 8948  29 | 8948  22 | 1420  15 | 8948  47 | 8948  47 | 8948  17
                      WASH     | 8948  88 | 8948  59 | 8948  34 | 8948  16 | 8948  16 | 8948  39 | 8948   2 | 9000   0 | 8948  67 | 8948  18 | 8948  26 | 8948  19 | 1420  12 | 8948  44 | 8948  44 | 8948  14
                      GPN      | 8948 156 | 8948  37 | 8948  51 | 8948  56 | 8948  84 | 8948  57 | 8948  70 | 8948  67 | 9000   0 | 8948  58 | 8948  94 | 8948  58 | 1420  80 | 8948  22 | 8948  23 | 8948  53
                      INDI     | 8948 107 | 8948  50 | 8948  24 | 8948   7 | 8948  35 | 8948  30 | 8948  21 | 8948  18 | 8948  58 | 9000   0 | 8948  45 | 8948   9 | 1420  31 | 8948  35 | 8948  35 | 8948   4
                      FIU      | 8948 115 | 8948  86 | 8948  60 | 8948  43 | 8948  43 | 8948  66 | 8948  29 | 8948  26 | 8948  94 | 8948  45 | 9000   0 | 8948  46 | 1420  39 | 8948  71 | 8948  71 | 8948  40
                      MICH     | 8948 108 | 8948  50 | 8948  25 | 8948   7 | 8948  35 | 8948  30 | 8948  22 | 8948  19 | 8948  58 | 8948   9 | 8948  46 | 9000   0 | 1420  31 | 8948  36 | 8948  35 | 8948   5
                      MASS     | 1420 101 | 1420  72 | 1420  46 | 1420  29 | 1420  28 | 1420  52 | 1420  15 | 1420  12 | 1420  80 | 1420  31 | 1420  39 | 1420  31 | 9000   0 | 1420  57 | 1420  57 | 1420  26
                      UTAH     | 8948 133 | 8948  14 | 8948  28 | 8948  33 | 8948  61 | 8948  34 | 8948  47 | 8948  44 | 8948  22 | 8948  35 | 8948  71 | 8948  36 | 1420  57 | 9000   0 | 8948   0 | 8948  30
                      SALT     | 8948 133 | 8948  14 | 8948  28 | 8948  33 | 8948  61 | 8948  34 | 8948  47 | 8948  44 | 8948  23 | 8948  35 | 8948  71 | 8948  35 | 1420  57 | 8948   0 | 9000   0 | 8948  30
                      STAR     | 8948 102 | 8948  45 | 8948  19 | 8948   2 | 8948  30 | 8948  25 | 8948  17 | 8948  14 | 8948  53 | 8948   4 | 8948  40 | 8948   5 | 1420  26 | 8948  30 | 8948  30 | 9000   0
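The mapping between MTU and the -s values quoted above (MTU 8948 ↔ ping -s 8920, MTU 9000 ↔ ping -s 8972) is plain header arithmetic; a small helper makes it explicit:

```python
# An IPv4 ICMP echo packet is:
#   20-byte IPv4 header + 8-byte ICMP header + payload.
IPV4_HEADER = 20
ICMP_HEADER = 8

def ping_payload(mtu):
    """`ping -s` payload size that makes the packet exactly fill the MTU."""
    return mtu - IPV4_HEADER - ICMP_HEADER
```

The same arithmetic explains the later replies: a maximum payload of 1472 means a 1500-byte path MTU, and 1424 means 1452.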
                      
                      in reply to: IPv6 on FABRIC: A hop with a low MTU #4080
                      yoursunny
                      Participant

                        An MTU issue was discovered between MASS and STAR on the experiment network.
                        I increased the MTU of every netif to 9000, but the largest IPv4 ping payload that can pass through is 1424 bytes.
                        Slice ID: 3b8d1e30-8c17-45b2-9e78-4e59f69cfc3e

                        ubuntu@NA:~$ ping -M do -c 4 -s 1424 192.168.8.2
                        PING 192.168.8.2 (192.168.8.2) 1424(1452) bytes of data.
                        1432 bytes from 192.168.8.2: icmp_seq=1 ttl=64 time=26.8 ms
                        1432 bytes from 192.168.8.2: icmp_seq=2 ttl=64 time=26.7 ms
                        1432 bytes from 192.168.8.2: icmp_seq=3 ttl=64 time=26.7 ms
                        1432 bytes from 192.168.8.2: icmp_seq=4 ttl=64 time=26.7 ms
                        
                        --- 192.168.8.2 ping statistics ---
                        4 packets transmitted, 4 received, 0% packet loss, time 3005ms
                        rtt min/avg/max/mdev = 26.659/26.695/26.765/0.041 ms
                        ubuntu@NA:~$ ping -M do -c 4 -s 1425 192.168.8.2
                        PING 192.168.8.2 (192.168.8.2) 1425(1453) bytes of data.
                        
                        --- 192.168.8.2 ping statistics ---
                        4 packets transmitted, 0 received, 100% packet loss, time 3050ms

                         

                        in reply to: IPv6 on FABRIC: A hop with a low MTU #4059
                        yoursunny
                        Participant

                          I’m seeing MTU issues on the data plane network between SALT and UTAH.
                          The scenario is using NIC_ConnectX_5 NICs and L2PTP network service.

                          I increased the MTU of the VLAN netifs to 9000 and assigned IPv4 addresses to both ends.
                          The maximum ICMP ping payload that can pass through is 1472 bytes.

                          ubuntu@9bf529e4-3efc-4988-a9f8-5089ccfa08af-nb:~$ ping -M do -c 4 -s 1472 192.168.8.1
                          PING 192.168.8.1 (192.168.8.1) 1472(1500) bytes of data.
                          1480 bytes from 192.168.8.1: icmp_seq=1 ttl=64 time=0.348 ms
                          1480 bytes from 192.168.8.1: icmp_seq=2 ttl=64 time=0.225 ms
                          1480 bytes from 192.168.8.1: icmp_seq=3 ttl=64 time=0.227 ms
                          1480 bytes from 192.168.8.1: icmp_seq=4 ttl=64 time=0.192 ms
                          
                          --- 192.168.8.1 ping statistics ---
                          4 packets transmitted, 4 received, 0% packet loss, time 3051ms
                          rtt min/avg/max/mdev = 0.192/0.248/0.348/0.059 ms
                          
                          ubuntu@9bf529e4-3efc-4988-a9f8-5089ccfa08af-nb:~$ ping -M do -c 4 -s 1473 192.168.8.1
                          PING 192.168.8.1 (192.168.8.1) 1473(1501) bytes of data.
                          
                          --- 192.168.8.1 ping statistics ---
                          4 packets transmitted, 0 received, 100% packet loss, time 3074ms
                          
                          
                          
                          in reply to: File save error and Load file error #3728
                          yoursunny
                          Participant

                            there is no way to upload files from local to Fabric nodes directly

                            It’s possible in two ways:

                            • Host your file with an HTTPS server somewhere on the Internet (with HTTP Basic authentication if desired), and download it on the nodes with wget command.
                            • Add the nodes into your local ~/.ssh/config with ProxyJump through the bastion, and then run scp to upload the file to the nodes.

                            I’ve done both in different experiments, but only the first one can be automated.
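For the second option, the ~/.ssh/config entries look roughly like this. All host aliases, usernames, and key paths here are placeholders; take the real bastion hostname, bastion username, and key locations from your fabric_rc:

```
Host fabric-bastion
    HostName <bastion hostname from fabric_rc>
    User <your bastion username>
    IdentityFile ~/.ssh/fabric_bastion_key

Host mynode
    HostName <node management IP>
    User ubuntu
    IdentityFile ~/.ssh/fabric_sliver_key
    ProxyJump fabric-bastion
```

After that, `scp myfile.tar.gz mynode:` uploads straight to the node through the bastion.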

                            in reply to: L2Bridge without MAC learning? #3695
                            yoursunny
                            Participant

                              NIC_Basic is a Virtual Function (VF) on the ConnectX-6 Ethernet adapter. The hardware Ethernet adapter is shared among many VFs, and it determines which VF shall receive an incoming packet by matching the destination address. Therefore, NIC_Basic cannot receive Ethernet frames whose destination address differs from its own address.

                              in reply to: L2Bridge without MAC learning? #3689
                              yoursunny
                              Participant

                                it seems as if packets are filtered by MAC learning on the L2Bridge type network

                                What observation led you to this conclusion?

                                What are you trying to do, how did it behave, and how do you expect it to behave?

                                yoursunny
                                Participant

                                  When you invoke slice.submit(), the slice object (and the associated nodes, links, intfs) will not auto-update.
                                  You need to call slice = fablib.get_slice(name=slice.get_name()) again (and re-retrieve the enclosed nodes, links, and intfs if needed) to obtain updated information.
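The pattern can be demonstrated with toy stand-ins. These classes are mine, not the fablib API; only the re-fetch-after-submit pattern carries over:

```python
# Toy stand-ins (NOT the real fablib API) illustrating why a slice object
# must be re-fetched after submit(): fetching returns a snapshot, and
# submit() changes server-side state without updating existing snapshots.
SERVER_STATE = {"myslice": {"node1": {"ip": None}}}

class FakeSlice:
    def __init__(self, name):
        self.name = name
        # snapshot of server state taken at fetch time
        self.nodes = {k: dict(v) for k, v in SERVER_STATE[name].items()}
    def get_name(self):
        return self.name
    def submit(self):
        # the server assigns resources; our local snapshot is now stale
        for node in SERVER_STATE[self.name].values():
            node["ip"] = "10.0.0.1"

def get_slice(name):
    return FakeSlice(name)

slice_obj = get_slice("myslice")
slice_obj.submit()
stale = slice_obj.nodes["node1"]["ip"]        # still None: stale snapshot
slice_obj = get_slice(slice_obj.get_name())   # re-fetch, as advised above
fresh = slice_obj.nodes["node1"]["ip"]        # now "10.0.0.1"
```

The same discipline applies to node, link, and interface objects held from before the submit.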
