Fengping Hu

Forum Replies Created

Viewing 15 posts - 16 through 30 (of 41 total)
  • in reply to: manual cleanup needed? #4568
    Fengping Hu
    Participant

      Hi Komal,

I tried to recreate the slice requesting only 100G disks, but it still fails.

The portal that shows dead slices works. It now lists 2 dead slices and 6 configuring slices for me. Is there a way for me to delete all of them? I wonder whether these dead slices are still holding resources and preventing them from becoming available.

      Thanks,

      Fengping


      in reply to: lost management network connection #4229
      Fengping Hu
      Participant

Thanks, Ilya. Bringing the link down and up with ip link brought the management network back online. I do need to re-add the policy-based routing afterwards, since taking the link down seems to clear it, but that's not a problem. Given what you said, I'll just deal with any blips manually and not worry about it too much.
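In case it helps anyone else, the rules can also be persisted so they survive a link flap. A minimal sketch, assuming systemd-networkd manages ens3 (the systemctl output later in this thread shows networkd running) and using the admin-table rule and gateway from the ip -6 rule and route output below; the file name and the table number 100 are placeholders (use whatever number "admin" maps to in /etc/iproute2/rt_tables):

```ini
# /etc/systemd/network/10-ens3.network  (hypothetical file name)
[Match]
Name=ens3

# Re-create the policy rule "from 2001:400:a100:3090::/64 lookup admin"
[RoutingPolicyRule]
From=2001:400:a100:3090::/64
Table=100
Priority=32762

# Default route for the management table, via the router's link-local address
[Route]
Gateway=fe80::f816:3eff:feac:1ca0
Table=100
```

With something like this in place, networkd reapplies the rule and route whenever the link comes back up.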

        Fengping

        in reply to: exceptions when adding a node to an existing slice #4228
        Fengping Hu
        Participant

Nice. I tried the set_fablib_data function after running the OS commands and it worked well, so I can implement the NIC-to-network mapping again. Thank you so much!

          Fengping

          in reply to: lost management network connection #4220
          Fengping Hu
          Participant

Thanks for the clarification. We do still have access to the VM via the data plane public IP, so I think our setup is resilient in that sense.

            Thanks,
            Fengping

            in reply to: exceptions when adding a node to an existing slice #4214
            Fengping Hu
            Participant

              That worked nicely. Thanks!

Another challenge I have is implementing the same NIC-to-network mapping across all VMs in the slice. For example, NET1 is on ens7, NET2 is on ens8, and so on. With the previous API I was able to do this with the following code block. With the new API, the node.get_interface call fails with an "Interface not found" exception after the interface has been renamed in the OS. It seems the new API keeps the interface name in its data structure and does not refresh it after the name changes in the OS.

The goal is just to get the same NIC-to-network mapping everywhere; we don't have to use the following code. Thanks!

import traceback

try:
    for node in slice.get_nodes():
        # Move the existing interfaces to temporary names so the target names are free.
        stdout, stderr = node.execute('sudo ip link set ens7 name temp1')
        stdout, stderr = node.execute('sudo ip link set ens8 name temp2')
        stdout, stderr = node.execute('sudo ip link set ens9 name temp3')
        # Rename each network's interface to its fixed name.
        for if_name, net_name in zip(["ens7", "ens8", "ens9"], ["NET1", "NET2", "NET3"]):
            iface = node.get_interface(network_name=net_name)
            if_name_tmp = iface.get_device_name()
            node.execute(f'sudo ip link set {if_name_tmp} name {if_name}')
except Exception as e:
    print(f"Fail: {e}")
    traceback.print_exc()

              in reply to: lost management network connection #4196
              Fengping Hu
              Participant

                Hi David,

Yep. It's an Ubuntu system.

                root@node1:/home/ubuntu# ip -6 rule list
                0: from all lookup local
                32761: from 2602:fcfb:100::/64 lookup v6peering
                32762: from 2001:400:a100:3090::/64 lookup admin
                32766: from all lookup main
                root@node1:/home/ubuntu# ip -6 route show table admin
                default via fe80::f816:3eff:feac:1ca0 dev ens3 metric 1024 pref medium

                Thanks,
                Fengping

                in reply to: lost management network connection #4190
                Fengping Hu
                Participant

It appears the link-local address of the router is stale, so it looks as if the node lost its layer 2 connection. The policy-based routing looks fine and is the same as on the other two nodes, which are working.

                  ip -6 nei
                  fe80::f816:3eff:feac:1ca0 dev ens3 lladdr fa:16:3e:ac:1c:a0 router STALE
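For anyone hitting the same symptom, the interesting part of that table is the NUD state in the last column. A small, hypothetical helper that filters plain ip -6 neigh output (note that STALE on its own is normal after idle time; it's a STALE or FAILED entry for the gateway specifically that suggests a layer 2 problem):

```python
# Hypothetical helper: pull the suspect entries out of `ip -6 neigh` output.
def suspect_neighbors(ip_neigh_output):
    """Return (address, device, state) tuples for entries not in a healthy state."""
    bad = []
    for line in ip_neigh_output.splitlines():
        fields = line.split()
        if not fields:
            continue
        state = fields[-1]  # the NUD state is the last column
        if state in ("STALE", "FAILED", "INCOMPLETE"):
            device = fields[fields.index("dev") + 1] if "dev" in fields else "?"
            bad.append((fields[0], device, state))
    return bad

sample = "fe80::f816:3eff:feac:1ca0 dev ens3 lladdr fa:16:3e:ac:1c:a0 router STALE"
print(suspect_neighbors(sample))
```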

                  in reply to: lost management network connection #4188
                  Fengping Hu
                  Participant

Yep. We configured the default route to be on the public network on the data plane; the management network is reached via policy-based routing.

                    in reply to: lost management network connection #4186
                    Fengping Hu
                    Participant

                      Hi Mert,

                      Here are the outputs of those commands.

                      Thanks,
                      Fengping

root@node1:/home/ubuntu# ip a
                      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
                      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
                      inet 127.0.0.1/8 scope host lo
                      valid_lft forever preferred_lft forever
                      inet6 ::1/128 scope host
                      valid_lft forever preferred_lft forever
                      2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
                      link/ether fa:16:3e:5c:dd:28 brd ff:ff:ff:ff:ff:ff
                      inet 10.30.6.141/23 brd 10.30.7.255 scope global dynamic ens3
                      valid_lft 76573sec preferred_lft 76573sec
                      inet6 2001:400:a100:3090:f816:3eff:fe5c:dd28/64 scope global dynamic mngtmpaddr noprefixroute
                      valid_lft 86351sec preferred_lft 14351sec
                      inet6 fe80::f816:3eff:fe5c:dd28/64 scope link
                      valid_lft forever preferred_lft forever
                      3: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
                      link/ether 02:34:c0:a0:19:80 brd ff:ff:ff:ff:ff:ff
                      inet 10.143.1.2/24 scope global ens7
                      valid_lft forever preferred_lft forever
                      inet6 fe80::34:c0ff:fea0:1980/64 scope link
                      valid_lft forever preferred_lft forever
                      4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
                      link/ether 02:78:91:60:1e:80 brd ff:ff:ff:ff:ff:ff
                      inet6 2602:fcfb:100::10/64 scope global
                      valid_lft forever preferred_lft forever
                      inet6 fe80::78:91ff:fe60:1e80/64 scope link
                      valid_lft forever preferred_lft forever
                      5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
                      link/ether 0a:39:34:14:6a:36 brd ff:ff:ff:ff:ff:ff
                      inet6 2602:fcfb:1d:2::2/64 scope global
                      valid_lft forever preferred_lft forever
                      inet6 fe80::839:34ff:fe14:6a36/64 scope link
                      valid_lft forever preferred_lft forever
                      6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
                      link/ether f2:c7:6e:00:6e:83 brd ff:ff:ff:ff:ff:ff
                      inet 10.233.0.1/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.0.3/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.42.207/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.29.52/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.45.34/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.16.205/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.61.6/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.17.21/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.46.5/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.51.50/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.15.146/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.8.168/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.60.15/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.24.72/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.57.146/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.11.198/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.39.214/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.30.20/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.23.34/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.53.112/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.55.86/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.7.33/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.1.29/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet 10.233.32.37/32 scope global kube-ipvs0
                      valid_lft forever preferred_lft forever
                      inet6 2602:fcfb:1d:2::31/128 scope global
                      valid_lft forever preferred_lft forever
                      inet6 fd85:ee78:d8a6:8607::11d7/128 scope global
                      valid_lft forever preferred_lft forever
                      inet6 2602:fcfb:1d:2::30/128 scope global
                      valid_lft forever preferred_lft forever
                      inet6 fd85:ee78:d8a6:8607::1417/128 scope global
                      valid_lft forever preferred_lft forever
                      10: cali5260a6a960b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-f19391b0-0d22-5e0a-091b-f26982131f4a
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      11: nodelocaldns: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
                      link/ether e6:a9:23:78:b6:d9 brd ff:ff:ff:ff:ff:ff
                      inet 169.254.25.10/32 scope global nodelocaldns
                      valid_lft forever preferred_lft forever
                      23: califc30ed8b084@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-0e75fb3c-1182-2e6e-16c9-80ba6ce3844f
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      821: cali0894f966cdf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-ee474322-80e1-7721-a798-bc29a08f2339
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      855: cali68b9b56a958@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-e232ac9b-3c1d-fe2e-dd33-cc9f06d121a2
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      741: cali3de775f9cb2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-3b3f6547-73f2-7f83-add2-d43a23bc1a83
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      747: cali0fe3bc102a1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-9d7ab47a-4e5c-cb1b-86da-65fb323732fb
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      754: calif913c9d9a21@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-7b67f49b-0c8a-cf9d-e976-0439a6057cdd
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      767: calieb3159a0e6d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
                      link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-0841de8e-b94c-b3cb-50d0-60b2156da119
                      inet6 fe80::ecee:eeff:feee:eeee/64 scope link
                      valid_lft forever preferred_lft forever
                      root@node1:/home/ubuntu# ip route
                      default via 10.143.1.1 dev ens7
                      10.30.6.0/23 dev ens3 proto kernel scope link src 10.30.6.141
                      10.143.1.0/24 dev ens7 proto kernel scope link src 10.143.1.2
                      10.233.71.0/26 via 10.143.1.4 dev ens7 proto bird
                      10.233.74.64/26 via 10.143.1.5 dev ens7 proto bird
                      10.233.75.0/26 via 10.143.1.3 dev ens7 proto bird
                      blackhole 10.233.102.128/26 proto bird
                      10.233.102.129 dev cali5260a6a960b scope link
                      10.233.102.134 dev cali68b9b56a958 scope link
                      10.233.102.139 dev califc30ed8b084 scope link
                      10.233.102.161 dev cali3de775f9cb2 scope link
                      10.233.102.166 dev cali0fe3bc102a1 scope link
                      10.233.102.173 dev calif913c9d9a21 scope link
                      10.233.102.185 dev cali0894f966cdf scope link
                      10.233.102.188 dev calieb3159a0e6d scope link
                      root@node1:/home/ubuntu# ip -6 route
                      ::1 dev lo proto kernel metric 256 pref medium
                      2001:400:a100:3090::/64 dev ens3 proto ra metric 100 expires 86399sec pref medium
                      2602:fcfb:1d:2::/64 dev ens9 proto kernel metric 256 pref medium
                      2602:fcfb:1d:2::/64 dev ens9 metric 1024 pref medium
                      2602:fcfb:100::/64 dev ens8 proto kernel metric 256 pref medium
                      2602:fcfb:100::/64 dev ens8 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:340/122 via 2602:fcfb:1d:2::5 dev ens9 proto bird metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:6800/122 via 2602:fcfb:1d:2::3 dev ens9 proto bird metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:8700/122 via 2602:fcfb:1d:2::4 dev ens9 proto bird metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a681 dev cali5260a6a960b metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a686 dev cali68b9b56a958 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a68b dev califc30ed8b084 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a6a1 dev cali3de775f9cb2 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a6a6 dev cali0fe3bc102a1 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a6ad dev calif913c9d9a21 metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a6b9 dev cali0894f966cdf metric 1024 pref medium
                      fd85:ee78:d8a6:8607::1:a6bc dev calieb3159a0e6d metric 1024 pref medium
                      blackhole fd85:ee78:d8a6:8607::1:a680/122 dev lo proto bird metric 1024 pref medium
                      fe80::a9fe:a9fe via fe80::f816:3eff:feac:1ca0 dev ens3 proto ra metric 1024 expires 299sec pref medium
                      fe80::/64 dev ens3 proto kernel metric 256 pref medium
                      fe80::/64 dev ens7 proto kernel metric 256 pref medium
                      fe80::/64 dev ens8 proto kernel metric 256 pref medium
                      fe80::/64 dev ens9 proto kernel metric 256 pref medium
                      fe80::/64 dev cali5260a6a960b proto kernel metric 256 pref medium
                      fe80::/64 dev califc30ed8b084 proto kernel metric 256 pref medium
                      fe80::/64 dev cali3de775f9cb2 proto kernel metric 256 pref medium
                      fe80::/64 dev cali0fe3bc102a1 proto kernel metric 256 pref medium
                      fe80::/64 dev calif913c9d9a21 proto kernel metric 256 pref medium
                      fe80::/64 dev calieb3159a0e6d proto kernel metric 256 pref medium
                      fe80::/64 dev cali0894f966cdf proto kernel metric 256 pref medium
                      fe80::/64 dev cali68b9b56a958 proto kernel metric 256 pref medium
                      default via 2602:fcfb:1d:2::1 dev ens9 metric 10 pref medium
                      default via fe80::f816:3eff:feac:1ca0 dev ens3 proto ra metric 100 expires 299sec mtu 9000 pref medium
                      root@node1:/home/ubuntu# ip -6 rule show
                      0: from all lookup local
                      32761: from 2602:fcfb:100::/64 lookup v6peering
                      32762: from 2001:400:a100:3090::/64 lookup admin
                      32766: from all lookup main
                      root@node1:/home/ubuntu# ip -6 route show table admin
                      default via fe80::f816:3eff:feac:1ca0 dev ens3 metric 1024 pref medium

                      root@node1:/home/ubuntu# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
                      Loaded: loaded (/lib/systemd/system/systemd-networkd.service; enabled; vendor preset: enabled)
                      Active: active (running) since Fri 2023-04-07 06:46:29 UTC; 1 months 2 days ago
                      TriggeredBy: ● systemd-networkd.socket
                      Docs: man:systemd-networkd.service(8)
                      Main PID: 313692 (systemd-network)
Status: "Processing requests…"
                      Tasks: 1 (limit: 464085)
                      Memory: 21.9M
                      CGroup: /system.slice/systemd-networkd.service
                      └─313692 /lib/systemd/systemd-networkd

                      May 09 19:29:21 node1 systemd-networkd[313692]: califcd611d2e5a: Lost carrier
                      May 09 19:29:23 node1 systemd-networkd[313692]: cali6d43fe7c75d: Link DOWN
                      May 09 19:29:23 node1 systemd-networkd[313692]: cali6d43fe7c75d: Lost carrier
                      May 09 19:54:08 node1 systemd-networkd[313692]: cali0905f2a129d: Link DOWN
                      May 09 19:54:08 node1 systemd-networkd[313692]: cali0905f2a129d: Lost carrier
                      May 09 19:54:45 node1 systemd-networkd[313692]: calidba7a9afee0: Link DOWN
                      May 09 19:54:45 node1 systemd-networkd[313692]: calidba7a9afee0: Lost carrier
                      May 09 19:55:37 node1 systemd-networkd[313692]: cali68b9b56a958: Link UP
                      May 09 19:55:37 node1 systemd-networkd[313692]: cali68b9b56a958: Gained carrier
                      May 09 19:55:39 node1 systemd-networkd[313692]: cali68b9b56a958: Gained IPv6LL

                      in reply to: Duplicate ipv6 ips(one is external facing) in two slices #4016
                      Fengping Hu
                      Participant

                        Hi Komal,

Thanks for the details. I just picked the number 100 at random; I believe 90 addresses should be enough for my use case.

Thank you so much for getting this figured out so quickly.

                        Have a good weekend!
                        Fengping

                        in reply to: Duplicate ipv6 ips(one is external facing) in two slices #4012
                        Fengping Hu
                        Participant

                          Hi Komal,

Thanks for getting a fix in so quickly. It seems I still have some issues (possibly a different issue).

I created the slice and then called the change_public_ip function, after which the whole network service disappeared from the slice. One thing to note is that I used network3.change_public_ip(ipv6=list(map(str,network3_available_ips[0:100])))

                          instead of

                          network3.change_public_ip(ipv6=[str(network3_available_ips[0])]).

My intention is to change_public_ip for 100 addresses instead of just one. Since the ipv6 parameter is a list, I assumed this would work.
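To be concrete about the payload shapes, here is a sketch using the stdlib ipaddress module to stand in for network3_available_ips (which I'm assuming fablib returns as address objects that str() converts); in both cases the ipv6 argument is just a list of strings:

```python
from ipaddress import IPv6Network

# NET3's subnet, from the slice listing below.
subnet = IPv6Network("2602:fcfb:1d:3::/64")
hosts = subnet.hosts()
available = [next(hosts) for _ in range(100)]  # stand-in for network3_available_ips

single = [str(available[0])]                # the one-address form
batch = list(map(str, available[0:100]))    # the 100-address form I used

print(single, len(batch))
```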

                          Here’s the slice information.

                          ID
                          1fe8c351-e6be-4205-9c23-e77b9ccd8a41
                          Name
                          ServiceXSlice1

                          Network service before the change_public_ip call
                          b2483916-e980-490d-9315-c7705d7264c0 NET1 L3 FABNetv4 CERN 10.143.2.1 10.143.2.0/24 Active
                          0ea13f16-c58b-4ef0-945c-d6c92493190a NET2 L3 FABNetv6 CERN 2602:fcfb:1d:2::1 2602:fcfb:1d:2::/64 Active
                          4a338812-470d-41fc-a510-145dfd775816 NET3 L3 FABNetv6Ext CERN 2602:fcfb:1d:3::1 2602:fcfb:1d:3::/64 Active

                          Network service after the change_public_ip call
                          b2483916-e980-490d-9315-c7705d7264c0 NET1 L3 FABNetv4 CERN 10.143.2.1 10.143.2.0/24 Active
                          0ea13f16-c58b-4ef0-945c-d6c92493190a NET2 L3 FABNetv6 CERN 2602:fcfb:1d:2::1 2602:fcfb:1d:2::/64 Active

                          Thanks,

                          Fengping

                          in reply to: IPV6 EXT network service #3989
                          Fengping Hu
                          Participant

I did add a route, though I can't even reach the gateway from the FABRIC node itself. It appears the gateway can't be discovered as a neighbor. Also, in my case the interface is actually ens9; ens7 and ens8 are used for other purposes.

                            Here are the route entries on the node.

                            2602:fcfb:1d:1::/64 dev ens9 proto kernel metric 256 pref medium
                            2605:9a00:10:200a::/64 via 2602:fcfb:1d:1::1 dev ens9 metric 1024 pref medium

Here's the NDP table; the gateway is in the FAILED state.

                            # ip -6 neighbor
                            2602:fcfb:1d:2::3 dev ens8 lladdr 06:7b:c6:c7:1a:66 STALE
                            fe80::f816:3eff:feac:1ca0 dev ens3 lladdr fa:16:3e:ac:1c:a0 router REACHABLE
                            fe80::2204:fff:fec5:4648 dev ens3 lladdr 20:04:0f:c5:46:48 router STALE
                            fe80::40:fcff:fe51:8b17 dev ens9 lladdr 02:40:fc:51:8b:17 STALE
                            2602:fcfb:1d:1::3 dev ens9 lladdr 02:40:fc:51:8b:17 STALE
                            2602:fcfb:1d:2::1 dev ens8 lladdr 82:31:bd:61:8f:dc router STALE
                            fe80::8031:bdff:fe61:8fe8 dev ens8 lladdr 82:31:bd:61:8f:e8 router STALE
                            fe80::8031:bdff:fe61:8fe9 dev ens9 lladdr 82:31:bd:61:8f:e9 router STALE
                            2602:fcfb:1d:1::1 dev ens9 FAILED
                            fe80::8031:bdff:fe61:8fdc dev ens8 lladdr 82:31:bd:61:8f:dc router STALE
                            fe80::47b:c6ff:fec7:1a66 dev ens8 lladdr 06:7b:c6:c7:1a:66 STALE


                            in reply to: IPV6 EXT network service #3983
                            Fengping Hu
                            Participant

                              Hi Devin,

Thanks for the tutorial. I was able to create a slice with the FABNetv6Ext network type. However, I can't ping the gateway of the FABNetv6Ext network from the node, though the FABNetv6 gateway responds to pings. Any ideas?

                              Thanks,

                              Fengping

                              root@b96eb96e-3929-4c17-a117-9b8ee0660eeb-node1:/home/ubuntu# ip -6 a
                              1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
                              inet6 ::1/128 scope host
                              valid_lft forever preferred_lft forever
                              2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 state UP qlen 1000
                              inet6 2001:400:a100:3090:f816:3eff:fec3:4d0f/64 scope global dynamic mngtmpaddr noprefixroute
                              valid_lft 86337sec preferred_lft 14337sec
                              inet6 fe80::f816:3eff:fec3:4d0f/64 scope link
                              valid_lft forever preferred_lft forever
                              3: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
                              inet6 2602:fcfb:1d:2::6/64 scope global
                              valid_lft forever preferred_lft forever
                              inet6 fe80::e1:a2ff:fe04:48a3/64 scope link
                              valid_lft forever preferred_lft forever
                              4: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
                              inet6 fe80::4d3:95ff:fe0b:4481/64 scope link
                              valid_lft forever preferred_lft forever
                              5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
                              inet6 2602:fcfb:1d:1::2/64 scope global
                              valid_lft forever preferred_lft forever
                              inet6 fe80::8df:cfff:fec5:fdf5/64 scope link
                              valid_lft forever preferred_lft forever
                              root@b96eb96e-3929-4c17-a117-9b8ee0660eeb-node1:/home/ubuntu# ip -6 route
                              ::1 dev lo proto kernel metric 256 pref medium
                              2001:400:a100:3090::/64 dev ens3 proto ra metric 100 expires 86331sec pref medium
                              2602:fcfb:1d:1::/64 dev ens9 proto kernel metric 256 pref medium
                              2602:fcfb:1d:2::/64 dev ens8 proto kernel metric 256 pref medium
                              2605:9a00:10:200a::/64 via 2602:fcfb:1d:1::1 dev ens9 metric 1024 pref medium
                              fe80::a9fe:a9fe via fe80::f816:3eff:feac:1ca0 dev ens3 proto ra metric 1024 expires 231sec pref medium
                              fe80::/64 dev ens3 proto kernel metric 256 pref medium
                              fe80::/64 dev ens7 proto kernel metric 256 pref medium
                              fe80::/64 dev ens8 proto kernel metric 256 pref medium
                              fe80::/64 dev ens9 proto kernel metric 256 pref medium
                              default via fe80::f816:3eff:feac:1ca0 dev ens3 proto ra metric 100 expires 231sec mtu 9000 pref medium
                              root@b96eb96e-3929-4c17-a117-9b8ee0660eeb-node1:/home/ubuntu# ping 2602:fcfb:1d:2::1
                              PING 2602:fcfb:1d:2::1(2602:fcfb:1d:2::1) 56 data bytes
                              64 bytes from 2602:fcfb:1d:2::1: icmp_seq=1 ttl=64 time=1.08 ms
                              64 bytes from 2602:fcfb:1d:2::1: icmp_seq=2 ttl=64 time=0.882 ms
                              64 bytes from 2602:fcfb:1d:2::1: icmp_seq=3 ttl=64 time=1.04 ms
                              64 bytes from 2602:fcfb:1d:2::1: icmp_seq=4 ttl=64 time=0.641 ms
                              ^C
--- 2602:fcfb:1d:2::1 ping statistics ---
                              4 packets transmitted, 4 received, 0% packet loss, time 3010ms
                              rtt min/avg/max/mdev = 0.641/0.911/1.080/0.172 ms
                              root@b96eb96e-3929-4c17-a117-9b8ee0660eeb-node1:/home/ubuntu# ping 2602:fcfb:1d:1::1
                              PING 2602:fcfb:1d:1::1(2602:fcfb:1d:1::1) 56 data bytes
                              ^C
--- 2602:fcfb:1d:1::1 ping statistics ---
                              3 packets transmitted, 0 received, 100% packet loss, time 2049ms

                              in reply to: slice active but node no longer accessible #1135
                              Fengping Hu
                              Participant

That makes sense. I think the DHCP lease expired while NetworkManager was disabled. I think we should be good now. Thanks!

                                in reply to: slice active but node no longer accessible #1133
                                Fengping Hu
                                Participant

                                  Thanks for the clarification about reboot.

We don't really have a need to reboot the VMs. But I believe the VMs in a slice get rebooted after one day; I'm guessing this from the fact that we lose contact with them after a day. With the NetworkManager fix we will still be able to access the VMs via the management IP, but without eth1 the VMs in the slice can no longer form a cluster.

So the big question for now: is there a way to avoid a VM getting rebooted during the slice lease period, especially for VMs with attached devices?

