Home › Forums › FABRIC General Questions and Discussion › revive the ServiceXSlice?
This topic has 4 replies, 2 voices, and was last updated 1 year, 2 months ago by Fengping Hu.
September 1, 2023 at 2:10 pm #5214
It looks like we can no longer log in to our long-running slice. Here’s the slice information:
SliceID 2d12324d-66bc-410a-8dda-3c00d1ea0d48
Name ServiceXSlice
The VMs are still up, but SSH into them fails:
ssh -F fabric_config/ssh_config -i fabric_config/fabric_bastion ubuntu@2001:400:a100:3090:f816:3eff:fe80:bfc7
Warning: Permanently added 'bastion-1.fabric-testbed.net' (ED25519) to the list of known hosts.
Warning: Permanently added '2001:400:a100:3090:f816:3eff:fe80:bfc7' (ED25519) to the list of known hosts.
ubuntu@2001:400:a100:3090:f816:3eff:fe80:bfc7: Permission denied (publickey).

Also, pinging one of the public dataplane IPs shows it’s prohibited:
$ ping6 2602:fcfb:1d:3::2
PING 2602:fcfb:1d:3::2(2602:fcfb:1d:3::2) 56 data bytes
From 2001:400:2004:12::3 icmp_seq=1 Destination unreachable: Administratively prohibited

Is it possible to revive this slice back into a working state?
Thanks,
Fengping
September 1, 2023 at 4:39 pm #5215
Fengping,
I checked the VMs on this slice (IPs listed below), and all of them have the following sliver key:
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo … nrsc4= sliver

On the FABRIC bastion hosts, I see the following (bastion) key:
ecdsa-sha2-nistp256 AAAAE2VjZ … frtHLo= bastion_

You can check your ssh configuration and sliver key accordingly.
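One practical way to do that check is to compare key fingerprints rather than eyeballing base64. A minimal sketch using `ssh-keygen -lf`; the demo generates a throwaway key, and the real paths mentioned in the comments (e.g. `fabric_config/slice_key.pub`) are assumptions about your local layout, not taken from this thread:

```shell
# The comment at the end of each key ("sliver" vs "bastion") tells you which
# key a host expects. To compare keys reliably, fingerprint them.
# Demo with a throwaway ECDSA key pair (illustrative paths only):
rm -f /tmp/demo_sliver_key /tmp/demo_sliver_key.pub
ssh-keygen -t ecdsa -b 256 -N '' -C sliver -f /tmp/demo_sliver_key -q

# Print the fingerprint. Run the same command against your actual sliver
# public key (an assumed path like fabric_config/slice_key.pub) and against
# the entry in the VM's ~/.ssh/authorized_keys to confirm they match.
ssh-keygen -lf /tmp/demo_sliver_key.pub
```

Note that in the failing command from the first post, `-i fabric_config/fabric_bastion` points at the bastion key; the key offered to the VM itself must be the sliver (slice) key.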
For the FABNetv6Ext network, all VMs have their IPs in place and could ping the default gateway in their subnet (2602:fcfb:1d:2::/64). These IPs are also receiving traffic from external sources, so they seem to be in good health.
However, I could not ping the IP you mentioned in the peering subnet (e.g. 2602:fcfb:1d:3::/64). My visibility into this peering subnet is limited, and I’m not sure where those addresses are active. I have notified our network team about this.
VMs:
2001:400:a100:3090:f816:3eff:fe56:acb7
2001:400:a100:3090:f816:3eff:fe80:bfc7
2001:400:a100:3090:f816:3eff:fe9c:3e41
2001:400:a100:3090:f816:3eff:fee3:ef05
2001:400:a100:3090:f816:3eff:fe8b:deb0
2001:400:a100:3090:f816:3eff:fe8a:f1d1
2001:400:a100:3090:f816:3eff:fe1c:385f
2001:400:a100:3090:f816:3eff:feaa:161a
2001:400:a100:3090:f816:3eff:fee2:d192
2001:400:a100:3090:f816:3eff:fe31:1eeb

September 1, 2023 at 5:33 pm #5216
Hi Mert,
Thanks for looking into this for me. Indeed, I can log in to the VMs now, and the network is also fine, so you can withdraw the inquiry to your network team; I was using the wrong IPs.
The actual problem is that node1 (2001:400:a100:3090:f816:3eff:fe1c:385f) appears to have been rebooted and lost three of its network links as a result. Can you reattach the links for me?
For example, on a node with all of its links, it looks like this:
ubuntu@node9:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether fa:16:3e:56:ac:b7 brd ff:ff:ff:ff:ff:ff
3: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 02:e1:a2:04:48:a3 brd ff:ff:ff:ff:ff:ff
4: ens7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 06:d3:95:0b:44:81 brd ff:ff:ff:ff:ff:ff
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 0a:df:cf:c5:fd:f5 brd ff:ff:ff:ff:ff:ff

But on node1 I get this:
ubuntu@node1:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
ubuntu@node1:~$

If you could reattach ens7, ens8, and ens9 to node1, that would be great.
Thanks,
Fengping
September 1, 2023 at 6:38 pm #5217
Interfaces on node1 have been re-attached, matching the original devices.
ubuntu@node1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
inet 10.30.6.43/23 brd 10.30.7.255 scope global dynamic ens3
valid_lft 54283sec preferred_lft 54283sec
inet6 2001:400:a100:3090:f816:3eff:fe1c:385f/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86315sec preferred_lft 14315sec
inet6 fe80::f816:3eff:fe1c:385f/64 scope link
valid_lft forever preferred_lft forever
3: ens7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 02:7f:ae:44:cb:c9 brd ff:ff:ff:ff:ff:ff
4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 02:bc:a6:3f:c7:cb brd ff:ff:ff:ff:ff:ff
inet6 fe80::bc:a6ff:fe3f:c7cb/64 scope link
valid_lft forever preferred_lft forever
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 06:e3:d6:00:5b:06 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4e3:d6ff:fe00:5b06/64 scope link
valid_lft forever preferred_lft forever
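Note that in the `ip a` output above, the re-attached links come back without their previous runtime configuration: ens7 is still DOWN, and ens8/ens9 carry only link-local addresses. A rough sketch of the user-side cleanup that would follow, with placeholder addresses since the slice's actual dataplane addresses are not shown in this thread:

```shell
# Bring the re-attached link up (ens7 shows state DOWN above).
sudo ip link set dev ens7 up

# Re-add the dataplane addresses each link had before the reboot.
# <addr>/<prefix> are placeholders; substitute your slice's actual values
# for ens7, ens8, and ens9.
sudo ip -6 addr add <addr>/<prefix> dev ens7
```

Any routes or other settings configured only in the running kernel would likewise need to be restored, which matches the "reconfigured everything" step in the final post.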
September 5, 2023 at 2:38 pm #5224
Hi Mert,
Thank you so much for the help. I have reconfigured everything and the slice is back in service.
Thanks,
Fengping