Forum Replies Created
STAR is online.
The interfaces on node1 have been re-attached and correspond to the original devices.
ubuntu@node1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
inet 10.30.6.43/23 brd 10.30.7.255 scope global dynamic ens3
valid_lft 54283sec preferred_lft 54283sec
inet6 2001:400:a100:3090:f816:3eff:fe1c:385f/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86315sec preferred_lft 14315sec
inet6 fe80::f816:3eff:fe1c:385f/64 scope link
valid_lft forever preferred_lft forever
3: ens7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 02:7f:ae:44:cb:c9 brd ff:ff:ff:ff:ff:ff
4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 02:bc:a6:3f:c7:cb brd ff:ff:ff:ff:ff:ff
inet6 fe80::bc:a6ff:fe3f:c7cb/64 scope link
valid_lft forever preferred_lft forever
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 06:e3:d6:00:5b:06 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4e3:d6ff:fe00:5b06/64 scope link
valid_lft forever preferred_lft forever
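If one of these interfaces should be active but shows state DOWN (ens7 above), bringing the link up manually is usually enough to re-check it; this is only a sketch and assumes no netplan or cloud-init configuration is managing that interface:
ubuntu@node1:~$ sudo ip link set dev ens7 up
ubuntu@node1:~$ ip -br link show ens7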
Fengping,
I checked the VMs on this slice (their IPs are listed below) and all of them have the following sliver key:
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo … nrsc4= sliver
On the FABRIC bastion hosts, I see the following (bastion) key:
ecdsa-sha2-nistp256 AAAAE2VjZ … frtHLo= bastion_
You can check your ssh configuration and sliver key accordingly.
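For example, you can compare the fingerprints of the keys on your workstation against the ones above; the file paths below are only placeholders, so substitute the sliver and bastion key files you actually use:
$ ssh-keygen -lf ~/.ssh/slice_key.pub
$ ssh-keygen -lf ~/.ssh/fabric_bastion_key.pub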
For the FABNetv6Ext network, all VMs have their IPs in place and can ping the default gateway in their subnet (2602:fcfb:1d:2::/64). These IPs are also receiving traffic from external sources, so they appear to be in good health.
However, I could not ping the IP you mentioned in the peering subnet (e.g., 2602:fcfb:1d:3::/64). My visibility into this peering subnet is limited and I’m not sure where those addresses are active. I have notified our network team about this.
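The reachability check was a simple ping from inside each VM to the subnet gateway; for example (the gateway address below is an assumption about your addressing plan, so substitute the gateway your slice actually uses):
$ ping -c 3 2602:fcfb:1d:2::1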
VMs:
2001:400:a100:3090:f816:3eff:fe56:acb7
2001:400:a100:3090:f816:3eff:fe80:bfc7
2001:400:a100:3090:f816:3eff:fe9c:3e41
2001:400:a100:3090:f816:3eff:fee3:ef05
2001:400:a100:3090:f816:3eff:fe8b:deb0
2001:400:a100:3090:f816:3eff:fe8a:f1d1
2001:400:a100:3090:f816:3eff:fe1c:385f
2001:400:a100:3090:f816:3eff:feaa:161a
2001:400:a100:3090:f816:3eff:fee2:d192
2001:400:a100:3090:f816:3eff:fe31:1eeb
The FABRIC NAT64 solution is back online.
All VMs are active following a reboot. All network services are online. We apologize for this inconvenience. Please let us know if you have any issues with your slivers.
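A quick way to confirm NAT64 is working again from an IPv6-only VM is to reach a destination that only publishes an IPv4 address, assuming the VM's resolver points at the FABRIC DNS64 service as described in the knowledge base (the hostname below is only an illustration):
$ ping -c 3 github.com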
Hello Bruce,
Inside this VM (205.172.170.76), an SSH public key with the comment “fabric@localhost” is present under the ubuntu user account.
The VM is accessible over SSH from the public internet. With the right SSH key and SSH client configuration, as described at https://learn.fabric-testbed.net/knowledge-base/logging-into-fabric-vms/, you should be able to log in to the VM.
On the other hand, since you mention that you are suddenly unable to log in to the VMs, it could be related to a change of the SSH key inside the VM. Can you confirm that the key I mentioned above is the one that is supposed to be present in the VM?
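For reference, a login attempt following that article would look roughly like this; the key path, bastion login, and bastion hostname below are placeholders, so use the exact values from your own configuration:
$ ssh -i ~/.ssh/slice_key -J <bastion_login>@bastion.fabric-testbed.net ubuntu@205.172.170.76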
There may be multiple issues that we will be working on.
In order to eliminate the possibility of an issue with the “outside” IP address that you’re using, can you please send your IP address?
2 VMs were stopped by the hypervisor and I started them. Can you please check the status and your access?
The root cause of this problem is a known issue that we were able to correct on Phase-1 sites last month, but some of the Phase-2 sites have not received this correction yet. We will find a convenient time in the next few weeks to apply it there; until then, we will be able to help whenever this occurs.
July 12, 2023 at 7:32 pm in reply to: Maintenance on FABRIC-CLEM Dataplane – 7/12/23 (11am EST) #4667
Maintenance is completed.
FABRIC-MASS is shut down and will be back online by the end of the multi-day FABRIC maintenance (June 12–June 16, 2023).
June 4, 2023 at 9:26 pm in reply to: FIU Moved to Maintenance due to network issues – 5/15/2023 #4466
Work on network connectivity for FIU is still in progress. We are working with the hosting campus to resolve the issues. FIU will remain in maintenance until the end of the multi-day maintenance that we will perform next week (6/12).
Update:
- GATECH worker-2 (GPU-worker) problem is resolved.
- Status of SRI will be posted separately. It will remain in maintenance until after the general maintenance next week.
FABRIC-MASS will be shut down tomorrow (6/5) at 9am EST. All slivers on FABRIC-MASS should be deleted before the shutdown. Current VMs will be permanently removed, so please make sure experiment data is backed up.
Completed.
Due to the volume of active SSH connections on bastion-1, and to prevent interruptions to experiments, this maintenance will be performed at 9pm EST today (6/2).