Forum Replies Created
You can create slices until 12/29 23:59 UTC; after that date, new slices cannot be created. All existing slices will stay active until Jan 1st 5pm EST, after which we will start deleting them.
It is safer to assume that all slivers (VMs, dataplane services) of any slice touching any of these 4 sites will be deleted during this maintenance. For example, if you have a slice connecting STAR to other sites that will continue operating (e.g., PSC and GPN), you should assume that the entire slice, including all VMs running on STAR, PSC, and GPN, will be deleted.
This maintenance involves several complications, so we cannot guarantee the health of slices touching these 4 sites. We apologize for the inconvenience.
Hello Yoursunny,
Yes, we will have maintenance with switch upgrades and fiber work at both the WASH and STAR locations, so FABNetv4Ext will be affected during this maintenance.
November 22, 2023 at 11:35 am in reply to: Workers with 100G SmartNICs in maintenance mode on multiple sites #6126
Dear Experimenters,
We have completed this maintenance; the worker nodes at the sites listed in the previous message are available for experiments.
None of the VMs or reservations running on the workers with 100G ConnectX-6 SmartNICs were affected during the work.
November 18, 2023 at 4:27 pm in reply to: RESOLVED: FABRIC MASS in Maintenance due to unexpected power loss at hosting site #6120
The power outage is resolved. All active slivers are restored.
Please let us know if you have any issues with your existing slivers on FABRIC-MASS.
November 9, 2023 at 1:31 pm in reply to: “channel 0: open failed: connect failed: No route to host” Error #6082
The FASTnet worker at KANS (kans-w2.fabric-testbed.net) encountered a hardware (driver) problem, and all VMs on it were rebooted. All network interfaces were re-attached to the VMs (their IP address configurations may need to be reapplied; a sketch follows below).
We will work on remedies for this specific hardware problem to prevent similar incidents.
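If you need to re-apply addresses by hand, a minimal sketch is below; the interface name ens7 and the address 192.168.100.2/24 are placeholders for illustration, not values from your slice:
sudo ip link set dev ens7 up                # bring the re-attached dataplane interface up
sudo ip addr add 192.168.100.2/24 dev ens7  # re-apply the address your experiment was using
ip addr show dev ens7                       # confirm the address and UP state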
Dear experimenters,
The issue is fixed and the maintenance has been lifted. The testbed is back to normal operation for all services.
Dear experimenters,
Issues are resolved. FABRIC-MICH is available for experiments.
The cooling system at the datacenter has been fixed, with no impact on the FABRIC-FIU site.
FABRIC-FIU resources are available for slices.
STAR is online.
Interfaces on node1 have been re-attached to their original devices.
ubuntu@node1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:1c:38:5f brd ff:ff:ff:ff:ff:ff
inet 10.30.6.43/23 brd 10.30.7.255 scope global dynamic ens3
valid_lft 54283sec preferred_lft 54283sec
inet6 2001:400:a100:3090:f816:3eff:fe1c:385f/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86315sec preferred_lft 14315sec
inet6 fe80::f816:3eff:fe1c:385f/64 scope link
valid_lft forever preferred_lft forever
3: ens7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 02:7f:ae:44:cb:c9 brd ff:ff:ff:ff:ff:ff
4: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 02:bc:a6:3f:c7:cb brd ff:ff:ff:ff:ff:ff
inet6 fe80::bc:a6ff:fe3f:c7cb/64 scope link
valid_lft forever preferred_lft forever
5: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 06:e3:d6:00:5b:06 brd ff:ff:ff:ff:ff:ff
inet6 fe80::4e3:d6ff:fe00:5b06/64 scope link
valid_lft forever preferred_lft forever
Fengping,
I checked the VMs on this slice (their IPs are listed below) and all of them have the following sliver key:
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo … nrsc4= sliver
On the FABRIC bastion hosts, I see the following (bastion) key:
ecdsa-sha2-nistp256 AAAAE2VjZ … frtHLo= bastion_
You can check your ssh configuration and sliver key accordingly.
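As a reference sketch only (the bastion username, key file names, and VM address below are placeholders, and the bastion hostname assumed here is the commonly documented bastion.fabric-testbed.net), an ~/.ssh/config that uses both keys typically looks like:
# ~/.ssh/config (placeholder username, key paths, and VM address)
Host bastion.fabric-testbed.net
    User your_bastion_username
    IdentityFile ~/.ssh/fabric_bastion_key

Host myvm
    HostName 2001:400:a100:3090:f816:3eff:fe1c:385f
    User ubuntu
    ProxyJump bastion.fabric-testbed.net
    IdentityFile ~/.ssh/fabric_sliver_key
With that in place, ssh myvm should hop through the bastion using the bastion key and authenticate to the VM with the sliver key.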
For the FABNetv6Ext network, all VMs have their IPs in place and could ping the default gateway in their subnet (2602:fcfb:1d:2::/64). These IPs are also receiving traffic from external sources, so they seem to be in good health.
However, I could not ping the IP you mentioned in the peering subnet (e.g., 2602:fcfb:1d:3::/64). My visibility into this peering subnet is limited, and I am not sure where those addresses are active. I have notified our network team about this.
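For reference, the gateway check from inside a VM looks like the sketch below; the ::1 gateway address is an assumption about the subnet's first usable address, not a confirmed value:
ping -6 -c 3 2602:fcfb:1d:2::1   # default gateway of the FABNetv6Ext subnet (::1 assumed here)
ip -6 route show                 # confirm the route toward the peering subnet, e.g. 2602:fcfb:1d:3::/64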
VMs:
2001:400:a100:3090:f816:3eff:fe56:acb7
2001:400:a100:3090:f816:3eff:fe80:bfc7
2001:400:a100:3090:f816:3eff:fe9c:3e41
2001:400:a100:3090:f816:3eff:fee3:ef05
2001:400:a100:3090:f816:3eff:fe8b:deb0
2001:400:a100:3090:f816:3eff:fe8a:f1d1
2001:400:a100:3090:f816:3eff:fe1c:385f
2001:400:a100:3090:f816:3eff:feaa:161a
2001:400:a100:3090:f816:3eff:fee2:d192
2001:400:a100:3090:f816:3eff:fe31:1eeb
The FABRIC Nat64 solution is back online.
All VMs are active following a reboot. All network services are online. We apologize for this inconvenience. Please let us know if you have any issues with your slivers.
Hello Bruce,
Inside this VM (205.172.170.76), an SSH public key identified as “fabric@localhost” is present under the ubuntu user account.
The VM is accessible over SSH from the public internet. With the right SSH key and SSH client configuration, as described at https://learn.fabric-testbed.net/knowledge-base/logging-into-fabric-vms/, you should be able to log in to the VM.
On the other hand, since you mention you are suddenly unable to log in to the VMs, it could be related to a change to the SSH key inside the VM. Can you confirm that the SSH key I mentioned above is the one that is supposed to be present in the VM?
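As a quick, hedged way to compare keys (the path ~/.ssh/slice_key is a placeholder for your actual sliver key file):
ssh-keygen -lf ~/.ssh/slice_key.pub                                          # fingerprint of the key you expect the VM to accept
ssh -i ~/.ssh/slice_key ubuntu@205.172.170.76 'cat ~/.ssh/authorized_keys'   # keys actually installed for the ubuntu user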
There may be multiple issues that we will be working on.
To rule out an issue with the “outside” IP address that you are using, can you please send us your IP address?
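If it helps, one hedged way to find the public IPv4 address you are connecting from (assuming curl and the third-party ifconfig.me service are reachable) is:
curl -4 ifconfig.me   # prints the public IPv4 address seen by an external service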