Forum Replies Created
Jupyter Hub is back up and accessible, and you should be able to use your JupyterHub containers. The GCP outage has been resolved.
Refer to https://learn.fabric-testbed.net/forums/topic/out/ for more details.
Thanks,
Komal
June 12, 2025 at 6:43 pm in reply to: Outage Jupyter Hub – Kubernetes PVC Attachment Errors Due to GCP Incident #8613
Update: JupyterHub Access Restored — GCP Incident Resolved
The earlier Google Cloud Platform service disruption that was affecting JupyterHub logins and volume attachments has now been fully resolved. As of now, users should be able to log in and start their JupyterHub environments normally.
The root cause of the issue was a Google Cloud Service Control incident that intermittently prevented volume attachments across multiple GCP services. Full details of the incident are available here:
🔗 GCP Incident Summary (June 12, 2025)
If you continue to encounter any issues starting your environment:
- Try restarting your server from the JupyterHub control panel.
- If the problem persists, please feel free to reach out to us.
Thank you for your patience while this upstream issue was being addressed.
Best regards,
Komal
Notice: Kubernetes PVC Attachment Errors Due to GCP Incident (June 12, 2025)
We are aware of an ongoing issue where some users may see errors when starting their JupyterHub environments. Affected users may encounter errors similar to:
AttachVolume.Attach failed for volume "pvc-..." : rpc error: code = Internal desc = Failed to getDisk: googleapi: Error 503: Policy checks are unavailable., backendError
Root cause:
This is due to a Google Cloud Platform (GCP) service disruption that is intermittently preventing Kubernetes from attaching persistent volumes. The issue is upstream of our environment and is being actively addressed by Google (see GCP Status).
What should you do:
- If you encounter this error when launching your JupyterHub environment, no action is needed on your part.
- In most cases, the issue is temporary and will resolve automatically as the underlying cloud services recover.
- We recommend waiting a few minutes and then retrying.
- Please avoid repeated restarts or resubmissions, as Kubernetes will continue to attempt recovery automatically.
We will continue to monitor the situation and will post updates as more information becomes available. Thank you for your patience.
Best regards,
Komal
Hi Tanay,
We are in the process of procuring them. While they may not be available for the Summer release, we are targeting an incremental release or including them in the Fall 2025 release.
Best,
Komal
Hi Rodrigo,
Could you please share your slice ID and let us know how you’re trying to access the VMs—whether through Jupyter Hub or from your local environment?
Thanks,
Komal
June 6, 2025 at 11:18 am in reply to: Guaranteed Capacity and Traffic Prioritization across the Sites #8590
Hi Fatih,
Thank you for your email and detailed questions.
At this time, FABRIC does not support guaranteed capacity or QoS prioritization on L2P2P links. The service operates as best-effort by default, and DSCP/ToS or VLAN PCP markings are not enforced across the underlying infrastructure.
That said, we are actively working to support guaranteed QoS using Explicit Route Options (ERO) in the L2P2P service. This capability is planned for inclusion in our upcoming Release 1.9, targeted for deployment in late July/early August. It will provide a way to request L2P2P links with specified bandwidth guarantees and rate-limiting.
We will share more details and guidance on how to configure these options as part of the release.
Please feel free to reach out with any further questions in the meantime.
Best regards,
Komal Thareja
Hi Alexander,
Based on our investigation so far, the VMs from your slice that are not passing traffic were hosted on salt-w3.fabric-testbed.net. We've identified that none of the VMs on this host are able to pass traffic. As a result, we have placed this worker into Maintenance mode and are actively investigating the issue.
You should be able to create a new slice without encountering this problem, as salt-w3 is now in Maintenance and will not be used for any new slices on the SALT site.
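If you would like to double-check which worker your new VMs land on, a minimal FABlib sketch along these lines should work (the slice name "MySlice" is a placeholder for your slice name):

# Minimal sketch: print the worker host for each VM in a slice.
# "MySlice" is a placeholder; replace it with your slice name.
from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

fablib = fablib_manager()
slice = fablib.get_slice(name="MySlice")
for node in slice.get_nodes():
    # get_host() reports the worker (e.g., salt-w3.fabric-testbed.net) hosting the VM
    print(f"{node.get_name()}: {node.get_host()}")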
Thanks,
Komal
Hi Sourya,
MASS is undergoing maintenance from June 2 to June 4, as noted [here].
Since your slice is set to expire on June 9, it will remain unaffected by the maintenance window. As mentioned in the announcement, your VM will be recovered, and your data will persist.
Thanks,
Komal
Thank you, Alexander, for sharing this. I have shared the details with the network team. We will keep you posted.
Thanks,
Komal
Hi Philips,
At the moment, we do not support guaranteed QoS; this feature will be available soon. In the meantime, you can use tools such as tc to manage bandwidth on the VMs, for example as sketched below.
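A rough, untested sketch run from a FABlib notebook (the slice/node names, the device name enp7s0, and the 1 Gbit/s cap are placeholders, not values from your slice):

# Rough sketch: cap egress on a VM's experiment interface with tc (tbf).
# "MySlice", "node1", "enp7s0", and the rate are placeholders.
from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

fablib = fablib_manager()
node = fablib.get_slice(name="MySlice").get_node(name="node1")

# Token-bucket filter limiting the interface to roughly 1 Gbit/s
stdout, stderr = node.execute(
    "sudo tc qdisc add dev enp7s0 root tbf rate 1gbit burst 32kbit latency 400ms"
)
print(stdout, stderr)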
Thanks,
Komal
Hi Nishant,
Please find my responses inline below:
Once a user has reserved a slice with an FPGA, that resource is locked and cannot be acquired or modified by other users until the slice is released.
You’re correct—if the FPGA has been flashed with a workflow other than the ESnet workflow, it may fail.
However, we cannot guarantee the validity or state of the bitstream that was previously flashed by another user before you acquired the slice. This may leave the FPGA in an inconsistent or unusable state. In our experience, reflashing the FPGA with a known good (golden) image typically restores it to a usable state.
We are planning to share this golden image along with the notebook with users soon, so they can perform the reflash themselves when needed. In the meantime, if you’re currently blocked, please let me know the specific site you’re working with—I’ll check whether we can assist with reflashing the FPGA for you.
Thanks,
Komal
Hi Alex,
The network team reviewed the configuration and found no issues on the switch side. However, they observed that the MAC addresses for these interfaces have not been learned by the switch.
As a next step, they recommend removing the L2Bridge service and connecting both interfaces directly to FabNetV4 to verify if the network connectivity is restored.
Please perform this change using slice modify, so the same VMs and interfaces can be reused for validation. This helps us rule out the possibility that recreating the VMs might inadvertently resolve the issue.
Refer to this notebook for guidance on how to modify the slice.
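For reference, the overall flow might look roughly like the sketch below. The slice, network, and node names are placeholders, and deleting a network service during modify assumes a recent FABlib version, so please treat the notebook above as the authoritative guide.

# Rough sketch of the suggested change via slice modify (names are placeholders;
# follow the referenced notebook for the exact, version-appropriate calls).
from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

fablib = fablib_manager()
slice = fablib.get_slice(name="MySlice")

# Collect the two interfaces currently attached to the L2Bridge
ifaces = [slice.get_node(name=n).get_interface(network_name="bridge1")
          for n in ("node1", "node2")]

# Remove the L2Bridge service (assumes your FABlib version supports
# deleting a network service as part of slice modify)
slice.get_network(name="bridge1").delete()

# Attach both interfaces to a routed FABNetv4 service instead
slice.add_l3network(name="fabnet_v4", interfaces=ifaces, type="IPv4")

# Submit the modification; the existing VMs and interfaces are reused
slice.submit()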
Thanks,
Komal
Could you please check your VM again?
All PCI devices had been disconnected. I have reconnected them to your VM. Please check it.
Also, could you please share the sequence of operations that led your VM to this state?
It would be helpful to see if there is anything that needs to be fixed in our control software.
Thanks,
Komal
Please share your slice ID and also the output of the command:
ifconfig -a
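If it helps, both can be collected from a FABlib notebook along these lines (a small sketch; "MySlice" is a placeholder for your slice name):

# Small sketch: print the slice ID and "ifconfig -a" output for every node.
# "MySlice" is a placeholder.
from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager

fablib = fablib_manager()
slice = fablib.get_slice(name="MySlice")
print(f"Slice ID: {slice.get_slice_id()}")

for node in slice.get_nodes():
    stdout, stderr = node.execute("ifconfig -a")
    print(f"--- {node.get_name()} ---\n{stdout}")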
Thanks,
Komal