Forum Replies Created
January 8, 2024 at 2:43 pm in reply to: How to access the files from my older username on the same project? #6256
Nagmat,
Do you mean you can’t access JupyterHub at all with your new account? Or that you want to access your old files from your new account?
This has been fixed. Thanks for reporting it. Please try again.
Paul
My first thought is that something in the slice is going to fail and increasing the timeout is not going to help. I could be wrong, but let's try a test with the submit tasks separated into several steps.
Try something like the following. If you put each of these steps in a separate notebook cell, you can re-run the wait calls as many times as necessary. The slice.wait(progress=True) call may time out for you; just re-run it as many times as you need.

slice.submit(progress=False)
slice.wait(progress=True)
slice.wait_ssh(progress=True)
slice.post_boot_config()
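If re-running the cell by hand gets tedious, the same "try again until it succeeds" pattern can be wrapped in a small helper. This is a generic sketch only; the `retry` helper below is hypothetical and not part of fablib:

```python
import time

def retry(fn, attempts=5, delay=5.0):
    """Call fn() until it returns without raising, up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(delay)  # back off before trying again

# Usage against a fablib slice would look something like:
#   retry(lambda: slice.wait(progress=True))
#   retry(lambda: slice.wait_ssh(progress=True))
```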
Let me know if this works.
Paul
I was experiencing the same problem. It should be fixed now.
It was related to the damaged machines at Starlight. All of the ext services should now have been moved to the WASH site.
Paul
Nishanth,
Yeah, I think you just need to call slice.submit() after you make the changes. When you make changes like that in fablib, it is just building the request locally. In order to commit the changes, you need to submit the modified request.
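To illustrate the build-then-commit idea, here is a toy stand-in (these are not fablib's real classes, just an illustration): modifications only edit a local request object, and nothing changes on the testbed side until submit() pushes the request.

```python
class ToySlice:
    """Toy model of fablib's build-then-submit behavior (illustration only)."""
    def __init__(self):
        self._request = []    # local, uncommitted changes
        self._committed = []  # what the "testbed" actually has

    def add_node(self, name):
        self._request.append(name)   # modifies only the local request

    def submit(self):
        # commit the accumulated request to the "testbed"
        self._committed = list(self._request)

s = ToySlice()
s.add_node("node1")   # not live yet
s.submit()            # now the change is committed
```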
Paul
August 29, 2023 at 3:46 pm in reply to: What is the Maximum throughput achieved in Fabric Testbed? #5183
Nagmat,
Try looking at the iPerf3 examples in the JupyterHub.
The basic version should get at least 30 Gbps. (https://github.com/fabric-testbed/jupyter-examples/blob/main/fabric_examples/complex_recipes/iPerf3/iperf3.ipynb)
If you want to get closer to 100 Gbps, you will need to look at the NUMA tuning examples (https://github.com/fabric-testbed/jupyter-examples/blob/main/fabric_examples/complex_recipes/iPerf3/iperf3_optimized.ipynb)
Note that a few sites are still connected with slower bandwidth links, but they all provide at least 10 Gbps. 10 Gbps should be attainable with minimal tuning required.
Paul
It looks like the issue is outside of our rack at WASH and has to do with a hardware failure at the provider we are peering with. The estimate is that it will be fixed today, but this depends on the new hardware arriving in time.
This has to do with an issue we are having with the dataplane switch at WASH (i.e. where the peering with the Internet is made). I am also waiting for this issue to be resolved. I’ll see if someone has an estimate of when this will be fixed…
That node failed. Note the error message in the table.
This is a transient error; we have seen this before. Komal will need to look at it.
thanks,
Paul
Be careful about relying on this too much. Users should not be editing this data, and we reserve the right to change the fablib data format if needed. I've noted the issue you are having and intend to create a function that will update this for you without you needing to edit the data directly.
May 12, 2023 at 9:29 am in reply to: getting error 403 : Forbidden while loggin into jupyterHub #4227
The browser cookies for the FABRIC JupyterHub can be set to remember which identity provider to use. If this is set, it will never re-ask you which provider to use. If the identity provider is set to “Google” and you are logged into a Google account in your browser, it will automatically try to use your Google identity with the FABRIC JupyterHub.
I just checked and see that you have a couple of half-created accounts that tried to use gmail addresses. Your problem might be that you tried to log into the FABRIC JupyterHub with a Google account and checked the box to remember the identity provider. If this is the case, you might be getting the 403 error because your Google account does not have permission to use the FABRIC JupyterHub.
You should continue to use the account associated with your institutional identity. However, this might require you to delete the browser cookie associated with the FABRIC JupyterHub. Try logging in with a private/incognito window. If that works, then deleting your cookie should solve your problem.
Ah, yes. The problem with the old version was that it used ssh to get the actual device name. This took a lot of time when printing the tables and things like that. We added “fablib_data” to each fablib object to store that kind of info. In general, you shouldn't need to touch the fablib_data, but if you are manually updating the names, the fablib_data will need to be updated.
Try the following:
for iface in slice.get_interfaces():
    fablib_dict = iface.get_fablib_data()
    # set fablib_dict fields here
    iface.set_fablib_data(fablib_dict)

slice.submit()  # to update the persistent fablib data
- This reply was modified 1 year, 6 months ago by Paul Ruth.
@Fengping
This seems like an issue with the policy based routing. Is there a way to make your setup more resilient to external factors? The management network connects to the local campus or provider. I suspect we can’t control everything on the management network.
Paul
Komal pushed an update to that beta notebook. Check here: https://github.com/fabric-testbed/jupyter-examples/tree/master/fabric_examples/beta_functionality/rel1.4
I think the change is actually in the “plugins.py” file.
Try doing the following at the beginning of that cell. I think you just need to pull a full/new copy of the slice before you modify it.
slice = fablib.get_slice("<your_slice_name>")