Forum Replies Created
Hello Nirmala,
Could you please share your Slice ID to help debug this issue?
Also, could you please provide more details on what you mean by the reservation not completing? Does the reservation stay in the Ticketed state, time out, or become Active? You can check the sliver states with the sketch below.
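For reference, a minimal sketch (assuming your fablib environment is already configured; the slice ID is a placeholder):
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(slice_id="<your-slice-id>")  # placeholder ID
for node in slice.get_nodes():
    # Prints each sliver's reservation state (e.g., Ticketed, Active, Closed)
    print(node.get_name(), node.get_reservation_state())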
Thanks,
Komal

Hello Kriti,
Could you please share your Slice ID to help debug this issue?
Thanks,
Komal

Hello Greg,
In the current implementation, node.numa_tune() tries to pin a VM's memory to the NUMA nodes belonging to the components attached to that VM.

Looking at the sliver details, this sliver has 64G of memory allocated. The NUMA node for the component attached to this VM is Node 1. In our topology, each worker has 8 NUMA nodes, and each is allocated 64G of memory. The error message above means that the requested memory (64G) is not available on NUMA Node 1, and hence the VM's memory cannot be pinned to it.
sliver_id: '0764c99c-0e76-4aaa-94de-c291bd2b23f0',
'name': 'compute1-ATLA'
'capacities': '{ core: 16 , ram: 64 G, disk: 500 G}',
'capacity_allocations': '{ core: 16 , ram: 64 G, disk: 500 G}'
Also, in the current version, the API does not allow pinning only a percentage of the memory to the NUMA node. We will work on adding that capability in the next release to serve such memory requests better. We appreciate your feedback!
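For reference, a minimal sketch of invoking numa_tune(); the slice name is an assumption, and the node name matches the sliver above:
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(name="MySlice")     # slice name is an assumption
node = slice.get_node(name="compute1-ATLA")  # node name from the sliver above
node.numa_tune()  # pins the VM's memory to the NUMA node(s) of its attached components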
Thanks,
Komal
October 24, 2023 at 7:29 pm in reply to: Maintenance on FABRIC-Network AM – 10/24/2022 (6:00pm-7:00pm EST) #5901

The topology update is complete and the maintenance has been lifted.
Thanks,
Komal
Thank you for reporting this issue, Manas!
Node2 (STAR) and Node7 (UCSD) are in the Closed state and hence do not have a management IP.
Both these nodes failed to provision with the error:
Last ticket update: Redeem/Ticket timeout

We are currently investigating it and will keep you posted with our findings.
Also, in the meantime, could you please share your notebook? We would like to see if we can reproduce this consistently with it; we have not been able to recreate the problem so far.
Appreciate your help in making the testbed better!
Thanks,
Komal
Good morning Bruce,
Thank you for sharing your observations. VM provisioning on the worker (mass-w1.fabric-testbed.net) to which the VM was allocated is not working. We are working to resolve it. In the meantime, please consider creating a slice on a different site or a different worker.

A node can be requested on a specific worker by passing the host field as below:
node1 = slice.add_node(name="Node1", cores=16, ram=32, site="MASS", image='docker_rocky_8', host="mass-w2.fabric-testbed.net")
Thanks,
Komal
This looks like a bug and we will address it. In the meantime, I have modified your script to run with the latest fablib and get past this issue. Sharing it here; please note the changes around setting the interface mode to auto, which lets fablib configure IP addresses, and around running post_boot_config(), which ensures instantiated is set. A sketch of the relevant changes follows.
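A minimal sketch of those two changes, assuming a simple one-node FABNetv4 request (the slice, node, and network names are placeholders):
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="my-slice")
node = slice.add_node(name="node1", site="MASS")
iface = node.add_component(model="NIC_Basic", name="nic0").get_interfaces()[0]
net = slice.add_l3network(name="net1", interfaces=[iface], type="IPv4")
iface.set_mode("auto")    # lets fablib assign IP addresses from the network's subnet
slice.submit()
slice.post_boot_config()  # ensures "instantiated" is set in the fablib user data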
Thank you for sharing your observations! I created a Fabnet slice via JH and was not able to reproduce this problem.
Could you please share the graphml file generated by the following code?
slice = fablib.get_slice(slice_id="7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8")
slice.get_fim_topology().serialize(file_name="fabnet-slice.graphml")
I can confirm that the slivers for your slice do have the network and gateway assigned. Enclosing a snapshot for slice:
7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8
Reservation ID: 4c9b702b-1346-4fe5-b61e-f5cb7790e75f Slice ID: 7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8
Resource Type: FABNetv4 Notices: Reservation 4c9b702b-1346-4fe5-b61e-f5cb7790e75f (Slice mtu@AMST(7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8) Graph Id:98452967-6246-4517-a030-7d76d7044d05 Owner:shijunxiao@arizona.edu) is in state (Active,None_)
Start: 2023-09-22 19:56:48 +0000 End: 2023-09-23 19:56:47 +0000 Requested End: 2023-09-23 19:56:47 +0000
Units: 1 State: Active Pending State: None_
Predecessors
1f8599bc-68cb-450a-9c49-962d2f5a5b4f
Sliver: {'node_id': 'be2e2e72-5bdd-4301-98aa-bb9e3fe23a56', 'gateway': 'IPv4 subnet: 10.145.7.0/24 GW: 10.145.7.1', 'layer': 'L3', 'name': 'net4', 'node_map': "('bbf6a0a7-8981-4613-b797-0960e7e8ea9d', 'node+amst-data-sw:ip+192.168.42.3-ipv4-ns')", 'reservation_info': '{"error_message": "", "reservation_id": "4c9b702b-1346-4fe5-b61e-f5cb7790e75f", "reservation_state": "Active"}', 'site': 'AMST', 'type': 'FABNetv4', 'user_data': '{"fablib_data": {"instantiated": "False", "mode": "manual"}}'}
IFS: {'node_id': '115b71f3-5369-490a-a6cc-2d16db3cc8f0', 'capacities': '{ unit: 1 }', 'label_allocations': '{ bdf: 0000:e2:0d.1, mac: 0E:4F:18:21:9F:35, ipv4: 10.145.7.2, vlan: 2103, local_name: HundredGigE0/0/0/9, device_name: amst-data-sw}', 'labels': '{ bdf: 0000:e2:0d.1, mac: 0E:4F:18:21:9F:35, ipv4: 10.145.7.2, vlan: 2103, local_name: HundredGigE0/0/0/9, device_name: amst-data-sw}', 'name': 'node-node-nic0-p1', 'node_map': "('bbf6a0a7-8981-4613-b797-0960e7e8ea9d', 'port+amst-data-sw:HundredGigE0/0/0/9')", 'type': 'ServicePort'}

Reservation ID: d88639a0-3062-43d7-83ed-ccbac797ef29 Slice ID: 7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8
Resource Type: FABNetv6 Notices: Reservation d88639a0-3062-43d7-83ed-ccbac797ef29 (Slice mtu@AMST(7e6a41e5-ab21-4cc5-9582-4c37e54dc2d8) Graph Id:98452967-6246-4517-a030-7d76d7044d05 Owner:shijunxiao@arizona.edu) is in state (Active,None_)
Start: 2023-09-22 19:56:48 +0000 End: 2023-09-23 19:56:47 +0000 Requested End: 2023-09-23 19:56:47 +0000
Units: 1 State: Active Pending State: None_
Predecessors
1f8599bc-68cb-450a-9c49-962d2f5a5b4f
Sliver: {'node_id': '8a23d2b8-af6e-4a60-a34a-a9c913b21f30', 'gateway': 'IPv6: 2602:fcfb:1f:2::/64 GW: 2602:fcfb:1f:2::1', 'layer': 'L3', 'name': 'net6', 'node_map': "('bbf6a0a7-8981-4613-b797-0960e7e8ea9d', 'node+amst-data-sw:ip+192.168.42.3-ipv6-ns')", 'reservation_info': '{"error_message": "", "reservation_id": "d88639a0-3062-43d7-83ed-ccbac797ef29", "reservation_state": "Active"}', 'site': 'AMST', 'type': 'FABNetv6', 'user_data': '{"fablib_data": {"instantiated": "False", "mode": "manual"}}'}
IFS: {'node_id': 'b9f78791-6ef0-4e81-9fee-b081a9485676', 'capacities': '{ unit: 1 }', 'label_allocations': '{ bdf: 0000:e2:08.6, mac: 0A:B1:A5:3F:0F:02, ipv6: 2602:fcfb:1f:2::2, vlan: 2068, local_name: HundredGigE0/0/0/9, device_name: amst-data-sw}', 'labels': '{ bdf: 0000:e2:08.6, mac: 0A:B1:A5:3F:0F:02, ipv6: 2602:fcfb:1f:2::2, vlan: 2068, local_name: HundredGigE0/0/0/9, device_name: amst-data-sw}', 'name': 'node-node-nic1-p1', 'node_map': "('bbf6a0a7-8981-4613-b797-0960e7e8ea9d', 'port+amst-data-sw:HundredGigE0/0/0/9')", 'type': 'ServicePort'}

Thanks,
Komal
Hi Fraida,
Thank you for reporting this issue. There was a leaked network service; I have cleaned it up. Please try your slice again and it should work. Regarding the slice state, it is a bug and we are working on it.
Thanks,
Komal
August 23, 2023 at 9:25 pm in reply to: Maintenance on FABRIC-Network AM – 08/23/2022 (9:00pm-10:00pm EST) #5099

The maintenance is complete. The testbed is open for use!
Thanks, Gregory, for reporting this. It was indeed a bug when running in script mode. The fix is now available on the main branch of https://github.com/fabric-testbed/fabrictestbed-extensions. The fix is also available in the "Beyond Bleeding Edge" container.
Once the new version, fabrictestbed-extensions==1.5.4, is pushed to PyPI, it will be available there as well.
Thanks,
Komal

Thank you for sharing your slice details! Are you running this as a script?
The difference in behavior could be because of the wait handling in Jupyter Notebook v/s a script.
The ModifyOK state indicates that the modify was successful. You can invoke slice.update() to move the slice into the StableOK state, as sketched below. Connectivity should work as well. Also, could you please share the script you are running, so I can emulate this and address the difference in the wait behavior to avoid the need for an explicit update()?
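A minimal sketch, assuming your fablib environment is configured (the slice name is a placeholder):
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(name="<your-slice-name>")  # placeholder name
slice.update()            # refreshes the slice state: ModifyOK -> StableOK
print(slice.get_state())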
Thanks,
Komal

Regarding the SSH problem: if the Configure Environment notebook was run again with the same file names for the sliver keys, the sliver private/public keys were overwritten. The SSH keys in ~/work/fabric_config/ are now different from those in the VM, resulting in SSH failures. You can verify this with:
ls -ltr ~/work/fabric_config
Thanks,
Komal

Hi Gregory,
Which container are you using? I tried the IPv4Ext and IPv6Ext notebooks on the 1.5.3 "Beyond Bleeding Edge" container and did not see this issue. Also, if you run into this issue, please do not delete the slice; keeping it helps us gather the slice information for debugging.
Also, please share the output of the following command from your container:
pip list | grep fabric
cat ~/work/fabric_config/requirements.txt
Thanks,
Komal
Also, please check that the correct Project ID is set in your Configure Environment notebook; you can confirm the active configuration as sketched below.
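A minimal sketch for confirming which project is in use (assumes the default JupyterHub configuration):
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
fablib.show_config()  # prints the active configuration, including the project ID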