Forum Replies Created
@Kriti – the hypervisor on wash-w3 was down this morning and has been recovered. Issues on WASH should clear now. I also verified that TACC is working. Please try your slices again and let us know if you still face errors.
@Nagmat – there was a leaked service caused by a timeout from the TACC switch. I have cleaned up the leaked services, so your slice provisioning should work now. Please let us know if you still face errors.
December 11, 2023 at 5:06 pm in reply to: Maintenance on Network AM – 12/11/2023 (3:30pm-4:30pm EST) #6182
Maintenance has been completed!
Hi Kriti,
There was an issue on new-y2, the worker where your VMs were being provisioned: it had some leaked VMs. We rebooted the worker node, so your slices should work on NEWY. We will also check STAR and WASH.

Thanks,
Komal
Hello,
Could you please check whether the file exists at the specified path using the following command:
ls /home/fabric/work/re_vit/notebooks/animal-blur-canine-551628.jpg
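Equivalently, you can check from within the notebook with Python (a minimal, hedged sketch):

# Check for the file from Python instead of the shell
import os
print(os.path.exists("/home/fabric/work/re_vit/notebooks/animal-blur-canine-551628.jpg"))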
Thanks,
Komal

November 12, 2023 at 3:33 pm in reply to: Maintenance on Network AM – 11/12/2023 (3:00pm-4:00pm EST) #6089
The maintenance is complete!
You are right, Greg: this depends entirely on how much memory is currently available on the NUMA node of the host where your VM is launched.
@yoursunny
What happens if there are multiple components that are on distinct NUMA sockets?
If you have multiple components, we try to pin the VM's memory to all of the corresponding NUMA nodes. Example: if your VM has a ConnectX-5 and a GPU on different sockets, invoking numa_tune() pins the memory to both sockets, provided that the combined available memory on the two sockets is >= the requested VM RAM. A minimal sketch follows below.
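For illustration, a hedged fablib sketch of this case (the site, slice name, and component names are placeholders, not taken from this thread):

# A VM with two components that may land on different NUMA sockets;
# numa_tune() then tries to pin the VM's memory to both nodes.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="numa-demo")
node = slice.add_node(name="node1", site="ATLA", cores=8, ram=32)
node.add_component(model="NIC_ConnectX_5", name="nic1")
node.add_component(model="GPU_RTX6000", name="gpu1")
slice.submit()

# Pin memory to the NUMA node(s) of the attached components
slice.get_node(name="node1").numa_tune()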
Is it possible to specify how much RAM to pin to each NUMA socket?
In the current version, this is not supported; we may also be limited by the underlying OS API, but we will explore improving it.

If we pin a CPU core or a certain amount of RAM to a NUMA socket, does it prevent other VMs from using the same CPU core or RAM capacity?
Yes, if you have pinned CPUs/memory to a specific NUMA socket, other VMs cannot use the same cores/memory on that socket. For CPU pinning, you can explicitly specify how many cores to pin to a NUMA node; see the sketch below.
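For example (a hedged sketch; the exact pin_cpu() signature and the names here are assumptions based on current fablib examples):

# Pin a range of the VM's vCPUs to the NUMA node hosting a given component,
# then pin memory and reboot so the changes take effect.
node = slice.get_node(name="node1")
node.pin_cpu(component_name="nic1", cpu_range_to_pin="0-3")  # pin vCPUs 0-3
node.numa_tune()
node.os_reboot()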
Thanks,
Komal
No. Requesting less memory, or deploying on a relatively lightly used site, would give you a better chance of success. I checked on the portal and GPN seems to be very sparsely used. Please consider requesting the VM there with 32G of RAM.
The upper limit for a VM connected to only one component maps to a single NUMA node. The memory limit for a NUMA node is 64G, so exceeding that limit will not work. A sketch of a retry that fits within one node follows below.
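For instance, a hedged sketch of such a retry (slice and node names are placeholders):

# Request the VM on the sparsely used GPN site with 32G RAM, which fits
# comfortably within one 64G NUMA node.
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="numa-retry")
node = slice.add_node(name="node1", site="GPN", cores=16, ram=32, disk=100)
slice.submit()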
Adding more flexibility to this API would help alleviate the issue. We will definitely work on that and keep you updated once it is available.
Thanks,
Komal

Hello Nirmala,
Could you please share your Slice ID to help debug this issue?
Also, could you please provide more details on what you mean by the reservation not completing? Does the reservation stay in the Ticketed state or time out? Does it become Active?
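In case it helps, the slice ID can be printed from fablib (a minimal sketch; the slice name is a placeholder):

# Look up the slice by name and print its ID to share on the forum
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.get_slice(name="MySlice")
print(slice.get_slice_id())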
Thanks,
Komal

Hello Kriti,
Could you please share your Slice ID to help debug this issue?
Thanks,
Komal

Hello Greg,
In the current implementation, node.numa_tune() tries to pin the memory for a VM to the NUMA nodes belonging to the components attached to the VM.

Looking at the sliver details below, this sliver has 64G of memory allocated, and the NUMA node for the component attached to this VM is Node 1. In our topology we have 8 NUMA nodes per worker, each allocated 64G of memory. The error message above means that the requested memory (64G) is not available on NUMA Node 1, so the VM's memory cannot be pinned to it.
'sliver_id': '0764c99c-0e76-4aaa-94de-c291bd2b23f0',
'name': 'compute1-ATLA',
'capacities': '{ core: 16, ram: 64 G, disk: 500 G }',
'capacity_allocations': '{ core: 16, ram: 64 G, disk: 500 G }'
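A hedged sketch of working around this in a notebook (the broad exception type is an assumption; fablib may raise something more specific):

# numa_tune() fails when the component's NUMA node cannot supply the full
# requested RAM; catch the failure instead of letting the notebook abort.
node = slice.get_node(name="compute1-ATLA")
try:
    node.numa_tune()
except Exception as e:
    print(f"Memory pinning failed: {e}")
    # Recreating the VM with RAM below the 64G per-node limit (e.g. 32G)
    # improves the odds that pinning succeeds.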
Also, in the current version, the API does not allow pinning only a portion of the memory to the NUMA node. We will work on adding that capability in the next release to serve memory requests better. We appreciate your feedback!
Thanks,
Komal
October 24, 2023 at 7:29 pm in reply to: Maintenance on FABRIC-Network AM – 10/24/2022 (6:00pm-7:00pm EST) #5901
The topology update is complete and the maintenance has been lifted.
Thanks,
Komal
Thank you for reporting this issue, Manas!
Node2 (STAR) and Node7 (UCSD) are in the Closed state and hence do not have a management IP.
Both of these nodes failed to provision with the error:
Last ticket update: Redeem/Ticket timeout
We are currently investigating and will keep you posted with our findings.
Also, in the meanwhile, could you please share your notebook? We are trying to see whether we can reproduce this consistently; we have not been successful in recreating the problem so far.
Appreciate your help in making the testbed better!
Thanks,
Komal
Good morning Bruce,
Thank you for sharing your observations. VM provisioning to the worker (mass-w1.fabric-testbed.net) to which your VM was allocated is not working, and we are working to resolve it. In the meanwhile, please consider creating a slice on a different site or a different worker. A node can be requested on a specific worker by passing in the host field as below:
node1 = slice.add_node(name="Node1", cores=16, ram=32, site="MASS", image='docker_rocky_8', host="mass-w2.fabric-testbed.net")
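To pick a worker, the available hosts and their capacities can be listed first (a hedged sketch; the output columns vary by fablib version):

# List worker hosts with their available cores/RAM/disk
fablib.list_hosts()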
Thanks,
Komal
This looks like a bug and we will address it. In the meanwhile, I modified your script to run with the latest fablib to get past this issue. Sharing it here; please note the changes around setting the interface mode to auto, which lets fablib configure IP addresses, and around running post_boot_config(), which ensures the instantiated state is set. A minimal sketch of these changes follows below.
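A hedged sketch of those two changes (slice, node, and network names are placeholders, not taken from the original script):

# Let fablib assign IP addresses, then run post-boot configuration
from fabrictestbed_extensions.fablib.fablib import FablibManager

fablib = FablibManager()
slice = fablib.new_slice(name="my-slice")
node = slice.add_node(name="node1", site="STAR")
iface = node.add_component(model="NIC_Basic", name="nic1").get_interfaces()[0]
net = slice.add_l3network(name="net1", interfaces=[iface], type="IPv4")
iface.set_mode("auto")    # fablib configures the IP address on this interface
slice.submit()
slice.post_boot_config()  # ensures the instantiated state/config is applied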