FABRIC General Questions and Discussion › Establish communication between FPGA to GPU via PCIe
Hi,
I’m a UCSD grad student, and I was wondering if there is any way I can use the FABRIC testbed to establish communication between an FPGA and a GPU (ideally over PCIe). My ideal setup would be FPGA-to-GPU over PCIe, with data sent and received via RDMA.
Any help would be appreciated, thanks!
Hi Paresh,
Currently, FABRIC allows users to create VMs where GPUs or FPGAs can be attached via PCI passthrough. However, direct communication between an FPGA and a GPU over PCIe (such as peer-to-peer DMA or RDMA transfers) is not supported.
This is because for true PCIe peer-to-peer access, both devices need to be physically located on the same host and share the same PCIe root complex or switch. At present, none of the FABRIC nodes have both a GPU and an FPGA installed on the same host.
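To make the co-location requirement concrete: on Linux, a PCI device’s sysfs path begins with the root complex it hangs off, so two devices can only be peer-to-peer candidates if their paths share that leading segment. Here is a minimal sketch of that check; the device paths below are hypothetical examples, not addresses from any FABRIC host.

```python
# Sketch: two PCI devices are candidates for peer-to-peer DMA only if they
# sit under the same PCIe root complex. On Linux, a device's sysfs path
# starts with the root-complex segment, e.g. /sys/devices/pci0000:00/...
# The paths below are hypothetical, not real FABRIC inventory.

def root_complex(sysfs_path: str) -> str:
    """Return the root-complex segment (e.g. 'pci0000:00') of a sysfs PCI path."""
    parts = sysfs_path.strip("/").split("/")
    # parts looks like ['sys', 'devices', 'pci0000:00', '0000:00:02.0', ...]
    return parts[2]

gpu_path = "/sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0"   # hypothetical GPU
fpga_path = "/sys/devices/pci0000:80/0000:80:03.0/0000:81:00.0"  # hypothetical FPGA

print(root_complex(gpu_path) == root_complex(fpga_path))  # False: different root complexes
```

With a GPU on one host and an FPGA on another, the devices are not even on the same machine, so this test can never pass on FABRIC today.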
If you’d like to double-check inventory yourself, you can list host capabilities with fablib:
from fabrictestbed_extensions.fablib.fablib import FablibManager as fablib_manager
fields = [
    'name',
    'fpga_sn1022_capacity', 'fpga_u280_capacity',
    'rtx6000_capacity', 'tesla_t4_capacity', 'a30_capacity', 'a40_capacity'
]
fablib = fablib_manager()
output_table = fablib.list_hosts(fields=fields)
You’ll see per-host capacities for each device type; hosts that list FPGA capacity don’t also list GPU capacity (and vice versa), confirming that GPU+FPGA co-location isn’t available.
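If you’d rather check programmatically than eyeball the table, a small sketch like the one below flags any host advertising both FPGA and GPU capacity. The rows here are made-up sample data for illustration; in practice you would build them from the output of list_hosts.

```python
# Sketch: given per-host capacity rows (the sample data below is
# illustrative, not actual FABRIC inventory), flag hosts that advertise
# both FPGA and GPU capacity.

FPGA_FIELDS = ['fpga_sn1022_capacity', 'fpga_u280_capacity']
GPU_FIELDS = ['rtx6000_capacity', 'tesla_t4_capacity', 'a30_capacity', 'a40_capacity']

def colocated_hosts(rows):
    """Return names of hosts whose row shows nonzero FPGA AND GPU capacity."""
    return [
        r['name'] for r in rows
        if any(r.get(f, 0) for f in FPGA_FIELDS)
        and any(r.get(g, 0) for g in GPU_FIELDS)
    ]

# Illustrative sample rows (hypothetical host names):
sample = [
    {'name': 'host-a', 'fpga_u280_capacity': 1},   # FPGA only
    {'name': 'host-b', 'tesla_t4_capacity': 2},    # GPU only
]
print(colocated_hosts(sample))  # [] -> no GPU+FPGA co-location
```

Running this over the real inventory would return an empty list today, which is exactly the constraint described above.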
Best regards,
Komal