The NVMe Advantage
If your FPGA application needs non-volatile storage, NVMe is the best solution, hands down. Here's why:

No IP costs.
PCIe SSDs interface directly with the PCIe blocks integrated into your FPGA, so there is no need for expensive SAS or SATA IP cores.
Faster than SATA.
A 4-lane PCIe Gen3 interface provides roughly 3.9 GB/s of usable bandwidth, versus about 600 MB/s for SATA III, and the NVMe protocol stack has much lower latency.
Linux support.
All major Linux distributions, including PetaLinux, have in-box NVMe driver support.
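As a quick sanity check, once the in-box driver has enumerated the SSD, it shows up as a standard block device. The short C sketch below, which assumes the first namespace appears as /dev/nvme0n1 (a hypothetical node name that depends on your system), opens the device read-only and reads its first 4KiB:

```c
/* Minimal sketch: verify that the in-box NVMe driver exposes the SSD
 * as a standard block device. Assumes the first namespace appears as
 * /dev/nvme0n1 -- adjust for your system. Build with: gcc -o nvmechk nvmechk.c */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/nvme0n1";   /* assumed device node */
    unsigned char buf[4096];

    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");                 /* driver not loaded or no SSD found */
        return EXIT_FAILURE;
    }

    ssize_t n = pread(fd, buf, sizeof(buf), 0);  /* read first 4KiB */
    if (n < 0) {
        perror("pread");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("Read %zd bytes from %s -- NVMe driver is up.\n", n, dev);
    close(fd);
    return EXIT_SUCCESS;
}
```

If the read succeeds, the driver and the PCIe link are working, and you can partition and format the device as you would any other disk.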
Features
- M.2 SSD sockets: for connection with standard M.2 form factor PCI Express SSDs
- HPC FMC connector: for connection with FPGA development boards
- 100MHz oscillators: provide the reference clocks for both the FPGA and the SSDs
- Example designs: get up and running as soon as possible
Product description

Power supply
All power to the FPGA Drive FMC is supplied through the carrier's FMC connector. The FPGA Drive FMC uses the FMC's 3.3VDC supply to power one of the SSDs, and it has a switching regulator that powers the other SSD from the FMC's 12VDC supply.
The adjustable voltage supply (VADJ), which is supplied by all standard FMC carriers, can be set to any voltage between 1.8V and 3.3V. The FPGA Drive FMC’s onboard FRU EEPROM specifies a 1.8V VADJ for all carriers that have a power management system.
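On carriers that expose the FMC I2C bus to Linux, the FRU data can be inspected at run time. The following C sketch is illustrative only: it assumes the EEPROM has been bound to the at24 driver and appears at the sysfs path /sys/bus/i2c/devices/0-0050/eeprom (both the bus number and the 0x50 address are assumptions that depend on your carrier), and it dumps the 8-byte IPMI FRU common header:

```c
/* Illustrative sketch: dump the IPMI FRU common header of the FMC
 * FRU EEPROM. Assumes the EEPROM is bound to the at24 driver at the
 * sysfs path below -- the I2C bus number and device address (0x50
 * here) depend entirely on your carrier board. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/bus/i2c/devices/0-0050/eeprom"; /* assumed path */
    unsigned char hdr[8];               /* IPMI FRU common header is 8 bytes */

    FILE *f = fopen(path, "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr)) {
        perror("fread");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Print the raw header bytes in hex for inspection */
    for (size_t i = 0; i < sizeof(hdr); i++)
        printf("%02x ", hdr[i]);
    printf("\n");
    return 0;
}
```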
Interface with FPGA
The FPGA Drive FMC connects to the FPGA development board through its HPC FMC connector; each SSD uses up to 4x of the FMC's gigabit transceivers (see the transceiver assignments below).
M.2 sockets
The two M.2 sockets accept standard M.2 form factor PCI Express SSDs (the phased-out Rev-B version has only one socket).
100MHz Clock Oscillators
Onboard 100MHz oscillators provide the PCIe reference clocks for both the FPGA and the SSDs.
Dimensions
The FPGA Drive FMC adapter measures 69mm x 141.5mm. Due to the length of M.2 modules, it does not conform to the mechanical specification of the VITA 57.1 standard.
- FPGA Drive FMC Rev-B mechanical drawing (version for 1x SSD, phased out)
- FPGA Drive FMC Rev-D mechanical drawing (version for 2x SSDs)
Compatibility
Carrier Compatibility Table
Carrier | First SSD | Second SSD |
---|---|---|
PicoZed 7015 and PicoZed FMC Carrier Card V2 | LPC: 1-lane Gen1 | Not supported |
PicoZed 7030 and PicoZed FMC Carrier Card V2 | LPC: 1-lane Gen2 | Not supported |
KC705 | HPC: 4-lane Gen2; LPC: 1-lane Gen2 | HPC: Not supported; LPC: Not supported |
VC707 | HPC1: 4-lane Gen2; HPC2: 4-lane Gen2 | HPC1: 4-lane Gen2; HPC2: 4-lane Gen2 |
VC709 | HPC: 4-lane Gen3 | HPC: 4-lane Gen3 |
ZC702 | Not supported (no GTX) | Not supported (no GTX) |
ZC706 | HPC: 4-lane Gen2; LPC: 1-lane Gen2 | Not supported (Zynq-7000 devices only have 1 PCIe block) |
KCU105 | HPC: 4-lane Gen3; LPC: 1-lane Gen3 | HPC: 4-lane Gen3; LPC: Not supported |
VCU108 (no example design provided) | HPC0: 4-lane Gen3; HPC1: 4-lane Gen3 | HPC0: 4-lane Gen3; HPC1: 4-lane Gen3 |
VCU118 (no example design provided) | HPC: Not supported; FMC+: 4-lane Gen3 (see note 4) | HPC: Not supported; FMC+: 4-lane Gen3 (see note 4) |
ZCU102 (no example design provided) | HPC0: 4-lane Gen3; HPC1: 4-lane Gen3 (using soft IP) | HPC0: 4-lane Gen3; HPC1: 4-lane Gen3 (using soft IP) |
ZCU106 | HPC0: 4-lane Gen3; HPC1: 1-lane Gen3 | HPC0: 4-lane Gen3; HPC1: Not supported |
ZCU111 | FMC+: 4-lane Gen3 (see note 4) | FMC+: 4-lane Gen3 (see note 4) |
TEBF0808-04 (no example design provided) | HPC: 4-lane Gen3 (using soft IP) | HPC: 4-lane Gen3 (using soft IP) |
Notes:
1. FPGA Drive FMC is not compatible with the ZC702 board, as it does not have any gigabit transceivers.
2. The VCU118's HPC FMC connector does not have any gigabit transceivers connected to it, so it cannot support the FPGA Drive FMC.
3. FPGA Drive FMC is compatible with the ZCU102 board; however, it must be connected to a soft PCIe IP, and we do not currently provide an example design for this.
4. For compatibility with FMC+ connectors, FPGA Drive FMC must be used with an FMC extender such as the AES-FMC-EXT-G from Avnet.
FMC Transceiver Assignments
The table below outlines the assignment of the FMC transceivers (DP0 to DP7) to the 2x SSDs. Each SSD is connected to 4x gigabit transceivers: the first SSD (SSD1) connects to transceivers DP0-DP3, while the second SSD (SSD2) connects to transceivers DP4-DP7 (the short sketch after the table prints this mapping). Note that all LPC connectors have a maximum of 1x gigabit transceiver (DP0), so they can only support a 1-lane PCIe connection to SSD1 and cannot support SSD2. Also note that not all carriers with HPC connectors have all the transceivers connected.
FMC Gigabit Transceiver | SSD1 PCIe Lane | SSD2 PCIe Lane |
---|---|---|
DP0 | Lane 0 | |
DP1 | Lane 1 | |
DP2 | Lane 2 | |
DP3 | Lane 3 | |
DP4 | | Lane 0 |
DP5 | | Lane 1 |
DP6 | | Lane 2 |
DP7 | | Lane 3 |
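For a quick reference that can be checked against your pin constraints, the small illustrative C program below (not part of the example designs) prints the transceiver-to-lane assignment from the table above:

```c
/* Quick-reference printout of the FMC transceiver-to-PCIe-lane mapping
 * used by FPGA Drive FMC: DP0-DP3 carry SSD1 lanes 0-3, and DP4-DP7
 * carry SSD2 lanes 0-3. Illustrative only -- not part of the example
 * designs. */
#include <stdio.h>

int main(void)
{
    for (int dp = 0; dp < 8; dp++) {
        int ssd  = (dp < 4) ? 1 : 2;  /* SSD1 on DP0-DP3, SSD2 on DP4-DP7 */
        int lane = dp % 4;            /* lane number within each x4 link  */
        printf("DP%d -> SSD%d PCIe lane %d\n", dp, ssd, lane);
    }
    return 0;
}
```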
Example designs

Our Github repo contains single SSD and dual SSD example designs for these FPGA/MPSoC evaluation boards.
Target board | Single SSD design | Dual SSD design |
---|---|---|
PicoZed FMC Carrier Card V2 with PicoZed 7015/30 | LPC: Yes | Not supported |
KC705 Evaluation board | HPC: Yes; LPC: Yes | Not supported |
KCU105 Evaluation board | HPC: Yes; LPC: Yes | HPC: Yes; LPC: Not supported |
VC707 Evaluation board | HPC1: Yes; HPC2: Yes | Coming soon for HPC1 and HPC2 |
VC709 Evaluation board | HPC: Yes | Coming soon |
ZC706 Evaluation board | HPC: Yes; LPC: Yes | Not supported (Zynq-7000 devices only have 1x PCIe block) |
ZCU106 Evaluation board | HPC0: Yes; HPC1: Yes | HPC0: Yes; HPC1: Not supported |
If you are using the older version (Rev-B) of the FPGA Drive FMC, which has only one M.2 connector, you will only be able to use the single SSD designs.
How-to Videos
Hardware Installation Guide
Loopback Testing with IBERT
Part 1: Hardware setup
How to attach the M.2 loopback modules and prepare your hardware for the IBERT loopback test.
Part 2: Using IBERT in Vivado
Download the pre-built IBERT bitstream for your target board.
Try this: disable the DFE (decision feedback equalizer) when doing a 2D eye scan – the gigabit traces on the FPGA Drive FMC have very low losses, so the performance is generally better without the DFE.
Part 3: Generate your own IBERT
How to generate an IBERT bitstream for your own hardware if you don’t find a pre-built bitstream listed above.
