• Intel VROC ready and NVMe RAID support on AMD Ryzen Threadripper
  • New two-phase power solution with up to 14 W output
  • Supports four additional NVMe M.2 drives using Intel VROC for transfer speeds up to 128 Gbps (see the quick bandwidth math after this list)
  • PCI Express 3.0 x16 interface, compatible with PCI Express x8 and x16 slots
  • Stylish heatsink and integrated blower-style fan prevent M.2 throttling
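For anyone wondering where that 128 Gbps figure comes from, here is a rough back-of-the-envelope check (my own arithmetic, not ASUS's spec sheet): 16 lanes of PCIe 3.0 at 8 GT/s each is 128 Gbps raw; after 128b/130b encoding overhead the usable number is closer to 126 Gbps, or about 15.75 GB/s.

```python
# Back-of-the-envelope check of the "128 Gbps" figure for a PCIe 3.0 x16 slot.
LANES = 16
GEN3_GTPS = 8          # GT/s per lane for PCIe 3.0
ENC = 128 / 130        # 128b/130b encoding efficiency

raw_gbps = LANES * GEN3_GTPS       # 128 Gbps -- the number on the box
effective_gbps = raw_gbps * ENC    # ~126 Gbps after encoding overhead
print(f"raw: {raw_gbps} Gbps, effective: {effective_gbps:.1f} Gbps "
      f"(~{effective_gbps / 8:.2f} GB/s)")
```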

Product does what it indicates very well. V2 adds a second power phase, if you were wondering. I have both versions and they work well. You just need to know whether your motherboard supports x4x4x4x4 or x4x4 bifurcation, for four or two drives respectively. This will most likely be challenging in a desktop setup with a dedicated graphics card. If you use onboard video you should be fine, or use the card in an HEDT X399/X299 setup where you have the PCIe lanes available.
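To make the bifurcation rule concrete, here is a toy sketch (the function name is my own, purely illustrative): each link group in the BIOS setting feeds at most one of the card's four M.2 slots.

```python
# Toy model of the bifurcation rule: each link group in the BIOS setting
# ("x4x4x4x4", "x4x4", plain "x16") reaches at most one M.2 slot on the card.
def drives_detected(mode: str) -> int:
    """Count link groups; one group = one usable M.2 slot."""
    return mode.lower().count("x")

print(drives_detected("x4x4x4x4"))  # 4 -> all four drives enumerate
print(drives_detected("x4x4"))      # 2 -> two drives enumerate
print(drives_detected("x16"))       # 1 -> only the first drive shows up
```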

I purchased this for a Threadripper build I recently completed. Using the Hyper M.2 add-in card, I was able to add 3x 1TB Samsung 970 EVO NVMe SSDs. Installation was a breeze, and the card came with the necessary mounting hardware. I used the card to run the 970 EVOs as a storage drive in RAID 0. Yes, very overkill. Nevertheless, I cannot say enough how happy I am with the outcome.

I honestly don't know how ASUS is making money on this. The aluminum heatsink is substantial and comes with pre-applied thermal pads; you just have to remove the protective film. I'm honestly thinking about buying another, since it seems too good to be true. This has to be the work of some small internal team at ASUS that hasn't been noticed yet, because the value on this card is so good. It kept my two 1TB 970 EVO Plus drives below 40°C during a 2+ hour RAID conversion I did while upgrading. Sustained reads were bottlenecked by my 4 GHz overclocked Threadripper 1950X not being able to keep up with transfers, so I'm sure it could go faster, but I saw a peak read of over 3 GB/s when moving my VM ISOs over. Well done, ASUS.

Works well. I am using it as a Steam library and documents folder. The 4K randoms are almost as high as a single 970 EVO Plus NVMe, and from there on it is nothing but high throughput all the way to 8 GB/s for large file transfers. For the best 4K randoms, though, leave them as standalone drives.

I thought this card was too good to be true. I needed a way to stitch 4 NVMe drives together for a software RAID, and this seemed to fit the bill. The package comes with the card, pre-applied thermal pads, and the M.2 screws needed to fasten the drives. After the initial boot into UEFI, sure enough, only 1 of the 4 drives was detected. This was because the PCIe slot on my motherboard was set to x16. Since bifurcation is supported on the X399 Taichi, I was able to set the slot to 4x4x4x4. After saving and rebooting, all four showed up. If you intend to buy this, understand that you can't stick it into any motherboard/CPU combo and expect good results. I suspect a lot of the one-star reviews you see here are a result of this. Do your research first!
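If you want to confirm the drives actually enumerated after changing the slot setting, here is one quick check, assuming a Linux box (on Windows, Device Manager or the UEFI storage page tells you the same thing):

```python
# Count the NVMe controllers the kernel enumerated -- after switching the
# slot to 4x4x4x4 you should see all four drives listed here.
import os

path = "/sys/class/nvme"
controllers = sorted(os.listdir(path)) if os.path.isdir(path) else []
print(f"{len(controllers)} NVMe controller(s) detected: {controllers}")
```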

This thing is great. There is an ASRock version, but it requires plugging in a PCIe power plug; this one from ASUS doesn't. Just remember, your usage requires available PCIe lanes and the ability to set the PCIe slot to bifurcation in the BIOS. If you only have an x8 slot or 8 lanes of PCIe available, you can only use two NVMe drives. If you have 16 lanes available, then you can use all 4 drives. With my Threadripper, I placed my RTX 2060 in an x8 slot. I'm not a gamer, but it still pushes out to my three 4K panels at x8. I then have 11 NVMe drives in adapters and 3 more in the motherboard M.2 slots: 14 NVMe drives total with 17.5 TB of space. If they were all 2TB drives, it would be 28 TB of space. As you can see from the image, no SATA SSDs or mechanical drives. And the adapters don't hinder or slow down the drives; I've run CrystalDiskMark on drives in the mobo slots and they score the same as the adapter slots.
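The lanes-to-drives rule described above is easy to capture in a few lines (helper name is mine, just for illustration): the card needs an x4 link per drive, so divide the slot's lanes by four and cap at the card's four slots.

```python
# Lanes-to-drives rule of thumb: each M.2 slot on the card needs an x4 link.
def usable_drives(slot_lanes: int) -> int:
    return min(slot_lanes // 4, 4)  # the card has four M.2 slots

print(usable_drives(16))  # 4 drives in a full x16 slot
print(usable_drives(8))   # 2 drives in an x8 slot
print(usable_drives(4))   # 1 drive
```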

Took a little bit of trial and error, but this card is finally working on a Gigabyte X570 Master motherboard with PCIe Gen 4 NVMe SSDs. It is important to understand how the motherboard shares the PCIe lanes going to the CPU. This particular motherboard has only 16 CPU lanes, which it shares across the first two PCIe slots. So with this card installed alone, all 4 PCIe Gen 4 NVMe SSDs are available. With a GPU installed in the other slot, the Hyper M.2 only has 8 PCIe lanes available, so only 2 NVMe SSDs are usable. The fun part is that the card does not interfere with the PCIe Gen 4 speeds. I put the two usable SSDs in RAID 0 and the results are ridiculous: 9.6 GB/s read and 8.2 GB/s write. See the benchmark screenshot for details.
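Those numbers line up with the link math (my own arithmetic; it ignores protocol overhead beyond line coding): a Gen 4 x4 link tops out near 7.9 GB/s per drive, so two striped drives have roughly a 15.8 GB/s ceiling, comfortably above the 9.6 GB/s measured.

```python
# Sanity check of the RAID 0 result against the PCIe Gen 4 link ceiling.
GEN4_GTPS = 16        # GT/s per lane for PCIe 4.0
ENC = 128 / 130       # 128b/130b encoding efficiency

per_drive_gbs = GEN4_GTPS * 4 * ENC / 8   # x4 link -> ~7.88 GB/s per SSD
raid0_ceiling = 2 * per_drive_gbs         # two drives striped
print(f"per drive: {per_drive_gbs:.2f} GB/s, RAID 0 ceiling: {raid0_ceiling:.2f} GB/s")
```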

I installed 1 stick, a Samsung 970 EVO Plus 2TB M.2 NVMe SSD, on an EVGA X99 FTW K mobo from a 2016 build, and I am simply amazed at how easy it was! Straight plug and play, no need for driver installation; Windows 10 Pro recognized it on first boot! People who are having problems either don't understand the technology or have little experience setting up RAID, and from what I've learned, not all mobos are compatible with a RAID build on this card.

UPDATE: I purchased a 2nd NVMe stick and, sad to say, the X99-chipset mobo will only recognize 1 stick and no more. This is because this expansion card is for X299 chipsets and above; I was lucky to even get 1 stick to work. I would also note that I modded the board with some extra Raspberry Pi mini heatsinks and used thermal pads to transfer heat from the controller chip into the main heatsink. My temps are 34°C while playing games, which is incredible after hearing about people getting 60°C, 76°C, even 99°C while running games off their M.2 stick.

This works perfectly in my MSI MEG X399 Creation system, replacing the 2-slot card with a huge, loud fan that MSI packaged with the system board. This is a much sleeker solution; the blower isn't terribly loud (especially by comparison). It accepts up to 110mm M.2 sticks; I have 4x 1TB 970 EVO drives installed and run them in RAID 0. I can clock 10 GB/s reads/writes under the right circumstances. It seems to handle heat a bit better, too, but then I can run the fan on this one, whereas I had to turn the fan off on the MSI solution, so that makes sense.

I didn't know this until I bought this card, but the number of lanes available is fixed by your CPU. If you already use a GPU and Optane, which I do, that's 16 PCIe lanes for the GPU and 4 PCIe lanes for Optane. I had a Ryzen 2600 CPU, which only has 16 lanes, meaning I had unknowingly already forced my GPU down to 8 lanes instead of the full 16. So when I added this card, which requires 4 lanes per NVMe drive, I could add one more drive, but not all four. After much research, I learned more about PCIe lanes. So, if you're considering this, just make sure you go to CPU-World.com, find out how many PCIe lanes your CPU has, and confirm you have enough for all the devices you want to use in your computer. I switched to a Threadripper 1900X, which has 64 lanes. 16 (GPU) + 4 (Optane) + 16 (4x NVMe drives) + 4 (2TB NVMe OS drive) = 40 lanes consumed, so I have 24 left for other devices. Learn from my mistakes. To use RAID on the drives, you can select the drives in Windows Disk Management and set them up as a striped volume. Striped meaning RAID 0.
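The lane budget above is easy to redo for your own parts list; here is the same tally as a small sketch (device figures are the reviewer's, the layout is just illustrative):

```python
# Reviewer's lane budget on a Threadripper 1900X (64 PCIe lanes per the review).
TOTAL_LANES = 64

devices = {
    "GPU (x16)": 16,
    "Optane (x4)": 4,
    "Hyper M.2 card, 4x NVMe at x4 each": 16,
    "2TB NVMe OS drive (x4)": 4,
}

used = sum(devices.values())
print(f"used: {used} lanes, remaining: {TOTAL_LANES - used}")  # used: 40, remaining: 24
```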