AIStor Runs Natively Inside NVIDIA BlueField-4: Object Data at Wire Speed
NVIDIA BlueField-4 puts storage in the AI factory.
Modern AI applications don't have only a compute problem. They also have a storage problem. Agentic workflows, long-context inference, and multi-model pipelines generate state in the form of checkpoints, embeddings, KV cache, and intermediate activations, all of which have to move fast, live close to the GPU, and scale without adding operational weight. Storage either keeps up or becomes the bottleneck. Most of the industry is still catching up to that reality.
We saw it coming.
Built for Embedded Systems. Optimized for ARM.
A single static binary with all the features, including the web management console. Just under 200MB. No metadata databases. No background reconciliation services. No assumptions about available compute or memory headroom. These weren't accidents of good engineering. They were deliberate choices, made early, because we knew storage software that couldn't fit inside the infrastructure it serves would eventually get in its way.
We started optimizing for ARM when most storage vendors weren't paying attention. We built SVE-optimized erasure coding and integrity checking because we understood where the silicon was going and we wanted to be there first. Our Reed-Solomon implementation using ARM's Scalable Vector Extension doubled throughput compared to NEON. HighwayHash scales linearly with core count until it hits memory bandwidth limits. This is what it looks like to build storage for the hardware that's actually coming, not the hardware that was comfortable.
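AIStor's production code is Reed-Solomon erasure coding over SVE-accelerated vector math. As a minimal sketch of the underlying idea only, the Go example below uses a single XOR parity shard, which can rebuild exactly one lost data shard; Reed-Solomon generalizes this to multiple parity shards and multiple failures. The function names here are illustrative, not AIStor's API.

```go
package main

import (
	"bytes"
	"fmt"
)

// xorParity computes one parity shard as the byte-wise XOR of all data
// shards. (Illustrative stand-in for Reed-Solomon parity generation.)
func xorParity(shards [][]byte) []byte {
	parity := make([]byte, len(shards[0]))
	for _, s := range shards {
		for i, b := range s {
			parity[i] ^= b
		}
	}
	return parity
}

// recoverShard rebuilds one missing data shard by XOR-ing the surviving
// shards back into the parity.
func recoverShard(survivors [][]byte, parity []byte) []byte {
	missing := append([]byte(nil), parity...)
	for _, s := range survivors {
		for i, b := range s {
			missing[i] ^= b
		}
	}
	return missing
}

func main() {
	data := [][]byte{[]byte("obj-"), []byte("ect "), []byte("data")}
	parity := xorParity(data)

	// Simulate losing shard 1 and rebuilding it from the rest.
	rebuilt := recoverShard([][]byte{data[0], data[2]}, parity)
	fmt.Println(bytes.Equal(rebuilt, data[1])) // true
}
```

The inner byte loop is exactly the kind of work SVE vectorizes: wide, data-parallel XOR and Galois-field multiplies, which is why the SVE implementation doubled throughput over NEON.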
That's why AIStor runs natively inside the NVIDIA BlueField-4 processor. Not because we adapted to the opportunity, but because we engineered toward it.
The Right Platform Changes Everything.
The AI factory has different requirements than a data center built around x64 servers, and NVIDIA BlueField-4 reflects that reality. It combines 800 Gb/s networking with an NVIDIA Vera CPU, 128 GB of LPDDR5‑class memory—four times that of BlueField‑3—and PCIe Gen6 bandwidth to keep up with 800 GbE. Delivering six times the compute of its predecessor, NVIDIA BlueField‑4 is dedicated AI infrastructure silicon in a way traditional x64 host CPUs never were.
This is exactly the kind of platform modern AI applications demand. Agentic workflows, long-context inference, multi-model pipelines. These aren't forgiving workloads. They need storage that lives in the data path, speaks the protocols they already use, and gets out of the way at wire speed. Filesystem gateways and x86 storage servers bolted onto the side of the fabric don't cut it anymore.
MinIO AIStor is built for this. AIStor is a data store for S3 objects, Iceberg tables, and AI memory, running directly on the NVIDIA BlueField-4 ARM cores without a dedicated storage server or translation layer. The same AIStor binary that powers exabyte-scale deployments across the Fortune 500 now runs inside the infrastructure itself, in the AI factories being built today, because that's where storage needs to be.
What This Means in Practice
When AIStor runs inside NVIDIA BlueField-4, S3 requests arrive over RDMA and bypass the kernel network stack entirely. Data moves from NVMe flash to GPU memory without touching a host CPU. The storage "server" is not a server. It's a function of the network fabric.
That's not a feature. That's a different way of thinking about where storage lives in AI infrastructure. For teams building agentic AI systems, inference pipelines, and multi-model workflows, a storage tier that speaks S3 natively and lives in the data path changes the architecture of what's possible.

No separate metadata tier. No background consistency processes. No x86 servers dedicated to storage. The complexity doesn't get hidden. It gets eliminated.
What's Coming
The ARM and SVE optimizations are production-ready today on NVIDIA BlueField-3 and other ARM platforms. NVIDIA GPUDirect RDMA for S3-compatible storage is available now as a tech preview for customers who want to get ahead of the curve. AIStor on NVIDIA BlueField-4 reaches general availability in the second half of 2026, aligned with NVIDIA BlueField-4's broader market rollout.
But the point isn't the timeline. The point is that when the most important infrastructure platform in AI comes to market, AIStor is ready for it, because we've been building toward this for years. The architecture is lean by design. The ARM work is done. The S3 semantics are native. There's no scramble to retrofit, no protocol gap to paper over, no emergency roadmap item.
This is what it looks like when preparation meets the right moment. If you're building AI infrastructure and you want storage that belongs in the data path, not beside it, this is where you start.
Resources
Press Release: MinIO AIStor Brings Object Data Stores for the NVIDIA STX Reference Architecture
Whitepaper: MinIO AIStor: The Unified Data Foundation for the NVIDIA AI Factory
Blog: Benchmarking Vector Index Creation with MinIO AIStor, Milvus, and NVIDIA cuVS


