Hammerspace drives ‘em “crazy” with global namespace, performance and GPU data placement 

Hammerspace unifies storage across vendors into one high-performance namespace, cutting costs and feeding AI/GPU workloads at speed—leaving traditional storage players frustrated

Hammerspace is driving other storage vendors crazy. That’s because it can make pools of storage from capacity anywhere in the infrastructure and save customers money while doing so, all at blistering speed.

It can also do something mainstream storage can’t do, which is to direct data to where it is needed in AI infrastructure to make sure GPUs get what they need as they need it.

That’s according to Molly Presley, SVP of global marketing for Hammerspace, who spoke at the Technology Live! event in London this week.

Hammerspace goes beyond what most storage vendors can do – despite their best efforts at becoming data management providers – because it is storage-agnostic and can pool storage from file and object systems into a global namespace.

University win

One big spin-off from that capability is to save customers money.

One example announced this week was the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University in the US, which had run out of HPC storage capacity for its researchers. Spare capacity existed elsewhere in the university’s infrastructure, but it could only be unlocked with the right tool. Hammerspace was that tool.

Presley said, “They had capacity in other storage systems, but they weren’t the ones designed for the researchers. The university said, we have storage so we’re not going to let you buy any more.”

“So they came and found Hammerspace, and now all that capacity is available.”

This makes use of Hammerspace’s so-called Tier 0, which pools storage into a high-performance layer and makes previously inaccessible capacity available. The move allowed ACCRE to reduce its storage costs by around 48%.

Bringing speed

What infuriates storage players, said Presley, is not just that Hammerspace can unify multi-vendor storage into a single pool, but also the speed it can run at. This week it announced version 5.2 of its Data Platform, which posted a 33.7% higher IO500 score than the previous release five months ago. IO500 is a benchmark designed to rank how storage systems handle HPC workloads, with tests based on bandwidth and metadata performance.

Getting data to GPU clusters

Also in version 5.2 is so-called “affinitization”, where, said Presley, “We now have the ability to put data into the GPU node where the application is calling on it.”

“So, Tier 0 started out as just simple tiering but now it can place the data, affinitize it to where the job is being scheduled to run.”

Does Hammerspace predict where data will be needed by GPUs? 

Yes and no, said Presley. “We do have predictive behaviour. So, the first step is some kind of job scheduler runs. And it says I’m going to grab these three nodes and I need this data. We have the intelligence, which we’ve always had, of predicting what other data might need to be moved.”

That’s down to a quality Hammerspace has that the storage vendors don’t – a direct connection to the Linux kernel in GPU servers.

Presley said, “Parallel file systems like Lustre or GPFS, while they can put a client in the GPU node, which NVIDIA doesn’t like, we do it because it’s built into Linux already. Our client is built into Ubuntu or any distribution of Linux.” 

“So we already have tunnels in every GPU node in the world right now that are just waiting to be activated. But even if their client were there, they don’t have a global namespace. So it would be a fast, single storage silo on GPU nodes 1, 2, 3, 4 etc. Now it’s a hundred silos of data that you have to deal with.”

“For us, it goes into the global namespace immediately. So you have visibility of one namespace in all of those.”
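Because Hammerspace’s client is the standard NFS client already shipped in the Linux kernel, attaching a GPU node to the global namespace looks like an ordinary NFS mount rather than a proprietary parallel file system client install. A minimal sketch of what that implies in practice – the server address, export path and mount options here are illustrative assumptions, not Hammerspace documentation:

```shell
# Hypothetical example: mounting a share with the stock in-kernel
# Linux NFS client -- no vendor client package to deploy first.
# "anvil.example.com" and "/global" are placeholder names.
sudo mount -t nfs -o vers=4.2 anvil.example.com:/global /mnt/hammerspace

# Equivalent /etc/fstab entry so the namespace attaches at boot:
# anvil.example.com:/global  /mnt/hammerspace  nfs  vers=4.2  0  0
```

The point of the quote above is that this tooling already ships with every Linux distribution – the “tunnels in every GPU node” waiting to be activated.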

Given Hammerspace can turn any storage into a single pool, does that mean storage should be commoditised? Presley thinks so.

“That is our vision, and the [AI Data Platform] release we’re putting out later today essentially says that.”

“That’s what happened with the compute industry. It used to be very proprietary, and then the first supercomputer that was built on Linux showed what we were doing on standards, and then the industry shifted that way. I think that’s what’s happening with data right now.”

The takeaway

Hammerspace is positioning itself as a disruptor: a vendor-agnostic data orchestration layer that pools file and object systems into a single global namespace and delivers Tier 0 performance across disparate infrastructure. Vanderbilt University’s ACCRE illustrates the impact – unlocking idle capacity and reducing storage costs by nearly half.

With version 5.2, Hammerspace boosts IO500 performance and introduces “affinitization”, placing data directly on GPU nodes as jobs are scheduled. Its use of the NFS client built into the Linux kernel gives it reach and predictive data movement capabilities that mainstream storage systems can’t match. The larger implication? If heterogeneous storage can be abstracted and accelerated this way, storage capacity itself starts to look like a commodity.

Read more about storage performance
Vector databases: The hidden engine of AI. Vector databases power modern AI by storing and searching high-dimensional data. Learn how vector embeddings work, their effect on data storage, plus a survey of the key providers.

Flash vs HDD: Pros and cons, and the workloads they suit best. Flash storage is reshaping enterprise IT with QLC and NVMe innovations, but HDD still dominates on cost. Explore flash vs HDD, cloud storage, and the future of all-flash datacentres.