io.net Partners with Walrus to Enable Decentralized Storage and Compute Capability for AI and ML Applications
On-demand cloud compute provider io.net builds Bring Your Own Model (BYOM) platform on Walrus.

io.net, one of the largest on-demand cloud compute providers, which deploys and manages decentralized GPU clusters from geo-distributed sources, is integrating with Walrus, the decentralized data storage protocol built on Sui. The integration offers a more secure, cost-effective, and composable alternative to traditional cloud computing providers, overcoming the challenges of centralized storage solutions to power decentralized AI and machine learning (ML) storage and compute.
The integration between Walrus and io.net delivers a Bring Your Own Model (BYOM) platform that allows users to deploy their own custom-built AI models rather than relying solely on pre-built or curated ones. With io.net providing the GPU clusters needed to train models and run inference, Walrus' decentralized storage protects proprietary models, delivering a tamper-proof solution for AI data storage and compute.
"Traditional centralized cloud models are not only expensive — they come with significant privacy risks and limited composability options that are challenging for developers who prioritize decentralization," said Rebecca Simmonds, Managing Executive at Walrus Foundation. "By leveraging our decentralized data storage solution, io.net will be able to provide the necessary compute power for advanced AI and ML development without any of the drawbacks of traditional models, making this a clear win for developers, users, and the entire Web3 industry."
In addition to the benefits of decentralized storage and compute functionality, Walrus will also enable Private Compute Execution, allowing models stored on Walrus to be pulled directly into io.net GPU clusters for training or fine-tuning. Alongside top-level encryption and access control, the integration will feature a pay-as-you-go billing structure, ensuring that developers only pay for the compute and storage they actually use.
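To make the usage-based billing model concrete, here is a minimal sketch of how a BYOM job's cost might be metered. All rates, field names, and the `ByomJob`/`job_cost` helpers are illustrative assumptions for this example, not io.net's actual pricing or API:

```python
from dataclasses import dataclass

# Hypothetical sketch of pay-as-you-go billing for a BYOM job.
# Rates and field names are assumptions, not io.net's real pricing.

@dataclass
class ByomJob:
    gpu_hours: float          # compute time consumed on the GPU cluster
    storage_gb_months: float  # model bytes held in storage, prorated monthly

def job_cost(job: ByomJob,
             gpu_rate: float = 1.50,      # assumed $/GPU-hour
             storage_rate: float = 0.02   # assumed $/GB-month
             ) -> float:
    """Charge only for compute and storage actually used."""
    return round(job.gpu_hours * gpu_rate
                 + job.storage_gb_months * storage_rate, 2)

# Example: a fine-tuning run using 8 GPU-hours plus 40 GB stored for a month.
print(job_cost(ByomJob(gpu_hours=8, storage_gb_months=40)))  # → 12.8
```

The key property of this structure is that an idle model incurs only storage cost, and a deleted model incurs nothing, in contrast to fixed-capacity cloud reservations.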
io.net manages a vast network of decentralized, geo-distributed GPUs, accessible from virtually any location across the globe. Its Internet of GPUs is specifically architected for low-latency, high-processing-demand use cases, providing the power needed for AI/ML development as well as other applications like cloud gaming. Recently, io.net rolled out its IO Intelligence platform, a free inference platform that provides access to up to 30 free open-source models, including Llama, DeepSeek, and other popular agentic frameworks.
"Partnering with Walrus unlocks a game-changing opportunity for AI/ML teams," said Tausif Ahmed, Chief Business Development Officer at io.net. "By integrating Walrus' secure, decentralized storage with io.net's distributed compute, we're empowering users to deploy models affordably and privately, paving the way for a new era of decentralized AI innovation."
The integration demonstrates that Walrus is becoming an essential part of the decentralized AI technology stack. A beta test of io.net's BYOM platform — including model upload, compute, and billing — is currently underway, with a full launch set for the coming weeks.