Average Ratings: 0 Ratings
Average Ratings: 0 Ratings
Description
IREN's AI Cloud is a bare-metal GPU cloud built on NVIDIA's reference architecture with a high-speed, non-blocking InfiniBand fabric delivering up to 3.2 Tb/s per node, engineered for demanding AI training and inference workloads. The platform supports a range of NVIDIA GPU models, pairing them with ample RAM, vCPUs, and NVMe storage to meet diverse computational needs. Fully managed and vertically integrated by IREN, the service offers operational flexibility, robust reliability, and comprehensive 24/7 in-house support. Users gain access to performance-metrics monitoring, enabling them to optimize GPU spend while working in secure, isolated environments enforced through private networking and tenant separation. Customers deploy their own data, models, and frameworks such as TensorFlow, PyTorch, and JAX, alongside container technologies like Docker and Apptainer, with unrestricted root access. The platform is also tuned for the scaling requirements of complex workloads, including fine-tuning large language models, ensuring efficient resource utilization and strong performance for sophisticated AI projects.
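Because the platform lets users bring their own frameworks and containers with root access, a typical workflow is to package training code into a custom image. The sketch below is a generic, hypothetical example, assuming NVIDIA's publicly available PyTorch base image and a placeholder `train.py` entrypoint; it is not an IREN-specific configuration:

```dockerfile
# Hypothetical training image built on NVIDIA's public PyTorch base image.
FROM nvcr.io/nvidia/pytorch:24.05-py3

# Copy your own training script and dependencies into the image.
WORKDIR /workspace
COPY train.py requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Launch fine-tuning across all GPUs on the node (example: 8 GPUs).
ENTRYPOINT ["torchrun", "--nproc_per_node=8", "train.py"]
```

On a bare-metal GPU node, such an image would typically run with `docker run --gpus all ...`; Apptainer users can build an equivalent image with `apptainer build train.sif docker://<registry>/<image>`.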
Description
Build a robust, high-performance NVMe over Fabrics (NVMe-oF) shared storage solution with MayaScale, which aggregates directly attached NVMe devices into a unified storage pool. NVMe namespaces can be flexibly provisioned to clients that require high performance with minimal latency, and returned to the shared pool after use, eliminating the over-provisioning and stranded capacity typical of direct-attached setups. The network-agnostic architecture employs RDMA for on-premises deployments and standard TCP for cloud environments, ensuring versatility. Clients access true NVMe devices through the conventional NVMe driver stack, so no proprietary drivers are needed. You can configure and deploy NVMe-oF SAN infrastructure at rack scale in your data center by aggregating diverse NVMe devices over RDMA-capable transports such as RoCE, iWARP, or InfiniBand. Even in public cloud settings, users can harness the benefits of NVMe-oF via the standard TCP/IP protocol, which eliminates the requirement for specialized RDMA hardware or SR-IOV virtualization. This approach optimizes resource utilization while maintaining high performance across deployment scenarios.
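As an illustration of the "conventional NVMe driver stack" the description refers to, a Linux client could attach a remote namespace with the stock nvme-cli tooling. The transport address, service port, and NQN below are placeholder examples, not MayaScale-specific values:

```shell
# Discover NVMe-oF subsystems exported by a target over TCP
# (use -t rdma instead on RoCE, iWARP, or InfiniBand fabrics).
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to an example subsystem; the namespace then appears as a
# regular /dev/nvmeXnY block device via the in-kernel nvme-tcp driver.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.2018-08.example.com:pool1

# Detach and return the namespace to the shared pool when finished.
nvme disconnect -n nqn.2018-08.example.com:pool1
```

Because the in-kernel `nvme-tcp` and `nvme-rdma` host drivers handle both transports, the same workflow applies on-premises and in public cloud instances without special hardware.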
API Access
Has API
API Access
Has API
Integrations
AWS Marketplace
Amazon
Amazon RDS
DeepSeek
Dell Technologies Cloud
Docker
Google Cloud Platform
JAX
Llama
Microsoft Azure
Integrations
AWS Marketplace
Amazon
Amazon RDS
DeepSeek
Dell Technologies Cloud
Docker
Google Cloud Platform
JAX
Llama
Microsoft Azure
Pricing Details
No price information available.
Free Trial
Free Version
Pricing Details
No price information available.
Free Trial
Free Version
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details
Company Name
IREN
Country
Australia
Website
www.iren.com/solutions/gpu-cloud/ai-cloud
Vendor Details
Company Name
ZettaLane Systems
Founded
2018
Website
www.zettalane.com/maya-nvmeof-linux-rdma-tcp.html