Description
AMD Developer Cloud gives developers and open-source contributors immediate access to high-performance AMD Instinct MI300X GPUs through a cloud-based interface. The environment is ready to use, with Docker containers and Jupyter notebooks preconfigured, so no local setup is required. Developers can run AI, machine learning, and high-performance computing workloads on configurations sized to their needs, from a single GPU with 192 GB of GPU memory and 20 vCPUs up to eight GPUs with 1536 GB of GPU memory and 160 vCPUs. The platform operates on a pay-as-you-go model linked to a payment method, and qualifying developers receive initial complimentary hours (for example, 25 hours) to facilitate hardware prototyping. Users retain complete ownership of their projects: code, data, and software can be uploaded freely without relinquishing any rights.
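The configurations and billing model described above can be sketched in a few lines. This is an illustration only: the per-GPU specs (192 GB, 20 vCPUs) and the 25 complimentary hours come from the description, while the hourly rates are hypothetical placeholders, not published AMD pricing.

```python
# Sketch of AMD Developer Cloud configurations and pay-as-you-go billing.
# Specs come from the product description; the rates are ASSUMED for illustration.

FREE_HOURS = 25  # complimentary hours for qualifying developers

CONFIGS = {
    # name: (gpus, gpu_memory_gb, vcpus, assumed_rate_usd_per_hour)
    "1x MI300X": (1, 192, 20, 2.00),     # hourly rate is hypothetical
    "8x MI300X": (8, 1536, 160, 16.00),  # hourly rate is hypothetical
}

def estimate_cost(config: str, hours: float, free_hours: float = FREE_HOURS) -> float:
    """Pay-as-you-go estimate: complimentary hours are deducted first."""
    rate = CONFIGS[config][3]
    billable = max(0.0, hours - free_hours)
    return billable * rate

# The 8-GPU configuration scales the single-GPU specs linearly: 8 * 192 GB = 1536 GB.
assert CONFIGS["8x MI300X"][1] == 8 * CONFIGS["1x MI300X"][1]

print(estimate_cost("1x MI300X", hours=40))  # 15 billable hours at the assumed rate
```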
Description
Create a robust, high-performance NVMe over Fabrics shared-storage solution with MayaScale, which aggregates directly attached NVMe resources into a unified storage pool. NVMe namespaces can be provisioned flexibly to clients that require high performance with minimal latency, and returned to the shared pool after use, eliminating the over-provisioning and stranded capacity typical of direct-attached setups. The network-agnostic architecture employs RDMA for on-premises deployments and standard TCP for cloud environments. Clients access true NVMe devices through the conventional NVMe driver stack, with no proprietary drivers required. You can configure and deploy NVMe over Fabrics SAN infrastructure at rack scale in your data center by aggregating diverse NVMe devices over RDMA-capable connections such as RoCE, iWARP, or InfiniBand. Even in public cloud settings, users can harness NVMe over Fabrics via the standard TCP/IP protocol, eliminating the requirement for specialized RDMA hardware or SR-IOV virtualization. This approach optimizes resource utilization while maintaining high performance across deployment scenarios.
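Because clients see standard NVMe devices, the usual Linux `nvme-cli` workflow applies: discover what the target exports, connect over TCP or RDMA, and later disconnect to return the namespace to the pool. The sketch below just assembles those standard command lines; the target address and NQN are placeholders, not values published by ZettaLane.

```python
# Sketch: the standard nvme-cli workflow a Linux client would use to reach an
# NVMe-oF pool. No proprietary driver is involved; the kernel's nvme-tcp /
# nvme-rdma modules and the stock nvme-cli tool suffice. The address and NQN
# below are PLACEHOLDERS for illustration.

def nvme_cmd(action: str, transport: str, addr: str, nqn: str = "", port: int = 4420) -> str:
    """Build an nvme-cli command line (4420 is the default NVMe-oF port)."""
    cmd = f"nvme {action} -t {transport} -a {addr} -s {port}"
    if nqn:
        cmd += f" -n {nqn}"
    return cmd

TARGET = "192.0.2.10"                  # placeholder target address
NQN = "nqn.2018-01.com.example:pool1"  # placeholder namespace NQN

# Over TCP (public cloud, no RDMA hardware) or RDMA (on-prem RoCE/iWARP/InfiniBand):
discover_tcp = nvme_cmd("discover", "tcp", TARGET)
connect_tcp = nvme_cmd("connect", "tcp", TARGET, NQN)
connect_rdma = nvme_cmd("connect", "rdma", TARGET, NQN)

print(connect_tcp)
# Once connected, the namespace appears as a regular /dev/nvmeXnY block device;
# "nvme disconnect -n <nqn>" detaches it and returns capacity to the shared pool.
```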
API Access
Has API
API Access
Has API
Integrations
AWS Marketplace
Amazon
Amazon RDS
Docker
Google Cloud Platform
Jupyter Notebook
Microsoft Azure
MongoDB
MySQL Workbench
Oracle Cloud Infrastructure FastConnect
Integrations
AWS Marketplace
Amazon
Amazon RDS
Docker
Google Cloud Platform
Jupyter Notebook
Microsoft Azure
MongoDB
MySQL Workbench
Oracle Cloud Infrastructure FastConnect
Pricing Details
No price information available.
Free Trial
Free Version
Pricing Details
No price information available.
Free Trial
Free Version
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details
Company Name
AMD
Founded
1969
Country
United States
Website
www.amd.com/en/developer/resources/cloud-access/amd-developer-cloud.html
Vendor Details
Company Name
ZettaLane Systems
Founded
2018
Website
www.zettalane.com/maya-nvmeof-linux-rdma-tcp.html