DataCrunch Description

Each H100 GPU contains 16,896 CUDA cores and 528 Tensor cores. This is the current flagship chip from NVIDIA®, unmatched in raw performance for AI workloads. We use the SXM5 NVLink module, which offers a memory bandwidth of 3.35 TB/s and P2P bandwidth of up to 900 GB/s. Host CPUs are fourth-generation AMD EPYC (Genoa) with up to 384 threads and a boost clock of 3.7 GHz.

For the A100, we only use the SXM4 "for NVLink" module, which has a memory bandwidth exceeding 2 TB/s and P2P bandwidth of up to 600 GB/s, paired with second-generation AMD EPYC (Rome) CPUs offering up to 192 threads and a boost clock of 3.3 GHz. The instance name 8A100.176V denotes 8x A100 GPUs, 176 CPU threads, and V for virtualized. The A100 processes tensor operations faster than the V100 despite having fewer Tensor cores, thanks to its newer architecture.

V100 instances pair the GPU with second-generation AMD EPYC (Rome) CPUs offering up to 96 threads and a boost clock of 3.35 GHz.
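As an illustration of the instance naming convention above, here is a minimal Python sketch (a hypothetical helper, not part of any official DataCrunch tooling) that splits a name such as 8A100.176V into GPU count, GPU model, CPU thread count, and a virtualization flag.

```python
import re
from dataclasses import dataclass

# Illustrative sketch only: decodes DataCrunch-style instance names such as
# "8A100.176V" -> 8x A100 GPUs, 176 CPU threads, virtualized.
# This is not an official DataCrunch utility.

@dataclass
class InstanceType:
    gpu_count: int
    gpu_model: str
    cpu_threads: int
    virtualized: bool

_NAME_RE = re.compile(r"^(\d+)([A-Z]+\d+)\.(\d+)(V?)$")

def parse_instance_name(name: str) -> InstanceType:
    match = _NAME_RE.match(name)
    if match is None:
        raise ValueError(f"Unrecognized instance name: {name!r}")
    gpus, model, threads, v_flag = match.groups()
    return InstanceType(
        gpu_count=int(gpus),
        gpu_model=model,
        cpu_threads=int(threads),
        virtualized=(v_flag == "V"),
    )

print(parse_instance_name("8A100.176V"))
# InstanceType(gpu_count=8, gpu_model='A100', cpu_threads=176, virtualized=True)
```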

Pricing

Pricing Starts At:
$3.01 per hour

Integrations

API:
Yes, DataCrunch offers an API (see the usage sketch below)
No integrations at this time
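The listing above notes that DataCrunch offers an API. The sketch below illustrates one way such an API could be called from Python, assuming an OAuth2 client-credentials token endpoint and an instance-type listing endpoint; the base URL, paths, field names, and response shapes are assumptions that should be checked against the official DataCrunch API documentation.

```python
import os
import requests

# Illustrative sketch only: the base URL, token endpoint, and
# /instance-types path are assumptions to be verified against the
# official DataCrunch API documentation.
BASE_URL = "https://api.datacrunch.io/v1"

def get_access_token(client_id: str, client_secret: str) -> str:
    """Exchange OAuth2 client credentials for a bearer token."""
    resp = requests.post(
        f"{BASE_URL}/oauth2/token",
        json={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_instance_types(token: str) -> list:
    """Fetch the available GPU instance types (e.g. 8A100.176V)."""
    resp = requests.get(
        f"{BASE_URL}/instance-types",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_access_token(
        os.environ["DATACRUNCH_CLIENT_ID"],
        os.environ["DATACRUNCH_CLIENT_SECRET"],
    )
    for itype in list_instance_types(token):
        print(itype)
```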

Reviews

No user reviews at this time.

Company Details

Company:
DataCrunch
Headquarters:
Finland
Website:
datacrunch.io

Media

DataCrunch Screenshot 1
Recommended Products
Red Hat Enterprise Linux on Microsoft Azure

Deploy Red Hat Enterprise Linux on Microsoft Azure for a secure, reliable, and scalable cloud environment, fully integrated with Microsoft services.

Red Hat Enterprise Linux (RHEL) on Microsoft Azure provides a secure, reliable, and flexible foundation for your cloud infrastructure, and is ideal for enterprises seeking to enhance their cloud environment with seamless integration, consistent performance, and comprehensive support.

Product Details

Platforms
SaaS
Type of Training
Documentation
Customer Support
Online

DataCrunch Features and Options
