Means of Payment
For your convenience, we accept multiple payment methods in USD, including PayPal, Credit Card, and wire transfer.
RFQ (Request for Quotations)
We recommend requesting a quotation to get the latest price and inventory information for this part.
Our sales team will reply to your request by email within 24 hours.
IMPORTANT NOTICE
1. You’ll receive an email with your order information in your inbox. (Please check your spam folder if you don’t see it.)
2. Since inventory and prices may fluctuate, our sales manager will reconfirm your order and let you know of any updates.
Category | Parameter | Specification / Value | Notes / Conditions |
---|---|---|---|
General | Model Number | H12690-300 | |
 | Product Name | [e.g., ExaScale AI Training Cluster Rack] | *"300" signifies a 300 PetaFLOPS+ performance tier* |
 | Product Type | [e.g., Multi-Rack Scale System] | May encompass multiple physical racks as one logical unit |
 | Solution SKU | Pre-validated for ExaScale-class AI training | The ultimate turnkey solution for frontier models |
System Scale & Architecture | Physical Footprint | Multiple 42U racks (e.g., 4-8 racks as a single system) | A "cluster in a box" at rack scale |
 | Total System Power | > 200 kW | Requires dedicated data center power and cooling |
 | Compute Density | ≥ 300 GPUs (e.g., 40 nodes x 8 GPUs, or 75 nodes x 4 GPUs) | The "300" may refer to the GPU count |
 | Fabric Architecture | Full, non-blocking 800 Gb/s (XDR) or 1.6 Tb/s (next-generation) InfiniBand fabric | Next-generation fabric to eliminate communication bottlenecks |
 | Liquid Cooling | Rack-level, direct-to-chip liquid cooling for all CPUs and GPUs | Mandatory at this density and power |
Aggregate Performance | AI Compute (FP8) | > 300 PetaFLOPS | The definitive performance target of this SKU |
 | AI Compute (FP16) | > 150 PetaFLOPS | |
 | Total GPU Memory | > 60 Terabytes (e.g., with 300 x 200 GB GPUs) | Enables training of trillion-parameter models |
 | Parallel File System | Multi-petabyte, all-NVMe parallel file system | Sustained throughput > 1 Terabyte/s |
Software & Stack | Cluster OS | Customized Linux distribution for large-scale HPC/AI | |
 | Orchestration & Scheduler | Enhanced Slurm or Kubernetes for 10,000+ core jobs | |
 | AI Framework Optimization | Deeply optimized PyTorch and TensorFlow distributions | See the illustrative sketch below the table |
 | Monitoring & Management | Centralized system-wide view of health, performance, and power/thermal data | AI-powered predictive maintenance |
Solution & Support | Professional Services | Mandatory on-site deployment by a specialized engineering team | |
 | Support | 24/7/365 mission-critical support with dedicated engineers | Guaranteed SLAs |
 | Solution Validation | Certified for training state-of-the-art LLMs (e.g., GPT-4 class and beyond) | |
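The software stack described above is centered on distributed PyTorch/TensorFlow training scheduled by Slurm or Kubernetes across hundreds of GPUs. As a rough illustration only (not taken from the vendor's documentation), the sketch below shows the general shape of a multi-node PyTorch data-parallel job of the kind such a system runs; the model, node/GPU counts, and the torchrun launcher are assumptions made for the example.

```python
# Illustrative sketch of a multi-node data-parallel training job.
# Nothing here is specific to the H12690-300; the model, GPU counts,
# and the torchrun launcher are assumptions made for the example.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun (or a Slurm wrapper around it) sets RANK, LOCAL_RANK and
    # WORLD_SIZE for every process; one process is launched per GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    # Placeholder model and optimizer; a real workload would be a large
    # transformer, typically sharded (FSDP / tensor parallelism) rather
    # than replicated with plain DDP.
    model = DDP(torch.nn.Linear(4096, 4096).to(device), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # dummy loop over synthetic data
        x = torch.randn(32, 4096, device=device)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced over the cluster fabric
        optimizer.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched, for example, with `torchrun --nnodes=40 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py` from a Slurm batch script, this would span the 40 x 8 = 320 GPU configuration used as an example in the table; the actual launch procedure depends on the delivered software stack.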
Shipping Method
Currently, our products are shipped via DHL, FedEx, SF Express, and UPS.
Delivery Time
Once the goods are shipped, the estimated delivery time depends on the shipping method you choose:
FedEx International: 5-7 business days.
Business Hours
Mon–Fri 8:00 AM–5:00 PM
Sat–Sun Closed