Means of Payment
For your convenience, we accept multiple payment methods in USD, including PayPal, credit card, and wire transfer.
RFQ (Request for Quotation)
We recommend submitting a request for quotation to get the latest pricing and inventory for this part.
Our sales team will reply to your request by email within 24 hours.
IMPORTANT NOTICE
1. You’ll receive an email with your order information. (Please check your spam folder if you don’t hear from us.)
2. Because inventory and prices may fluctuate, our sales manager will reconfirm the order with you and let you know of any updates.
| Category | Parameter | Specification / Value | Notes / Conditions |
|---|---|---|---|
| General | Model Number | H6614-70 | |
| | Product Name | [e.g., Liquid-Cooled AI Training Appliance with 4x A100 80GB] | Please specify |
| | Product Type | [e.g., 4U AI Accelerated Server, Pre-validated Solution] | |
| | Form Factor | 4U Rackmount | |
| | Solution SKU | Pre-configured and validated for large-model training | Key: turnkey solution |
| Physical | Dimensions (W x H x D) | 482.6 mm (19″) x 175.2 mm (4U) x 1100 mm | |
| | Weight | 75 kg | |
| | Material | Cold-rolled steel | |
| | Drive Bays | 8 x hot-swappable 2.5″ NVMe U.2 bays | Pre-configured in RAID 0 for maximum I/O |
| | GPU Configuration | 4 x or 8 x NVIDIA A100/A800 80GB/40GB GPUs | Liquid-cooled, full-height, full-length |
| | Power Supply | 4 x 3600W (Platinum efficiency) | N+N redundant |
| | Liquid Cooling | Direct-to-chip on CPUs and GPUs | Closed loop within chassis, or ready for CDU |
| System Architecture | Processor | Dual Intel® Xeon® Platinum 8480+ or equivalent | High-core-count CPUs to feed the GPUs |
| | Memory | 1.5 TB DDR5 (48 x 32GB DIMMs) | Optimized for large dataset caching |
| | GPU Interconnect | NVIDIA NVLink™ Bridge across all GPUs | Key for high-speed GPU-to-GPU communication |
| | Network Controller | Dual-port 200G NDR InfiniBand or Ethernet | Mandatory for multi-node training scale-out |
| | BMC | Redfish-compliant | |
| I/O & Connectivity | Front Panel | 2 x USB 3.2, system status LCD | |
| | Rear Panel | 2 x QSFP-DD (200G InfiniBand), 2 x USB, management LAN | |
| | Cooling Ports | 2 x coolant inlet/outlet (quick-disconnect) | |
| Software & Management | Pre-installed Software | NVIDIA Base Command Manager, NGC container runtime, driver stack | Ready to run upon racking |
| | Supported Frameworks | Validated for PyTorch, TensorFlow, NVIDIA NeMo | |
| | Orchestration | Integrated with the NVIDIA DGX SuperPOD software stack | For cluster deployment |
| | Management | NVIDIA BMC extensions for GPU fleet management | |
| Performance | AI Training Performance | > [e.g., 5] PetaFLOPS of FP16 (Tensor Core) performance | Benchmarked for the specific configuration |
| | GPU Memory per Node | ≥ 320 GB (with 4 x 80GB GPUs) | Enables training of multi-billion-parameter models |
| | NVMe Storage Bandwidth | > 20 GB/s aggregate read | |
| Environmental | Operating Temperature (Coolant) | +15 °C to +45 °C (coolant inlet) | |
| | Power Consumption (Max) | 12,000 W | For 4x A100 liquid-cooled configuration |
| | Acoustic Noise | < 45 dBA | |
| Certifications & Compliance | Safety | IEC/EN/UL 62368-1 | |
| | EMC | FCC Part 15 Class A, CE | |
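Because the system ships with the NVIDIA driver stack and is validated for PyTorch, a quick acceptance check after racking can confirm that all GPUs and their GPU-to-GPU (NVLink/P2P) paths are visible. The snippet below is a minimal sketch, not vendor tooling; it assumes a CUDA-enabled PyTorch build is installed, and the default GPU count of 4 is only an example matching the 4-GPU configuration.

```python
import torch

def check_gpus(expected_count: int = 4) -> None:
    # Confirm the installed driver stack is visible to PyTorch.
    assert torch.cuda.is_available(), "CUDA runtime not visible to PyTorch"
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n} (expected {expected_count})")

    # Report each GPU's name and on-board memory.
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")

    # Peer access between GPU pairs indicates a working NVLink/PCIe P2P path.
    for i in range(n):
        for j in range(n):
            if i != j:
                ok = torch.cuda.can_device_access_peer(i, j)
                print(f"  P2P {i} -> {j}: {'yes' if ok else 'no'}")

if __name__ == "__main__":
    check_gpus()
```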
Shipping Method
Currently, our products are shipped via DHL, FedEx, SF Express, and UPS.
Delivery Time
Once the goods are shipped, the estimated delivery time depends on the shipping method you choose:
FedEx International: 5–7 business days.
Typical logistics times for common destination countries are as follows.
Business Hours
Mon–Fri: 8:00 AM–5:00 PM
Sat–Sun: Closed