
Welcome to the OpenMetal Private AI Labs Program

Build, Train, and Scale your AI workloads on Private Infrastructure—Now with up to $50K in Usage Credits


Enterprise GPU Servers • Bare Metal Access • Private Infrastructure

Accelerate Your AI Projects with Confidence

Everyone's exploring how to best leverage AI for their business and their customers. With the new OpenMetal Private AI Labs program, you can access private GPU servers and clusters tailored for your AI projects. By joining, you'll receive up to $50,000 in usage credits to test, build, and scale your AI workloads. Whether you're fine-tuning LLMs, running ML pipelines, or training deep learning models—OpenMetal gives you full access to bare metal GPUs on secure, private infrastructure.

No slicing. No noisy neighbors. Just raw power and privacy to move faster.

NVIDIA A100 & H100 GPUs
Multi-GPU Configurations
Custom RAM & NVMe Options
Private, Isolated Infrastructure

Why Join Private AI Labs?

$25K–$50K in Usage Credits

Offset your PoC and early scaling costs with generous monthly credits.

Private, Bare Metal Access

No time slicing. Full control of your GPU with maximum performance and isolation.

Security & Compliance-Ready

Keep your data safe with private cloud infrastructure designed for regulated environments.

Infrastructure Built for AI

NVIDIA A100, H100, and multi-GPU configurations. Custom RAM and NVMe to fit your needs.

Optional Cluster Configurations

Need 4-8 GPUs? We've got you covered. Configure your own private AI lab.

The Labs Program is Currently Available In:

OpenMetal US East Coast Data Center (Washington D.C. Metro)
All GPU hardware is custom-built and delivered within 8-10 weeks after order placement. Clusters may take up to 12 weeks. More locations coming soon!

Credit Usage Structure

  • 3-Year Commitment: $50,000 in usage credits, applied at up to 20% of your monthly bill.
  • 2-Year Commitment: $25,000 in usage credits, applied at up to 30% of your monthly bill.

Credits are applied monthly and cannot exceed 30% of the total monthly invoice.
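The monthly credit math above amounts to a simple cap. A minimal sketch, assuming the credit applied each month is simply the smaller of the percentage cap and your remaining credit balance (the exact mechanics are confirmed by your sales representative):

```python
def monthly_credit(invoice_total: float, credit_remaining: float, cap_pct: float) -> float:
    """Credit applied this month: capped at cap_pct of the invoice
    and at whatever credit balance remains."""
    cap = invoice_total * cap_pct
    return min(cap, credit_remaining)

# Example: a 2-year commitment ($25,000 in credits, 30% cap)
# against a $4,608.00 monthly invoice.
applied = monthly_credit(4608.00, 25000.00, 0.30)
print(f"${applied:.2f}")  # $1382.40
```

At that rate, a $25,000 balance would cover the 30% discount for well over a year on a single Large server.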

Who Should Apply?

  • AI/ML teams looking to escape the constraints of public cloud GPUs
  • Enterprises building confidential or compliance-sensitive models
  • Startups running PoCs or fine-tuning large language models
  • Researchers seeking consistent, high-performance GPU access

Eligibility Criteria

  • Must be a company or team actively developing or running AI/ML workloads.
  • Use case requires GPU acceleration (training, inferencing, fine-tuning, etc.).
  • Must sign a 2- or 3-year contract to receive credits.
  • Willingness to provide feedback and participate in customer success stories.
  • Current customers may apply, provided they are not already using OpenMetal GPU infrastructure.

How the Program Works

1. Apply

Fill out the application form with your project details and use case.

2. Review & Consultation

Our team reviews your application and schedules a consultation to discuss your requirements.

3. Hardware Provisioning

Custom-built GPU hardware is provisioned and delivered within 8-10 weeks.

4. Launch & Scale

Start building with your credits applied monthly. Scale as your project grows.

OpenMetal GPU Servers and Clusters

The Private AI Labs Program was created to give AI teams easy, early access to enterprise GPU servers. These include fully customizable deployments ranging from large-scale 8x GPU setups to CPU-based inference. When applying for the program, refer to the hardware list below and indicate which hardware interests you.

Hardware Catalog

X-Large

The most complete AI hardware we offer. Ideal for AI/ML training, high-throughput inference, and demanding compute workloads.

8x NVIDIA H100 SXM5
GPU Memory: 640 GB HBM3
CPU: 2x Intel Xeon Gold 6530
Memory: Up to 8TB DDR5 5600 MT/s
Storage: Up to 16 NVMe drives + 2x 960GB Boot
Contact Us

Large

Perfect for mid-sized GPU workloads with maximum flexibility. Supports up to 2x H100 GPUs, 2TB of memory, and 24 drives each.

2x NVIDIA H100 PCIe
GPU Memory: 160 GB HBM3
CPU: 2x Intel Xeon Gold 6530
Memory: 1024GB DDR5 4800 MHz
Storage: 1x 6.4TB NVMe + 2x 960GB Boot
$4,608.00/mo (eq. $6.31/hr)
Contact Us

1x NVIDIA H100 PCIe
GPU Memory: 80 GB HBM3
CPU: 2x Intel Xeon Gold 6530
Memory: 1024GB DDR5 4800 MHz
Storage: 1x 6.4TB NVMe + 2x 960GB Boot
$2,995.20/mo (eq. $4.10/hr)
Contact Us

2x NVIDIA A100 80GB
GPU Memory: 160 GB HBM2e
CPU: 2x Intel Xeon Gold 6530
Memory: 1024GB DDR5 4800 MHz
Storage: 1x 6.4TB NVMe + 2x 960GB Boot
$3,087.36/mo (eq. $4.23/hr)
Contact Us

1x NVIDIA A100 80GB
GPU Memory: 80 GB HBM2e
CPU: 2x Intel Xeon Gold 6530
Memory: 1024GB DDR5 4800 MHz
Storage: 1x 6.4TB NVMe + 2x 960GB Boot
$2,234.88/mo (eq. $3.06/hr)
Contact Us

Medium

Low-cost GPU workloads. Less flexible than our Large GPU deployments, but far more powerful than CPU inferencing.

1x NVIDIA A100 40GB
GPU Memory: 40 GB HBM2e
CPU: AMD EPYC 7272
Memory: 256GB DDR4 3200MHz
Storage: 1TB NVMe
$714.24/mo (eq. $0.98/hr)
Contact Us

Small – CPU Based

Running AI inference on Intel's 5th Generation Xeon processors with AMX (Advanced Matrix Extensions) is the most affordable option. Ideal for small models and non-production use cases.

Pricing shown requires a 3-year agreement. Lower pricing may be available with longer commitments. Final pricing will be confirmed by your sales representative and is subject to change.
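The "/mo" and "eq. $/hr" figures in the catalog are consistent with dividing the monthly price by roughly 730 hours (365 days x 24 hours / 12 months). That divisor is an inference from the listed numbers, not a figure OpenMetal publishes, so treat this as a sketch for estimating hourly cost:

```python
HOURS_PER_MONTH = 730  # assumed: 365 days * 24 h / 12 months ~= 730

def hourly_equivalent(monthly_price: float) -> float:
    """Approximate hourly rate implied by a monthly price."""
    return round(monthly_price / HOURS_PER_MONTH, 2)

# Catalog monthly prices from the hardware list above.
for price in (4608.00, 2995.20, 3087.36, 2234.88, 714.24):
    print(f"${price:,.2f}/mo ~ ${hourly_equivalent(price):.2f}/hr")
```

Each result matches the "eq. $/hr" figure shown on the corresponding card.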

Apply to be part of the Private AI Labs Program

Join the OpenMetal Private AI Labs program and bring your ideas to life with enterprise-grade GPUs.

Still Have Questions?

Schedule a Consultation

Get a deeper assessment of your use case scenario and discuss your unique requirements for your AI workloads before applying for the program.

Schedule Meeting
