Available Now

L1 AI Pod

The standard for on-prem inference. A self-contained, acoustically damped cluster designed for office environments.


Hardware

Component Overview

Each major component of the L1 AI Pod is detailed below with its key specifications.

GPU — 4× NVIDIA H100 PCIe

320GB HBM2e total | 6,052 TFLOPS FP16 (Tensor Cores, with sparsity)

CPU — 2× AMD EPYC 7713

128 cores | 256 threads | 512MB L3 cache

Memory — 512GB DDR4 ECC

8× 64GB registered DIMMs | 3200 MT/s

Storage — 8TB NVMe SSD

7,000 MB/s sequential read | PCIe 4.0 x4

Network — Dual 100GbE

Mellanox ConnectX-6 | RDMA support

Power — 208V/30A circuit

2× 2000W redundant PSUs
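As a quick sanity check on the power figures, the rated circuit comfortably covers the pod's draw. This is a sketch, not a vendor sizing guide: it assumes an NEC-style 80% continuous-load derating on the branch circuit, and treats the second 2000W supply as redundant (failover) rather than load-sharing, so worst-case draw equals one PSU's full rating.

```python
# Sanity-check the L1 Pod power budget against its rated circuit.
# Assumptions (not from the spec sheet): 80% continuous-load derating
# on the 30A branch circuit, and worst-case draw equal to one PSU's
# full rating, since the second 2000W unit is redundant.

CIRCUIT_VOLTS = 208
CIRCUIT_AMPS = 30
DERATING = 0.80          # continuous-load limit on the branch circuit
PSU_WATTS = 2000         # each of the two redundant supplies

circuit_va = CIRCUIT_VOLTS * CIRCUIT_AMPS    # 6,240 VA raw capacity
usable_watts = circuit_va * DERATING         # 4,992 W continuous

print(f"Circuit capacity: {circuit_va} VA ({usable_watts:.0f} W continuous)")
print(f"Peak PSU load:    {PSU_WATTS} W")
print(f"Headroom:         {usable_watts - PSU_WATTS:.0f} W")
```

Under these assumptions the pod leaves roughly 3kW of continuous headroom on a dedicated 208V/30A circuit.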

Features

Why Choose the L1 Pod

Self-contained cluster designed for office environments

Acoustically damped for minimal noise disruption

Pre-configured software stack ready for deployment

Full electrical and HVAC integration included

Ongoing hardware support and maintenance

Air-gapped security capabilities

Sub-millisecond local inference latency

Scalable to multiple units

Ready to Deploy Your L1 Pod?

Schedule your 60-minute feasibility survey to see if your office is ready for on-prem AI infrastructure.