Delivering Vision with Precision in Data Center Cooling

At Triton Thermal, we believe that solving today’s most complex data center challenges requires more than hardware; it demands a tightly integrated ecosystem of technology, infrastructure, engineering, and experience.

As compute density surges and AI/HPC workloads redefine power and thermal requirements, Triton sits at the intersection of cooling innovation, infrastructure enablement, and ecosystem orchestration, delivering the coolest technology from chip to rooftop.

From Chip to Rooftop: Our Systematic Approach

We work across the full technology stack, delivering and integrating advanced thermal solutions designed for today’s hybrid compute environments:

Whitespace Systems

  • Direct Liquid Cooling (DLC), Rear Door Heat Exchangers (RDHx), Immersion Cooling, and CDUs designed for AI/HPC rack densities of up to 150 kW today, with headroom for the hundreds of kW expected in the near future (a rough flow-sizing sketch follows this list).
  • Our deployments maximize TFLOPS per GPU and PFLOPS per cluster by matching the most efficient cooling design to your IT infrastructure and workloads.
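
As a rough illustration of what those rack densities mean for the technology water loop, the sketch below estimates the coolant flow a single liquid-cooled rack needs from Q = ṁ · cp · ΔT. The 150 kW load and 10 K loop ΔT are illustrative assumptions, not figures from a specific Triton deployment.

```python
# Rough single-phase flow sizing for a liquid-cooled rack (illustrative only).
# Assumes a water-like coolant: density ~1.0 kg/L, cp ~4.186 kJ/(kg*K).

CP_WATER_KJ_PER_KG_K = 4.186
DENSITY_KG_PER_L = 1.0
LITERS_PER_SEC_PER_GPM = 0.0631  # 1 US GPM is roughly 0.0631 L/s

def required_flow_gpm(rack_load_kw: float, delta_t_k: float) -> float:
    """Coolant flow (US GPM) needed to absorb rack_load_kw at a given loop delta-T."""
    # Q [kW] = m_dot [kg/s] * cp [kJ/(kg*K)] * delta_T [K]
    mass_flow_kg_s = rack_load_kw / (CP_WATER_KJ_PER_KG_K * delta_t_k)
    volume_flow_l_s = mass_flow_kg_s / DENSITY_KG_PER_L
    return volume_flow_l_s / LITERS_PER_SEC_PER_GPM

# Example: a 150 kW DLC rack on a 10 K technology-loop delta-T needs ~57 GPM.
print(f"{required_flow_gpm(150, 10):.1f} GPM")
```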

Gray Space Infrastructure

  • CDUs/HDUs, CRAHs, air- and water-cooled chillers, dry coolers, and central pumping systems. We bring three decades of deep expertise and proven success in the space.
  • Scalable solutions purpose-built for optimal PUE and maximum heat-rejection efficiency from the start (a quick PUE example follows this list).
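
For context on the PUE claim, PUE is simply total facility power divided by IT power; the minimal example below shows the arithmetic with made-up numbers, since real values depend on climate, plant design, and load.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (lower is better)."""
    return total_facility_kw / it_load_kw

# Hypothetical site: 1,000 kW of IT load plus 250 kW of cooling, distribution,
# and other overhead gives a PUE of 1.25.
print(pue(total_facility_kw=1250, it_load_kw=1000))  # 1.25
```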

Modular & Scalable Deployment Models

  • Pre-packaged and standardized form factors across chiller and water plants, CDUs, VRF solutions, and hybrid cooling blocks.
  • Accelerated time to market for megawatt-scale density uplift in greenfield builds or brownfield retrofits.
  • Optimized supply chain to reduce lead times and enable rapid deployment and compute activation.

Our deep partnerships with leading OEMs, MEPs, and EPCs allow our team to tailor hybrid architectures, integrating air, liquid, and refrigerant-based systems to match workload profiles, application loads, site constraints, and sustainability goals.

Beyond Equipment: AI, Cyber, and Energy Strategy

Modern thermal design is no longer limited to mechanical systems. Triton’s ecosystem strategy extends into three transformational layers:

  • AI-Driven Cooling Optimization: Today’s data centers must use AI to run AI, with platforms that adapt in real time to changing workloads and environmental conditions. From predictive equipment faults to adaptive chilled water setpoints and everything in between, we help you deploy smarter, leaner thermal performance solutions (see the setpoint sketch after this list).
  • Cybersecurity for Smart Cooling Infrastructure: As all mechanical systems become networked and intelligent, they must also be secure. Triton considers cybersecurity in every deployment. From segmenting PLC/DDC networks to compatibility with enterprise NAC and SIEM/SOC/NOC tools, we help you assess and align with your cybersecurity strategy and policies.
  • Energy + Infrastructure Enablement: We help clients not just deploy cooling systems but find the land, power, and the engineering teams needed to build them. Whether it’s site selection, utility interconnect, or sourcing compute capacity, our network gives clients a head start on execution.
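
As one hedged illustration of what "adaptive chilled water setpoints" can mean in practice, the sketch below nudges a supply-water setpoint based on the warmest observed rack inlet temperature. The thresholds, limits, and function name are hypothetical; a production system would sit on the controls vendor's actual interfaces and, in an AI-driven deployment, a trained model rather than a fixed rule.

```python
# Minimal sketch of an adaptive chilled-water setpoint loop (illustrative, not a product API).
# Idea: raise the supply setpoint when every rack has thermal headroom (saving chiller energy),
# and lower it when any rack approaches its allowable inlet temperature.

def adjust_setpoint(current_setpoint_c: float,
                    rack_inlet_temps_c: list[float],
                    max_allowed_inlet_c: float = 32.0,
                    step_c: float = 0.5,
                    setpoint_limits_c: tuple[float, float] = (16.0, 24.0)) -> float:
    hottest = max(rack_inlet_temps_c)
    low, high = setpoint_limits_c
    if hottest > max_allowed_inlet_c - 1.0:
        # Too close to the limit: send colder water.
        return max(low, current_setpoint_c - step_c)
    if hottest < max_allowed_inlet_c - 4.0:
        # Plenty of headroom: relax the setpoint and save compressor energy.
        return min(high, current_setpoint_c + step_c)
    return current_setpoint_c

# One tick of the loop with hypothetical telemetry: plenty of headroom, so the setpoint drifts up.
print(adjust_setpoint(20.0, [27.5, 26.8, 27.9]))  # 20.5
```
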
[Image: single-phase immersion cooling]
We bridge:

  • Power holders (landowners, developers, utilities)
  • Compute builders (hyperscalers, colocation providers, and enterprises)
  • Compute consumers (enterprises running AI/HPC)
  • Design teams (MEPs, EPCs, OEMs)

Our Role: The Ecosystem Connector

Whether you’re a hyperscaler designing a 500MW site or an enterprise retrofitting for liquid cooling, Triton is built to align your team with the right technology, partners, and execution strategy. We don’t just sell equipment — we design solutions, engineer around constraints, and activate ecosystems to unlock performance at every level.

Solution Portfolio

Cooling Distribution Unit (CDU)
  • Brands: Motivair, Accelsius, GRC, Stulz
  • Cooling Capacity: 50 kW – 4 MW (varies by model and application)
  • Use Cases: Liquid-cooled white space (CPU/GPU)
  • Technical Application: Delivers TCS fluid to cold plates on GPUs; floor-standing, in-rack, or under-floor options

Rear Door Heat Exchangers (RDHx)
  • Brands: Motivair
  • Cooling Capacity: Up to 80 kW per rack
  • Use Cases: Retrofits and new deployments supporting all liquid cooling
  • Technical Application: High-density air-to-liquid heat exchange on standard racks without disrupting existing airflow designs

Chillers (Free Cooling and Scroll)
  • Brands: Alliance/Daikin, Motivair, Stulz, Petra
  • Cooling Capacity: High-lift magnetic-bearing centrifugal, screw, and scroll chillers, 30 to 2,500 tons
  • Use Cases: Enterprise, AI, mission-critical, building, and edge environments
  • Technical Application: Air-cooled with integral free-cooling coils, or water-cooled with condensers served by dry coolers doubling as free cooling

Two-Phase DLC (Cold Plate + CDUs)
  • Brands: Accelsius
  • Cooling Capacity: 80, 150, and 250 kW systems
  • Use Cases: GPU-dense AI/HPC deployments
  • Technical Application: Refrigerant-based, high efficiency

Immersion Cooling
  • Brands: GRC (ICEraQ Series tanks and CDUs)
  • Cooling Capacity: Micro (45 kW – 90 kW) small form factor; SX-Series (up to 370 kW) 42U
  • Use Cases: Ultra-dense AI/HPC data centers; edge compute
  • Technical Application: 42U Single, Duo, or Quad tank configurations with dielectric fluid, CDU, and plumbing

Integrated Liquid Rack (Datacenter in a Box)
  • Brands: DDC Solutions (S-Series Liquid Cooling Rack)
  • Cooling Capacity: Up to 100 kW per rack (up to 600 kW with DLC integration)
  • Use Cases: High-density enterprise deployments
  • Technical Application: Built-in CDU and distribution system

Traditional Air Cooling (RTU, AHU, CRAH, CRAC, Dry Coolers)
  • Brands: Daikin, Stulz, Daedex, Petra
  • Cooling Capacity: Customizable tonnage and CFM aligned with data center/building requirements
  • Use Cases: Gray space integration for liquid-cooled environments
  • Technical Application: Necessary for ancillary spaces (comm room, network, electrical, UPS/battery, etc.)

Components (Pumps, Fans, Coils, Condensers, Controls)
  • Brands: Various partners and OEMs
  • Cooling Capacity: Sized per system requirements
  • Use Cases: Complements air and liquid cooling systems
  • Technical Application: Custom integration based on deployment needs
Feature | Rear Door Heat Exchanger (RDHx) | DLC (Cold Plate) | Immersion Cooling | Localized Hot/Cold Aisle Cabinet
Cooling Density | Up to 80 kW per rack | 80–150 kW per rack | 45–370 kW per tank | 100 kW per rack (up to 400 kW+ with DLC)
White Space Utilization | Maintains standard rack footprint | Possible footprint increase depending on configuration | Dedicated tank replaces rack footprint | Industry-standard cabinet footprint
Retrofit Compatibility | Mounts easily to existing racks; ideal for phased upgrades | Server-level upgrades enable targeted performance boosts | Modular deployment enables stepwise modernization | Can be installed by row or cabinet as needed
Scalability | Grows with your rack count; well suited to incremental builds | Scales effectively across high-performance clusters | Highly scalable in pod- or rack-based deployments | Engineered for high-density expansion and flexible scaling
Facility Integration | Minimal adjustments to existing layout | Integration with existing power/cooling may require coordination | Tank design and layout drive deployment planning | Deployment tailored to site layout with adaptable integration paths
CapEx Planning | Lowest; cost-effective entry into liquid cooling | Balanced investment for long-term performance gains | Strategic investment for ultra-dense efficiency and long-term reliability | Competitive CapEx with built-in density optimization
Ongoing OpEx | Balanced energy efficiency with flexible operation | Low operating cost once deployed, with direct-to-chip efficiency | Among the most energy-efficient cooling technologies available | Predictable operating costs with controlled thermal zones
Ongoing Maintenance | Simple water-loop maintenance with field-proven reliability | Routine service on manifolds or plate exchangers as needed | Minimal; sealed systems reduce routine service intervals | Low; closed-loop CDU systems limit wear and maintenance
Deployment Path | Straightforward install on existing racks; quick time to value | Requires configuration and planning that is well supported by vendors | More involved due to form factor and layout; ideal for new builds or strategic upgrades | Moderate planning effort balanced by plug-and-play cabinet simplicity
Workload Applications | Strong choice for growing AI workloads | Optimized for high-performance servers and AI infrastructure | Exceptional cooling for the most demanding compute environments (AI/ML/HPC) | Built for GPU-intensive environments with optional DLC uplift
Liquid Handling at Rack | Integrated coil manages cooling at the back of the rack | Direct-to-chip cold plates and manifolds for precise heat extraction | Full-rack server immersion for maximum hardware liquid cooling | Built-in liquid delivery with optional DLC integration for an immediate rack kW boost

Use Case: GPU Chip-to-Roof with Two-Phase Cooling

  • At the chip level, two-phase cold plates extract heat directly from the GPU (and/or CPU); a rough mass-flow sketch follows this list.
  • The in-rack CDU (iCDU) collects and condenses the non-conductive refrigerant vapor.
  • Triton integrates the CDU into the gray space using either (1) a refrigerant-to-water heat exchanger or (2) a refrigerant-to-refrigerant heat exchanger.
  • From there, the water loop is sent to chillers or dry coolers for outdoor heat rejection.
  • Our HVAC controls complete the return loop, optimizing temperature and flow.
  • GPU (DLC/Cold Plate) → Server Rack (Manifold) → CDU → Gray Space HX → Chillers / RTUs → Cooling Tower or Dry Cooler → Return Loop
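
To make the two-phase step concrete, the sketch below estimates how much refrigerant the cold plates must boil off to carry a given GPU load, using the latent heat of vaporization (ṁ = Q / h_fg). The 150 kJ/kg latent heat is a placeholder for a generic low-pressure refrigerant, not a published value for any specific working fluid.

```python
# Rough two-phase mass-flow estimate for a cold-plate loop (illustrative only).
# Most heat is absorbed as latent heat while the refrigerant boils, so the required
# mass flow is roughly Q / h_fg (sensible heating is ignored here).

H_FG_KJ_PER_KG = 150.0  # assumed latent heat of vaporization for a generic refrigerant

def refrigerant_mass_flow_kg_s(heat_load_kw: float,
                               h_fg_kj_per_kg: float = H_FG_KJ_PER_KG) -> float:
    """Refrigerant that must evaporate per second (kg/s) to absorb heat_load_kw."""
    return heat_load_kw / h_fg_kj_per_kg

# Example: a 100 kW rack of GPUs on two-phase cold plates returns ~0.67 kg/s of vapor to the iCDU.
print(f"{refrigerant_mass_flow_kg_s(100):.2f} kg/s")
```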

Use Case: Immersion Stack – Tank-to-Roof Design

  • Chip and server heat is absorbed in an immersion tank filled with dielectric coolant.
  • Heat is transferred to a facility water loop through a heat exchanger.
  • Triton connects the CDU (in-rack, on-floor, or under-floor options) to free-cooling or chilled-water systems plus a cooling tower/dry cooler, depending on site needs and restrictions (see the temperature stack-up sketch after this list).
  • Full loop is monitored and managed by integrated controls, ensuring uptime and optimal performance.
  • GPU/Server in Immersion Tank → Water Loop HX → CDU → Chiller → Outdoor Unit → Return to Tank
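
The free-cooling versus chilled-water decision in that loop largely comes down to how approach temperatures stack up between outdoor air and the tank. The sketch below adds assumed approach values across each heat exchanger to estimate the coolant temperature the tank actually receives on a given day; every number here is a placeholder for illustration, not an equipment rating.

```python
# Illustrative temperature stack-up from outdoor air to the immersion tank under free cooling.
# Each stage adds its approach temperature (how closely the exchanger tracks its source).

def tank_supply_temp_c(outdoor_drybulb_c: float,
                       dry_cooler_approach_k: float = 6.0,
                       facility_hx_approach_k: float = 2.0,
                       cdu_hx_approach_k: float = 2.0) -> float:
    facility_water_c = outdoor_drybulb_c + dry_cooler_approach_k
    cdu_secondary_c = facility_water_c + facility_hx_approach_k
    return cdu_secondary_c + cdu_hx_approach_k

# On a 25 °C day, free cooling alone would deliver roughly 35 °C coolant to the tank,
# which suits many dielectric fluids; a hotter site may still need chiller trim.
print(tank_supply_temp_c(25.0))  # 35.0
```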

Use Case: Hot/Cold Aisle Rack + DLC All-in-One with Gray Space Support

  • Built-in CDU and liquid loop for up to a 100 kW air-cooled rack.
  • Up to 400 kW when integrating DLC in a liquid-ready server cabinet (42U to 60U); a heat-split sketch follows this list.
  • DLC running on GPUs at the server level (if applicable), using refrigerant or water.
  • Tie-in to facility loop and Triton’s external heat rejection systems.
  • Ideal for modular builds, brownfield retrofits, and server “roll-ins”, enabling speedy deployments.
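
A quick way to reason about that air-plus-DLC split is to cap the air side at what the cabinet's built-in loop can handle and let DLC carry the remainder, as in the sketch below. The 100 kW air-side limit comes from the list above; the cabinet loads are made-up examples.

```python
# Split a cabinet's heat load between the built-in air loop and DLC (illustrative).

AIR_SIDE_LIMIT_KW = 100.0  # per the air-cooled rating noted above

def heat_split(rack_load_kw: float, air_limit_kw: float = AIR_SIDE_LIMIT_KW) -> tuple[float, float]:
    """Return (kW removed by the air loop, kW the DLC cold plates must carry)."""
    air_kw = min(rack_load_kw, air_limit_kw)
    dlc_kw = max(0.0, rack_load_kw - air_limit_kw)
    return air_kw, dlc_kw

for load in (80.0, 250.0, 400.0):
    air_kw, dlc_kw = heat_split(load)
    print(f"{load:.0f} kW cabinet -> air: {air_kw:.0f} kW, DLC: {dlc_kw:.0f} kW")
# 80 kW cabinet -> air: 80 kW, DLC: 0 kW
# 250 kW cabinet -> air: 100 kW, DLC: 150 kW
# 400 kW cabinet -> air: 100 kW, DLC: 300 kW
```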

Cooling Distribution Units: How We’ve Deployed

Triton Use Case | CDU Sizing | Integration
Brownfield Upgrade (10 RDHx @ 40–50 kW) | 400–600 kW CDUs | Connects to air-to-liquid rear door units; uses dry cooler or cooling tower
HPC Cluster (10 racks @ 100 kW DLC) | 1–2 MW CDUs | Supports Accelsius or DDC S-Series liquid loops; uses water-to-refrigerant loop (or VRF)
Greenfield AI Datacenter (50 MW hall) | Multiple 2–4 MW CDUs | Centralized pumping with N+1; interfaces to chiller plant or hybrid air + liquid solution
Edge/Modular POD (3–5 racks @ 50–100 kW) | 150–600 kW CDU | Compact loop with rooftop condenser; quick-deployable with fast GPM startup and drain
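
As a rough sketch of how sizing like the table above falls out, the snippet below picks the smallest CDU tier that covers a rack group and adds an N+1 unit. The tier list loosely mirrors the capacity ranges discussed in this section, and the selection rules are simplifications for illustration rather than Triton's actual sizing method.

```python
import math

# Hypothetical CDU capacity tiers (kW), loosely mirroring the ranges in this section.
CDU_TIERS_KW = [150, 400, 600, 1000, 2000, 4000]

def size_cdus(rack_count: int, kw_per_rack: float, redundancy: int = 1) -> tuple[int, int]:
    """Return (cdu_capacity_kw, number_of_cdus) for a rack group with N+redundancy CDUs."""
    total_kw = rack_count * kw_per_rack
    # Prefer the smallest single tier that covers the whole load...
    for tier in CDU_TIERS_KW:
        if tier >= total_kw:
            return tier, 1 + redundancy
    # ...otherwise split the load across multiple units of the largest tier.
    largest = CDU_TIERS_KW[-1]
    duty_units = math.ceil(total_kw / largest)
    return largest, duty_units + redundancy

# 10 racks at 100 kW of DLC: one 1,000 kW duty CDU plus one standby (N+1).
print(size_cdus(10, 100))   # (1000, 2)
# A 50 MW hall served in 4 MW blocks: 13 duty CDUs plus one standby.
print(size_cdus(500, 100))  # (4000, 14)
```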

CDUs are the nucleus of Triton’s enterprise and data center cooling solutions, specifically around:

  • DLC (Direct Liquid Cooling)
  • Rear Door Heat Exchangers (RDHx)
  • Two-phase systems (Accelsius)
  • Immersion (GRC)
  • VRF/VRV and Chiller-based Gray Space systems

Chip to CDU Loop Use Case (White Space):

  • CDUs serve as critical hydraulic interfaces between server-side cooling (e.g. Immersion or DLC cold plates) and facility loops (e.g. dry cooler, adiabatic cooler, VRF loop).
  • Models in the 600 kW to 1 MW+ range are ideal for groups of liquid-cooled racks at 80–130 kW each, and they support increased capacity as server and GPU requirements surge.
  • In higher density deployments (e.g. 300kW racks), the 2MW–4MW CDUs offer centralized pumping and heat exchange.
  • Heat from cold plates or immersion systems enters CDU → heat exchanged to secondary loop → rejected by dry cooler or VRF condensers.

Use Case: Integration with Air-Side Systems (Hybrid RDHx + DLC)

  • In retrofits and reloads, Triton adds RDHx units to air-cooled racks.
  • CDUs can supply chilled liquid to the RDHx units, handling the required GPM at lower head pressures.
  • Supports transitional builds where rear door units remove most heat, and liquid is cooled via rooftop dry coolers or modular chillers.