Design Memo
CCC-DM-2026-145

Data Centre and Server Room Cooling Design

What You Need to Know

Data centres and server rooms are the highest heat density environments in commercial construction. A standard office generates 80 to 120 W/sqm of heat. A basic server room starts at 500 W/sqm. A high-density data centre can exceed 5,000 W/sqm. Every watt of electrical power consumed by IT equipment converts directly into heat that the cooling system must remove.

Cooling is not optional. IT equipment has strict operating temperature limits. ASHRAE TC 9.9 recommends a supply air temperature of 18 to 27 degrees Celsius and relative humidity of 20 to 80% for Class A1 environments. Most Australian operators target 22 to 24 degrees Celsius at the cold aisle. Exceeding these limits causes equipment throttling, accelerated component failure, and unplanned outages.

The cost of getting this wrong is measured in downtime. For a colocation facility, one hour of downtime can cost $100,000 or more. For an in-house server room supporting a business, the cost is lost productivity across every employee who depends on those systems. The cooling design must match the IT load, provide adequate redundancy, and operate efficiently at partial loads because most facilities run at 40 to 60% of their design capacity for years before reaching full load.

The Rules

  • ASHRAE TC 9.9 Class A1 recommended envelope: 18 to 27 degrees Celsius supply air, 20 to 80% RH. This is the design standard for most commercial data centres and server rooms. The allowable envelope is wider (15 to 32 degrees Celsius), but operating outside the recommended range voids most equipment warranties. (ASHRAE TC 9.9, 2021)
  • Ventilation of the building enclosure must comply with AS 1668.2:2024. Data centre spaces still require building ventilation for occupied areas (control rooms, corridors). The server halls themselves are sealed environments with recirculated air, not ventilated spaces. (AS 1668.2:2024)
  • NCC 2025 Section J energy efficiency requirements apply to the cooling plant. Minimum equipment efficiency ratings (AEER/ACOP) apply to chillers, packaged units, and split systems serving the data centre. Part J6 ductwork insulation and sealing requirements apply to any ducted distribution. (NCC 2025 Part J3, J6)
  • Uptime Institute Tier Standards define redundancy levels. Tier I has no redundancy. Tier II requires N+1 cooling redundancy. Tier III requires N+1 with concurrent maintainability. Tier IV requires 2N (fully redundant, fault tolerant). (Uptime Institute Tier Standard)
  • Fire suppression systems require HVAC coordination. Gaseous suppression (FM-200, Novec 1230) needs a sealed room. HVAC dampers must close automatically on suppression discharge. The cooling system must shut down to prevent agent dilution. (AS 1670.1, AS 4214)
  • Electrical capacity for cooling must be included in the maximum demand calculation. Cooling systems typically consume 30 to 50% as much power as the IT load itself. This must be accounted for in the electrical design and backup power systems; a rough sizing sketch follows this list. (AS/NZS 3000:2018)
  • Refrigerant compliance under AS/NZS 5149. All refrigeration systems must comply with charge limits and safety classification requirements. High-charge systems in occupied buildings require additional safety measures including leak detection and emergency ventilation. (AS/NZS 5149:2016)
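
As a rough check on that last allowance, the sketch below (Python, illustrative only) applies the 30 to 50% ratio to an assumed IT load to show the order of magnitude the cooling plant adds to maximum demand. The 200 kW figure is a placeholder; this is not a substitute for a proper AS/NZS 3000 maximum demand calculation.

```python
# Rough estimate of the cooling plant's contribution to maximum demand.
# Illustrative only; uses the 30-50% rule of thumb quoted above.

it_load_kw = 200.0                          # assumed design IT load (placeholder)
cooling_low, cooling_high = 0.30, 0.50      # cooling power as a fraction of IT load

demand_low = it_load_kw * cooling_low
demand_high = it_load_kw * cooling_high

print(f"IT load: {it_load_kw:.0f} kW")
print(f"Cooling electrical demand: {demand_low:.0f} to {demand_high:.0f} kW")
print(f"Combined IT + cooling: {it_load_kw + demand_low:.0f} "
      f"to {it_load_kw + demand_high:.0f} kW")
```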

What This Means in Practice

Start with the IT load. Every cooling design begins with the question: how much heat do the servers generate? A single standard server rack draws 2 to 5 kW. A high-performance computing rack draws 15 to 30 kW or more. Multiply the per-rack power by the number of racks, add UPS losses (typically 3 to 5% of UPS capacity), lighting, and any other heat sources in the room. That total is your cooling load.

For a typical 20 sqm server room with 6 racks at 4 kW each, the IT load is 24 kW. Add 1.2 kW for UPS losses and 0.5 kW for lighting. The total cooling load is approximately 26 kW. With N+1 redundancy, you need two units, each capable of handling the full 26 kW load. One runs while the other sits on standby, ready to take over if the primary unit fails or needs maintenance.
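
The same arithmetic can be scripted as a quick check. The sketch below reproduces the worked example above; the rack count, per-rack power, UPS loss allowance, and lighting load are the example values from this memo, not fixed design inputs.

```python
# Minimal cooling load estimate for the 20 sqm server room example above.

racks = 6
kw_per_rack = 4.0                  # average IT draw per rack
ups_loss_kw = 1.2                  # UPS losses from the worked example
lighting_kw = 0.5

it_load_kw = racks * kw_per_rack                           # 24 kW
total_cooling_kw = it_load_kw + ups_loss_kw + lighting_kw  # ~26 kW

print(f"IT load:            {it_load_kw:.1f} kW")
print(f"Total cooling load: {total_cooling_kw:.1f} kW")

# N+1 with a single duty unit (N = 1): each of the two units carries the full load.
print(f"N+1 sizing: 2 units, each rated for at least {total_cooling_kw:.0f} kW")
```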

Airflow management is just as important as cooling capacity. The goal is to deliver cold air to the front of the server racks and remove hot air from the rear without mixing the two streams. Hot aisle and cold aisle containment achieves this by physically separating the supply and return air paths. Without containment, hot exhaust air recirculates back to the server intakes. This forces the cooling system to work harder and creates hotspots that can overheat individual servers even when the room average temperature looks fine.

Containment improves cooling efficiency by 20 to 30% and is considered essential for any installation above 3 kW average per rack. For a server room running 6 racks at 4 kW each, containment can reduce the required cooling capacity by 5 to 8 kW because the system no longer wastes energy cooling mixed air.
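
As a sanity check on that figure, the short sketch below applies the quoted 20 to 30% improvement to the 26 kW example load; the percentages are rules of thumb, not measured data.

```python
# Indicative effect of hot/cold aisle containment on required cooling capacity.
# Uses the 20-30% improvement range quoted above.

cooling_load_kw = 26.0
for improvement in (0.20, 0.30):
    saving_kw = cooling_load_kw * improvement
    print(f"{improvement:.0%} improvement -> roughly {saving_kw:.1f} kW less cooling required")
```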

The UPS room is a separate cooling challenge. UPS units convert AC to DC and back again, losing 3 to 5% of the power as heat in the process. A 100 kVA UPS generates 3 to 5 kW of heat. This room needs its own cooling system, independent of the server room. UPS rooms also have battery banks that must be kept within 20 to 25 degrees Celsius for optimal battery life. Lead-acid battery life roughly halves for every 10 degrees Celsius of sustained operation above 25 degrees Celsius.
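
Both rules of thumb are easy to express directly. The sketch below estimates UPS heat rejection from the nameplate rating and the halving of lead-acid battery life for every 10 degrees above 25 degrees Celsius; the UPS rating, loss fraction, and nominal battery life are placeholder values.

```python
# UPS room heat load and lead-acid battery life derating (rule-of-thumb sketch).

ups_rating_kva = 100.0             # nameplate rating (placeholder)
loss_fraction = 0.04               # 3-5% lost as heat; 4% used here
ups_heat_kw = ups_rating_kva * loss_fraction
print(f"UPS heat rejection: roughly {ups_heat_kw:.1f} kW")

# Battery life roughly halves for every 10 deg C of sustained operation above 25 deg C.
nominal_life_years = 10.0          # assumed design life at 25 deg C
for room_temp_c in (25.0, 30.0, 35.0):
    life_years = nominal_life_years * 0.5 ** ((room_temp_c - 25.0) / 10.0)
    print(f"At {room_temp_c:.0f} deg C: expected life about {life_years:.1f} years")
```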

Sydney's climate offers a significant advantage for data centre cooling. With a design wet bulb temperature of approximately 24 degrees Celsius (Bureau of Meteorology data for Sydney Observatory Hill), free cooling using outside air or waterside economisers is viable for a large portion of the year. An economiser cycle can reduce cooling energy consumption by 30 to 40% annually in Sydney, depending on the supply air setpoint and the facility's internal heat load profile.

Key Design Decisions

1. Cooling Architecture: Perimeter, In-Row, or Rear Door

Perimeter CRAC (Computer Room Air Conditioning) and CRAH (Computer Room Air Handling) units sit around the room perimeter and distribute cooled air through a raised floor plenum. This is the traditional approach, suitable for rack densities of 2 to 5 kW per rack. In-row cooling places units between the racks, shortening the air path and handling higher densities of 5 to 15 kW per rack. Rear door heat exchangers mount directly on the back of each rack, capturing heat at the source. They handle 15 to 30+ kW per rack. For the highest densities above 30 kW per rack, direct liquid cooling or immersion cooling is the only practical option.

Trade-off: Perimeter units are the lowest cost and simplest to install, but they waste energy pushing air long distances. In-row and rear door systems are more efficient but cost 2 to 3 times more per kW of cooling capacity. Choose based on your current and planned rack density.
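
The density bands above can be condensed into a simple selection guide. The sketch below maps an average per-rack load to the architecture suggested in this memo; the thresholds are the indicative ranges quoted above, and a real selection would also weigh cost, growth plans, and the constraints of the existing building.

```python
# Indicative cooling architecture by average rack density (kW per rack).
# Thresholds follow the ranges quoted above; illustrative only.

def suggest_architecture(kw_per_rack: float) -> str:
    if kw_per_rack <= 5:
        return "Perimeter CRAC/CRAH with raised floor distribution"
    if kw_per_rack <= 15:
        return "In-row cooling units between the racks"
    if kw_per_rack <= 30:
        return "Rear door heat exchangers on each rack"
    return "Direct liquid or immersion cooling"

for density in (3, 8, 20, 40):
    print(f"{density} kW/rack -> {suggest_architecture(density)}")
```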

2. Redundancy Level: N+1 vs 2N

N+1 means one extra unit beyond what is needed. For a 200 kW IT load using 100 kW cooling units, N+1 requires three units: two running and one standby. 2N means a completely separate, parallel cooling system capable of handling the full load independently. For the same 200 kW load, 2N requires four 100 kW units in two independent groups. N+1 is the minimum for any server room. 2N is standard for mission-critical data centres where no single failure should cause an outage.

Trade-off: 2N costs roughly twice as much as N+1 for cooling equipment and associated electrical infrastructure. For a 200 kW facility, the difference is $200,000 to $400,000 in additional cooling plant. Justify it against the cost of downtime.
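
The unit counts follow from a short calculation. The sketch below works out N+1 and 2N unit quantities from the IT load and a nominal unit capacity; the 200 kW load and 100 kW modules are the example values used above.

```python
import math

# Cooling unit counts for N+1 and 2N redundancy (example values from above).

it_load_kw = 200.0
unit_capacity_kw = 100.0

n = math.ceil(it_load_kw / unit_capacity_kw)   # duty units needed to carry the load
n_plus_1_units = n + 1                         # one additional standby unit
two_n_units = 2 * n                            # a fully independent second system

print(f"N   = {n} duty units")
print(f"N+1 = {n_plus_1_units} units total (one standby)")
print(f"2N  = {two_n_units} units in two independent groups")
```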

3. Free Cooling Strategy

Free cooling uses outside conditions to reduce or eliminate compressor-based cooling. Airside economisers draw filtered outside air directly into the data centre when the ambient temperature is below the return air setpoint. Waterside economisers use a cooling tower or dry cooler to produce chilled water without running the chiller compressor. In Sydney, waterside economisers are typically more practical than airside because they avoid introducing humidity and particulate concerns.

Trade-off: A waterside economiser adds $50,000 to $150,000 to the cooling plant cost for a medium facility. It reduces annual cooling energy by 30 to 40%, paying back in 3 to 5 years at current electricity prices. Airside economisers are cheaper to install but require high-quality filtration and humidity control.
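
A first-pass payback check takes only a few lines. The sketch below uses placeholder values for the baseline cooling plant draw, capital cost, and tariff, together with the 30 to 40% saving quoted above; a proper assessment would model economiser hours against Sydney wet bulb data.

```python
# First-pass waterside economiser payback estimate (placeholder inputs).

baseline_cooling_kw = 35.0         # assumed average cooling plant electrical draw
hours_per_year = 8760
tariff_per_kwh = 0.25              # $/kWh
capital_cost = 100_000             # within the $50k-150k range quoted above

baseline_energy_kwh = baseline_cooling_kw * hours_per_year
for saving_fraction in (0.30, 0.40):
    annual_saving = baseline_energy_kwh * saving_fraction * tariff_per_kwh
    payback_years = capital_cost / annual_saving
    print(f"{saving_fraction:.0%} saving -> ${annual_saving:,.0f} per year, "
          f"payback about {payback_years:.1f} years")
```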

4. PUE Target and Efficiency

Power Usage Effectiveness (PUE) is the ratio of total facility power to IT equipment power. A PUE of 2.0 means the facility uses as much power for cooling, lighting, and losses as it does for the IT equipment itself. A PUE of 1.4 to 1.6 is achievable for small to medium facilities. Purpose-built data centres should target 1.2 to 1.3. The cooling system is the largest non-IT contributor to facility power, so every efficiency measure in the cooling design directly improves PUE.

Trade-off: Moving from PUE 1.6 to PUE 1.3 on a 500 kW IT load saves approximately 150 kW of continuous power draw. At $0.25 per kWh, that is $330,000 per year in electricity savings. The investment to achieve it (variable speed drives, economisers, containment, higher efficiency plant) typically costs $300,000 to $600,000, paying back in 1 to 2 years.
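
That saving can be reproduced directly from the PUE definition (total facility power = PUE x IT power, so non-IT overhead = IT power x (PUE - 1)). The sketch below uses the IT load, tariff, and capital cost range from the trade-off above.

```python
# PUE improvement savings, reproducing the figures in the trade-off above.

it_load_kw = 500.0
pue_before, pue_after = 1.6, 1.3
tariff_per_kwh = 0.25
hours_per_year = 8760

overhead_before_kw = it_load_kw * (pue_before - 1)   # 300 kW of non-IT power
overhead_after_kw = it_load_kw * (pue_after - 1)     # 150 kW of non-IT power
saving_kw = overhead_before_kw - overhead_after_kw   # 150 kW continuous

annual_saving = saving_kw * hours_per_year * tariff_per_kwh
print(f"Continuous saving: {saving_kw:.0f} kW")
print(f"Annual saving: ${annual_saving:,.0f}")

for capex in (300_000, 600_000):
    print(f"Payback on ${capex:,}: about {capex / annual_saving:.1f} years")
```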

5. Fire Suppression Coordination

Gaseous fire suppression systems (FM-200, Novec 1230) require a sealed room to maintain agent concentration for the required hold time. The HVAC system must integrate with the fire alarm: all supply and return air dampers close on suppression discharge, fans shut down, and the system must not restart until manually reset. This means the room heats up during a suppression event. For a 100 kW IT load, the room temperature will rise by approximately 1 degree Celsius per minute with cooling offline. The IT equipment can typically tolerate 10 to 15 minutes before thermal shutdown.

Trade-off: Faster cooling restart reduces the risk of thermal shutdown but requires careful sequencing to avoid diluting the suppression agent. Most designs include a manual reset with a minimum 10-minute hold time before HVAC restart.
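
The ride-through window depends on how much thermal mass is available to absorb heat while cooling is offline. The sketch below estimates the rise rate and the time to a shutdown threshold from an assumed effective room heat capacity (air plus equipment and structure); the 6,000 kJ/K figure is a placeholder chosen to match the roughly 1 degree per minute quoted above, not a calculated value.

```python
# Room temperature rise with cooling offline during a suppression event.
# The effective heat capacity is an assumed lumped value (air + equipment + structure).

it_load_kw = 100.0
effective_heat_capacity_kj_per_k = 6000.0   # placeholder; gives ~1 deg C/min at 100 kW

rise_rate_c_per_min = it_load_kw / effective_heat_capacity_kj_per_k * 60
print(f"Temperature rise: about {rise_rate_c_per_min:.1f} deg C per minute")

start_temp_c = 24.0        # cold aisle setpoint
shutdown_temp_c = 35.0     # assumed thermal shutdown threshold for the IT equipment
minutes_available = (shutdown_temp_c - start_temp_c) / rise_rate_c_per_min
print(f"Time to reach {shutdown_temp_c:.0f} deg C: about {minutes_available:.0f} minutes")
```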

Need this engineered for your project?

Get a scoped fee proposal within 48 hours. Chartered engineers. Registered in NSW, VIC, and QLD.

Get a Quote → 📞 0468 033 206

References

  1. ASHRAE TC 9.9, Thermal Guidelines for Data Processing Environments, 5th Edition (2021)
  2. AS 1668.2:2024, The use of ventilation and airconditioning in buildings - Mechanical ventilation in buildings
  3. National Construction Code 2025, Part J3 - Building sealing, Part J6 - Air-conditioning and ventilation systems
  4. Uptime Institute, Tier Standard: Topology (2018)
  5. AS/NZS 5149:2016, Refrigerating systems and heat pumps - Safety and environmental requirements
  6. AS 1670.1:2018, Fire detection, warning, control and intercom systems - System design, installation and commissioning
  7. AS/NZS 3000:2018, Electrical installations (Wiring Rules)
