北京工翔科技有限公司 (Beijing Gongxiang Technology Co., Ltd.)




Design Criteria

HVAC Load Considerations

HVAC Systems and Components



Other Systems and Considerations







DATACOM (data processing and telecommunications) facilities are predominantly occupied by computers, networking equipment, electronic equipment, and peripherals. The most defining HVAC characteristic of data and communications equipment centers is the potential for exceptionally high sensible heat loads (often orders of magnitude greater than a typical office building). In addition, the equipment installed in these facilities typically:


· Serves mission-critical applications (i.e., continuous operation)

· Has special environmental requirements (temperature, humidity, and cleanliness)

· Has the potential for disruptive overheating and equipment failure caused by loss of cooling




Design of any datacom facility should also address the fact that most datacom equipment will be replaced multiple times with more current technology during the life of the facility. As described in Datacom Equipment Power Trends and Cooling Applications (ASHRAE 2005a), typical datacom equipment product cycles are 1 to 5 years, whereas facilities and HVAC infrastructure life cycles are 10 to 25 years. Replacement equipment has historically had more demanding power and cooling requirements.


Understanding these critical parameters is essential to datacom facility design.


The preparation of this chapter is assigned to TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment.


Design Criteria


Types of datacom equipment (ASHRAE 2005a) that require air conditioning to maintain proper environmental conditions include


· Computer servers (2U and greater)

· Computer servers (1U, blade, and custom)

· Communication equipment (high-density)

· Storage servers

· Other rack- and cabinet-mounted equipment









A data center is a building or portion of a building whose primary function is to house a computer room and its support areas; data centers typically contain high-end servers and storage products with mission-critical functions. Personnel also occupy datacom facilities, but their occupancy is typically transient, and environmental conditions (e.g., temperature, noise) are usually dictated by equipment needs, making this more of a process cooling application than comfort cooling. However, human occupancy in smaller datacom facilities may influence ventilation air requirements.




Environmental requirements of datacom equipment vary depending on the type of equipment and/or manufacturer. However, a consortium of server manufacturers has agreed on a set of four standardized conditions (Classes 1 to 4), listed in Thermal Guidelines for Data Processing Environments (ASHRAE 2008). A fifth classification, the Network Equipment-Building Systems (NEBS) class, is typically used in telecommunications.


· Class 1: typically a datacom facility with tightly controlled environmental parameters (dew point, temperature, and relative humidity) and mission-critical operations; types of products typically designed for these environments are enterprise servers and storage products.

· Class 2: typically a datacom space or office or lab environment with some control of environmental parameters (dew point, temperature, and relative humidity); types of products typically designed for this environment are small servers, storage products, personal computers, and workstations.

· Class 3: typically an office, home, or transportable environment with little control of environmental parameters (temperature only); types of products typically designed for this environment are personal computers, workstations, laptops, and printers.

· Class 4: typically a point-of-sale or light industrial or factory environment with weather protection, sufficient winter heating, and ventilation; types of products typically designed for this environment are point-of-sale equipment, industrial controllers, or computers and handheld electronics such as PDAs.

· NEBS: per Telcordia (2001, 2006), and typically a telecommunications central office with some control of environmental parameters (dew point, temperature, and relative humidity); types of products typically designed for this environment are switches, transport equipment, and routers.


Because Class 3 and 4 environments are not designed primarily for datacom equipment, they are not covered further in this chapter; refer to ASHRAE's (2008) Thermal Guidelines for Data Processing Environments for further information.


Environmental Specifications


Table 1 lists recommended and allowable conditions for Class 1, Class 2, and NEBS environments, as defined by the footnoted sources. Figure 1A shows recommended temperature and humidity conditions for these classes on a psychrometric chart, and Figure 1B shows allowable temperature and humidity conditions. Note that dew-point temperature and relative humidity are also specified.


Fig. 1A Recommended Data Center Class 1, Class 2, and NEBS Operating Conditions

Table 1 Data Center Class 1, Class 2, and NEBS Design Criteria

a. Inlet conditions recommended in ASHRAE (2008).


b. Percentage values per ASHRAE Standard 52.1 dust spot efficiency test; MERV values per ASHRAE Standard 52.2.


c. Telcordia (2006).

d. Telcordia (2001).

e. Generally accepted telecommunications practice. Telecommunications central offices are not generally humidified, but personnel are often grounded to reduce electrostatic discharge (ESD).


f. See Figure 2 for temperature derating with altitude.


Air density also affects the ability of datacom equipment to be adequately cooled. ASHRAE's (2008) Thermal Guidelines for Data Processing Environments suggests that data center products be designed to operate up to 3050 m altitude, but recognizes that there is reduced mass flow and convective heat transfer associated with lower air density at higher elevations. To account for this effect, the guideline includes a derating chart for the maximum allowable temperature of 1 K per 300 m altitude above 900 m (Classes 1 to 4). Figure 2 shows the altitude derating recommended by ASHRAE (2004) for Classes 1 and 2, and for NEBS.

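The derating rule above (1 K reduction per 300 m of altitude above 900 m, up to 3050 m) can be sketched as a small helper. This is an illustrative implementation of the stated rule, not ASHRAE-published code; the 32°C example value is an assumed class maximum.

```python
# Sketch of the altitude derating rule described above: the maximum
# allowable inlet temperature is reduced by 1 K per 300 m above 900 m.

def derated_max_temp(max_temp_c: float, altitude_m: float) -> float:
    """Return the derated maximum allowable inlet temperature (deg C).

    max_temp_c -- sea-level maximum allowable temperature for the class
    altitude_m -- site altitude in metres (rule intended up to ~3050 m)
    """
    if altitude_m <= 900.0:
        return max_temp_c  # no derating at or below 900 m
    return max_temp_c - (altitude_m - 900.0) / 300.0


# Example: an assumed 32 deg C allowable maximum at a 1500 m site
print(derated_max_temp(32.0, 1500.0))  # 30.0
```

For a site-specific design, the sea-level maximum should be taken from Table 1 for the governing equipment class.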

The stated environmental conditions are as measured at the inlet to the data and communications equipment, not average space or return air conditions.




The allowable temperature range is a statement of functionality, whereas the recommended range is a statement of reliability. Thus, equipment exposed to prolonged high temperatures (and/or to steep temperature gradients) can experience increased failure rates, reduced service life, hardware and/or software failures, and/or thermal shutdown. Exceeding the recommended limits for short periods of time should not be a problem, but running near the allowable limits for months could result in increased reliability issues. Facility designers and operators should strive for continuous operation in the recommended range. The ASHRAE (2008) and Telcordia (2001) recommended range is 18 to 27°C.

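The recommended-versus-allowable distinction above can be expressed as a simple classifier. The 18 to 27°C recommended band is from ASHRAE (2008); the 15 to 32°C allowable band used here is an assumed Class 1 dry-bulb range and should be checked against the governing equipment class and Table 1.

```python
# Illustrative inlet-temperature classifier for the recommended vs.
# allowable distinction discussed above. Band limits are assumptions
# (Class 1 dry-bulb); verify against the applicable equipment class.

def inlet_status(temp_c: float,
                 recommended=(18.0, 27.0),
                 allowable=(15.0, 32.0)) -> str:
    if recommended[0] <= temp_c <= recommended[1]:
        return "recommended"
    if allowable[0] <= temp_c <= allowable[1]:
        # functional, but sustained operation here may reduce reliability
        return "allowable"
    return "out of range"  # risk of thermal shutdown or failure

print(inlet_status(22.0))  # recommended
print(inlet_status(30.0))  # allowable
```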

Fig. 1B Allowable Data Center Class 1, Class 2, and NEBS Operating Conditions


Not only is air temperature into the electronics critical for reliable operation of components in the electronic box, but the air discharged from the electronics and flowing over components (cabling, connectors, etc.) at the exit must also be addressed. The recommended ranges apply to inlets of all equipment in the data center (except where IT manufacturers specify other ranges).


Attention is needed to make sure the appropriate inlet conditions are achieved for the top portion of IT equipment racks.


The inlet air temperature in many data centers tends to be warmer near the top of racks, particularly if warm rack exhaust air does not have a direct return path to the CRACs. This warmer air also affects the relative humidity, resulting in lower values at the top of the rack. The air temperature generally follows a horizontal line on the psychrometric chart, where absolute humidity remains constant but relative humidity decreases.
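The psychrometric point above (sensible warming at constant absolute humidity lowers relative humidity) can be quantified with a short sketch. The Magnus approximation for saturation vapour pressure used here is an assumption of this sketch, adequate near room conditions; it is not the formulation used in the ASHRAE psychrometric chart itself.

```python
import math

# Sketch: RH change when air warms at constant vapour pressure,
# using the Magnus approximation (an assumption of this example).

def saturation_vp_kpa(t_c: float) -> float:
    """Magnus approximation of saturation vapour pressure, kPa."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rh_after_warming(t1_c: float, rh1_pct: float, t2_c: float) -> float:
    """RH (%) after sensible heating from t1 to t2 at constant vapour pressure."""
    vp = saturation_vp_kpa(t1_c) * rh1_pct / 100.0  # actual vapour pressure
    return 100.0 * vp / saturation_vp_kpa(t2_c)

# Air at 20 deg C / 50% RH warmed to 30 deg C near the rack top
# drops to roughly the upper-20s percent RH:
print(round(rh_after_warming(20.0, 50.0, 30.0), 1))
```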

Fig. 2 Class 1, Class 2, and NEBS Allowable Temperature Range Versus Altitude

Temperature Rate of Change

Some datacom manufacturers have established criteria for allowable rates of environmental change to prevent shock to the data and communications equipment. These criteria need to be reviewed for all installed datacom equipment. A maximum inlet temperature change of 5 K/h is recommended by ASHRAE (2008) for Classes 1 and 2. Humidity rate of change is typically most important for tape and storage products. Typical requirements for tape are a rate of change of less than 2 K/h and a relative humidity change of less than 5%/h (ASHRAE 2004).

In telecommunications central offices, the NEBS requirement per Telcordia (2006) for testing new equipment is a rate of change (cooling) of 30 K/h. However, in the event of an air-conditioning failure, the rate of temperature change can easily be significantly higher. Consequently, Telcordia (2001, 2006) prescribes testing with a warming gradient of 96 K/h for 15 min. Manufacturers' requirements should be reviewed and fulfilled to ensure that the system functions properly during normal operation and during start-up and shutdown.
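The rate-of-change limits above are straightforward to check against logged inlet temperatures. This is a hedged sketch: the sample format and the 5 K/h threshold applied here (the ASHRAE Class 1/2 recommendation) are illustrative choices, not a prescribed monitoring method.

```python
# Illustrative check of a logged temperature series against the
# 5 K/h rate-of-change recommendation mentioned above.

def max_rate_k_per_h(samples):
    """samples: list of (time_h, temp_c) tuples, time ascending.
    Returns the largest absolute rate of change between adjacent samples."""
    rates = []
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        rates.append(abs(x1 - x0) / (t1 - t0))
    return max(rates)

log = [(0.0, 22.0), (0.5, 23.0), (1.0, 26.5)]  # readings every 30 min
worst = max_rate_k_per_h(log)
print(worst, worst > 5.0)  # 7.0 True -> exceeds the 5 K/h recommendation
```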

Procedures must be in place for response to an event that shuts downcritical cooling systems while critical loads continue to operate, causing thespace temperature to begin rising immediately. Procedures should also be inplace governing how quickly elevated space temperatures can be returned tonormal to avoid thermal shock damage.

Datacom equipment usually tolerates a somewhat wider range of environmental conditions when not in use [see Table 2.1 in ASHRAE (2008)]. However, it may be desirable to provide uninterruptible cooling in the room to maintain operating limits and minimize thermal shock to the equipment.



High relative humidity may cause conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear, and corrosion. In extreme cases, condensation can occur on cold surfaces of liquid-cooled equipment. Low relative humidity may result in electrostatic discharge (ESD), which can destroy equipment or adversely affect operation. Tape products and media may have excessive errors when exposed to low relative humidity. In general, facilities should be designed and operated to maintain the recommended humidity range in Table 1, but excursions into the allowable range (more typically the equipment specification) should not significantly shorten equipment operating life.

Filtration and Contamination

Before being introduced into the data and communications equipment room, outside air should be filtered and preconditioned to remove particulates and corrosive gases. Table 1 contains both recommended and minimum filtration guidelines for recirculated air in a data center. Particulates can adversely affect data and communications equipment operation, so high-quality filtration and proper filter maintenance are essential. Corrosive gases can quickly destroy the thin metal films and conductors used in printed circuit boards, and corrosion can cause high resistance at terminal connection points. In addition, the accumulation of particulates on surfaces needed for heat removal (e.g., heat sink fins) can degrade heat removal device performance. Further information on filtration and contamination in data centers can be found in Chapter 8 of Design Considerations for Datacom Equipment Centers (ASHRAE 2005b) and Particulate and Gaseous Contamination in Datacom Environments (ASHRAE 2009).


Data and communications equipment room air conditioning must provide adequate outside air to achieve the following criteria:

· Maintain the room under positive pressure relative to surrounding spaces.

· Dilute indoor generated pollutants such as VOCs.

· Satisfy ASHRAE Standard 62.1 requirements.

· Meet local codes for ventilation for datacom facilities in all spaces, including mechanical, uninterruptible power supply (UPS), and battery rooms.

The need for positive pressure to keep contaminants out of the room is usually the controlling design criterion in data and communications equipment rooms. Pressurization calculations can be performed using the procedures outlined in Chapter 16 of the 2009 ASHRAE Handbook-Fundamentals. Chapter 53 of this volume has calculation formulas for achieving pressurization as well as loss of pressure through cracks in walls and at windows.
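A first-order crack-leakage estimate underlying such pressurization calculations can be sketched with the common orifice-flow relation Q = Cd · A · sqrt(2Δp/ρ). The discharge coefficient, air density, and example crack area below are assumed illustrative values, not handbook-prescribed figures; the referenced ASHRAE procedures should govern actual design.

```python
import math

# Illustrative crack-leakage estimate using the orifice-flow relation
# Q = Cd * A * sqrt(2 * dp / rho). Coefficients are assumptions.

def crack_leakage_m3s(area_m2: float, dp_pa: float,
                      cd: float = 0.65, rho: float = 1.2) -> float:
    """Volumetric leakage (m^3/s) through an effective crack area area_m2
    at pressure difference dp_pa (Pa)."""
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / rho)

# Example: 0.01 m^2 of assumed door-crack area at 12.5 Pa room pressurization.
# This is the outdoor airflow needed just to hold pressure across this crack.
q = crack_leakage_m3s(0.01, 12.5)
print(round(q, 4))  # about 0.03 m^3/s
```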

Although most computer rooms have few occupants, calculations should always be performed to ensure that adequate ventilation for human occupancy is provided in accordance with ASHRAE Standard 62.1 and local codes. Internally generated contaminants may make the indoor air quality method the more appropriate procedure; however, maintaining positive pressure usually requires a higher outside airflow.


In addition to meeting state, national, and local codes, there are several other parameters that should be considered in designing the envelope of datacom facilities, including pressurization, isolation, vapor retarders, sealing, and condensation.

· Pressurization. Datacom facilities are typically pressurized to prevent infiltration of air and pollutants through the building envelope. An air lock or mantrap is recommended for a datacom equipment room door that opens directly to the outside. Excess pressurization with outside air should be avoided, because it makes swinging doors harder to use and wastes energy through increased fan energy and coil loads.

· Space Isolation. Datacom equipment centers are usually isolated for both security and environmental control.

· Vapor Retarders. To maintain proper relative humidity in datacom facilities in otherwise unhumidified spaces, vapor retarders should be installed around the entire envelope. The retarder should be sufficient to restrain moisture migration during the maximum projected vapor pressure difference between the datacom equipment room and the surrounding areas.

· Sealing. Cable and pipe entrances should be sealed and caulked with a vapor-retarding material. Doorjambs should fit tightly.

· Condensation on exterior glazing. For exterior walls in colder climates, windows should be double- or triple-glazed and door seals specified to prevent condensation and infiltration. If possible, there should be no windows. If an existing building is used, windows should be covered.


Human comfort is not specifically addressed in Thermal Guidelines for Data Processing Environments (ASHRAE 2008) because the facilities typically have minimal and transient human occupancy. Although telecommunications central offices often have permanent staff working on the equipment, human comfort is not the main objective. Following the recommended Class 1 conditions (see Table 1) in a hot-aisle/cold-aisle configuration may result in comfort conditions that are cold in the cold aisle, and warm or even hot in the hot aisle. Personnel working in these spaces need to consider the temperature conditions that exist, and dress accordingly. If the hot aisle is excessively hot, portable spot-cooling should be provided. The National Institute for Occupational Safety and Health (NIOSH) provides detailed guidance on occupational exposure to hot environments (NIOSH 1986). Another concern is contact burns if equipment is too hot. Human tissue reaches the pain threshold at 44°C, and various levels of injury occur at levels above that (ASTM 2003).

Take care that equipment surface temperatures do not represent a hazard.


As described in the introduction, technology is continually changing, and datacom equipment in a given space is frequently changed and/or rearranged during the life of a datacom facility. As a result, the HVAC system serving the facility must be sufficiently flexible to allow plug-and-play rearrangement of components and expansion without excessive disruption of the production environment. In critical applications, it should be possible to modify the system without shutdown. If possible, the cooling system should be modular and designed to efficiently handle a wide range of heat loads.


Noise emissions in datacom facilities are the sum of datacom equipment noise and noise from the facility's HVAC equipment. The noise level of air-cooled datacom equipment has generally increased along with the power density and heat loads. Densely populated datacom facilities may run the risk of exceeding U.S. Occupational Safety and Health Administration (OSHA) noise limits (and thus potentially causing hearing damage without personnel protection); refer to the appropriate OSHA (1996) regulations and guidelines. European occupational noise limits are somewhat more stringent than OSHA's and are mandated in EC Directive 2003/10/EC (European Council 2003). Facility noise level calculations can be made following the methodology outlined in Chapter 48, and in Chapter 8 of the 2009 ASHRAE Handbook-Fundamentals. An acoustic consultant may be needed to properly predict sound levels from multiple sources and paths, as is typically the case in a datacom facility.

Manufacturers of electronic equipment typically take steps to minimize acoustic noise emissions from datacom equipment. Speed control of air-moving devices, rack- or frame-level acoustic treatments, and reduction of line-of-sight noise emissions are common techniques to reduce datacom equipment noise.

Vibration Isolation and Seismic Restraint

HVAC equipment in datacom facilities should be independently supported and isolated to prevent vibration transmission to the datacom equipment. If required, vibration isolators should be seismically rated for the specific environment into which they are installed, to comply with the appropriate codes. Consult datacom equipment manufacturers for equipment sound tolerance and specific requirements for vibration isolation. Many datacom equipment manufacturers test their equipment to the vibration and seismic requirements of Telcordia (2001). Additional guidance can be found in Chapter 55 and in the International Building Code (ICC 2009).



HVAC loads in datacom facilities must be calculated in the same manner as for any other facility. Typical features of these facilities are a high internal sensible heat load from the datacom equipment itself and a correspondingly high sensible heat ratio. However, other loads exist, and it is important that a composite load comprising all sources is calculated early in the design phase, rather than relying on a generic overall "watts per square meter" estimate that neglects other potentially important loads.

Also, if the initial deployment or first-day datacom equipment load is low because of low equipment occupancy, the effect of the other loads (envelope, lighting, etc.) becomes proportionately more important in terms of part-load operation.
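The composite-load bookkeeping advocated above can be sketched as an itemized sum rather than a single W/m² figure. The component names and values below are placeholders for illustration, not design data.

```python
# Minimal sketch: sum all load sources explicitly instead of relying on
# a generic "watts per square meter" estimate. Values are placeholders.

def composite_load_kw(loads_kw: dict) -> float:
    """Sum an itemized dictionary of sensible loads (kW)."""
    return sum(loads_kw.values())

day_one = {
    "datacom equipment": 120.0,   # from manufacturer thermal reports
    "lighting": 6.0,
    "envelope": 4.0,
    "people": 1.0,
    "PDU/transformer losses": 5.0,
}
total = composite_load_kw(day_one)
print(total)  # 136.0 kW
# At low day-one equipment occupancy, the non-equipment share matters:
print(round(100.0 * (total - day_one["datacom equipment"]) / total, 1), "%")
```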


The major heat source in datacom facilities is the datacom equipment itself; this heat can be highly concentrated, nonuniformly distributed, and variable. Equipment that generates large quantities of heat is normally configured with internal fans and airflow passages to transport cooling air, usually drawn from the space, through the equipment.

Information on datacom equipment heat release should be obtained from the manufacturer. Guidance on industry heat load trends can be obtained from ASHRAE's (2005a) Datacom Equipment Power Trends and Cooling Applications. It is important to know the approximate allocation of different types of datacom equipment when designing datacom facility environmental control, because heat loads of different types of equipment vary dramatically. Figure 3 shows projected trends of six equipment classifications through the year 2014; a sample heat load calculation based on these classifications is given in ASHRAE's (2005b) Design Considerations for Datacom Equipment Centers.

At the equipment level, ASHRAE (2008) includes a sample equipment thermal report that can provide heat release information in a format specifically suited for thermal design purposes. Nameplate information for data and communications equipment should not be used for thermal design, because it will yield unrealistically high design values and an oversized cooling system infrastructure.

Most current datacom equipment has variable-airflow cooling fans that depend on inlet temperature and/or load. Under typical operating conditions, the flow requirements of these fans may be low, but increase under extreme conditions such as high system inlet temperature. Consideration of this variable flow may be important to HVAC system design.

Load Considerations and Challenges

Similar to commercial loads, datacom loads often operate well below the calculated load. This can be more problematic for datacom facilities, though, because the load densities are so much greater than commercial installations. Further, the source of the load (datacom equipment) is often replaced multiple times during the life of the cooling system, requiring consideration of oversized infrastructure or phased construction to accommodate future changes.

The part- and low-load conditions must be well understood and equipment selected accordingly.

It is particularly important to understand initial and future loads in detail. Otherwise, the stated initial and future loads could have compounded safety factors or, in the worst cases, guesses. Datacom Equipment Power Trends and Cooling Applications (ASHRAE 2005a) identifies the following topics to consider when predicting future load:

· Existing applications' floor space

Fig. 3 Projected Power Trends of Datacom Equipment (ASHRAE 2005a)

· Performance growth of technology based on footprint

· Processing capability compared to storage capability

· Change in applications over time

· Asset turnover

Once initial and future loads are understood, as well as the part- and low-load conditions, equipment can be selected. This includes gaining consensus from the project stakeholders regarding the acceptable amount of disruption that can occur in an operating facility for upgrades such as cooling capacity or distribution.

Ventilation and Infiltration

For load calculation protocols relating to ventilation and infiltration, refer to Chapter 16 of the 2009 ASHRAE Handbook-Fundamentals.

Datacom facilities' outdoor air requirements may be lower than other facilities' because of the light human occupancy load. In many cases, it is advantageous to precondition this air, with the space under positive pressure, to allow for 100% sensible cooling in the space. If this approach is adopted, however, preconditioning system failure must also be addressed to avoid the potential for widespread condensation in the space.


In some cases, power distribution units (PDUs) are located in the datacom equipment room as the final means of transforming voltage to a usable rating and distributing power to the datacom equipment. The heat dissipation from the transformers in the PDUs should be accounted for by referencing the manufacturer's equipment specifications.


High-efficiency lighting should be encouraged, as well as lighting controls, to minimize lighting heat gain. Additionally, depending on the means of fire suppression and ceiling type, unvented light fixtures should be considered.


Occupancy loads should be considered as light work. People often comprise the only internal latent load in a datacom facility, which may be a factor in selecting cooling coils, especially if outdoor air is supplied through a dedicated outdoor air system.


Heat gains through the building envelope depend on the building's location and construction type. More detailed design information on envelope cooling loads can be obtained from Chapter 18 of the 2009 ASHRAE Handbook-Fundamentals.

Heating and Reheat

The need for heat in electronic equipment-loaded portions of datacom facilities is typically minimal, because of the high internal heat gains in the spaces. Still, initial or first-day loads in many datacom facilities can be low because of low equipment occupancy, so sufficient heating capacity to offset the outdoor air and envelope losses should be included in the design.

Many computer room air-conditioning (CRAC) units include reheat coils for humidity control. Reheat use for humidity control must be carefully monitored and controlled, because simultaneous heating and cooling wastes energy.


Humidification and/or dehumidification is needed in most environments to meet both the recommended and allowable humidity ranges specified for Class 1 and Class 2 data centers (see Table 1). In most cases, the predominant moisture load is outdoor air, but all potential loads should be considered. Where humidification is not provided, personal grounding is typically used to minimize electrostatic discharge (ESD) failures.

Vapor retarder analysis should also be performed where humidity-controlled spaces contain outside walls or ceilings. Refer to Chapter 27 of the 2009 ASHRAE Handbook-Fundamentals for additional design information.


Heat density of some types of datacom equipment is increasing dramatically. Stand-alone server racks can have heat loads exceeding 30 kW. These increased heat densities require a design engineer to keep abreast of the latest design techniques to ensure adequate cooling, and to pay close attention to the loads of installed electronic equipment and future equipment deployments.

Ensure that local high-density loads are provided with adequate local cooling, even when the overall heat density of the general space is below the high-density threshold. Rack inlet conditions should be checked and verified as adequate to meet the manufacturer's requirements. Refer to Thermal Guidelines for Data Processing Environments (ASHRAE 2008) for additional information and guidance.
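A quick sensible-heat check behind such rack-level verification is the airflow needed to remove a given load at a given air temperature rise, Q = P / (ρ · cp · ΔT). The air properties and the 11 K rise below are assumed illustrative values; actual equipment airflow and ΔT come from the manufacturer's thermal report.

```python
# Sketch of the sensible-heat airflow relation Q = P / (rho * cp * dT).
# Air properties assumed for near-sea-level conditions.

RHO = 1.2    # kg/m^3, assumed air density
CP = 1005.0  # J/(kg*K), specific heat of air

def required_airflow_m3s(load_w: float, delta_t_k: float) -> float:
    """Airflow (m^3/s) to remove load_w watts at a delta_t_k air temperature rise."""
    return load_w / (RHO * CP * delta_t_k)

# A 30 kW rack with an assumed 11 K rise across the equipment:
print(round(required_airflow_m3s(30000.0, 11.0), 2))  # ~2.26 m^3/s
```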

Increases in heat density have made it more difficult to air-cool computers, leading to increased interest in efficient liquid-cooling techniques. Liquid cooling media include water, refrigerants, high-dielectric fluorocarbons, or two-phase fluids such as dielectrics. Some computer manufacturers have already taken this approach, and there are also products available that use liquid cooling at the cabinet level. The reader is encouraged to keep abreast of research and development in this area. ASHRAE (2006) also published a book on liquid cooling, Liquid Cooling Guidelines for Datacom Equipment Centers.

New datacom facilities and those slated for major renovation should consider adding appropriate infrastructure (piping taps, feeders, etc.) for future use and load increases. Retrofitting these "backbones" is typically much more expensive, disruptive, and risky.

Note that local high-density areas can have power densities significantly higher than the average for the center. Ideally, high-density computing equipment should be identified during design so appropriate cooling can be provided.


It may be desirable for HVAC systems serving datacom facilities to be independent of other systems in the building, although cross-connection with other systems may be desirable for backup. Redundant air-handling equipment is frequently used, normally with automatic operation. A complete air-handling system should provide ventilation, air filtration, cooling and dehumidification, humidification, and heating. Refrigeration systems should be independent of other systems and may be required year-round, depending on design.

Datacom equipment rooms can be conditioned with a wide variety of systems, including packaged computer room air-conditioning units and central-station air-handling systems. Air-handling and refrigeration equipment may be located either inside or outside datacom equipment rooms.

Computer Room Air-Conditioning (CRAC) Units and Computer Room Air-Handling Units (CRAH)

CRAC and CRAH units are the most common datacom cooling solution. They are specifically designed for datacom equipment room applications and should be built and tested in accordance with the requirements of ANSI/ASHRAE Standard 127.

Cooling. CRAH units are special-purpose chilled-water air handlers designed for datacom applications. CRAC units are available in several types of cooling system configurations: direct expansion (DX) air-cooled, DX water-cooled, DX glycol-cooled, and dual-cooled (both chilled-water and DX). DX units typically have multiple refrigerant compressors with separate refrigeration circuits. Both CRAH and CRAC units have air filters, and integrated control systems with remote monitoring panels and interfaces. Reheat coils, variable-speed fan controls, and humidifiers are options. Where weather conditions make this strategy economical, CRAC units may also be equipped with propylene glycol precooling coils and associated dry coolers to allow water-side economizer operation, or may also be equipped with mixing boxes to allow air-side economizer operation.

Fig. 4 Datacom Facility with DedicatedOutdoor Air Preconditioning

Location. CRAC/CRAH units are usually located within the datacom equipment room, but may also be remotely located and ducted to the conditioned space. With either placement, their temperature and humidity sensors should be located to properly control inlet air conditions to the datacom equipment within specified tolerances (see Table 1). Analysis of airflow patterns in the datacom equipment room [e.g., with computational fluid dynamics (CFD)] may be required to optimally locate datacom equipment, CRAC/CRAH units, and sensors, to ensure that sensors are not in a location that is not conditioned by the CRAC/CRAH unit they control, or in a nonoptimum location that forces the cooling system to expend more energy than required.

Humidity Control. Types of available humidifiers within CRAC/CRAH units may include steam, infrared, and ultrasonic. Consideration should be given to maintenance and reliability of humidifiers. It may be beneficial to relocate all humidification to a dedicated central system. Another consideration is that some humidification methods or improperly treated makeup water are more likely to carry fine particulates to the space.

Reheat is used in dehumidification mode when air is overcooled to remove moisture. On a call for reheat, sensible heat (typically from electric, hot-water, or steam coils) is introduced to supplement the actual load in the space. Using waste heat of compression (hot gas) for reheat may also be an energy-saving option. This overcooling and reheating should be tightly controlled.

Ventilation. Dedicated outdoor air systems have been installed in many datacom facilities to control space pressurization and humidity without humidifiers and reheat in either the CRAC units or other datacom central cooling systems. Figure 4 shows an independent outdoor air preconditioning system in conjunction with a sensible-only recirculation system. The humidifier in the dedicated outdoor air system often controls the humidity in the datacom equipment room based on dew point.

Central-Station Air-Handling Units

Some large datacom facilities and most telecommunications central offices use central-station air-handling units. Some of their advantages and disadvantages are discussed here.

 Coil Selection and Control.

A wide range of heating and cooling coil types can be used for datacom facilities, but, ideally, any coil design or specification should include modulating control. In addition, when dehumidifying, control of the cooling coil to maintain dew point can be critical to maintaining the datacom facility within temperature and humidity set points. For more information on cooling coil design, see Chapter 22 in the 2008 ASHRAE Handbook-HVAC Systems and Equipment.


Various types of central-station humidification systems can be used for datacom facility applications, with each type offering varying steam quality, level of control, and energy consumption. Available water quality and requirements for water treatment must also be considered when selecting the humidifier type.

 Flexibility and Redundancy Using VAV Systems.

Flexibility and redundancy can be achieved by using variable-volume air distribution, oversizing, cross-connecting multiple systems, or providing standby equipment. Compared to constant-air-volume (CAV) units, variable-air-volume (VAV) equipment can be sized to provide excess capacity but operate at discharge temperatures or airflow rates appropriate for optimum temperature and humidity control, reducing operational fan power requirements and the need for reheat.
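The fan-energy advantage of VAV turndown can be sketched with the fan affinity laws. This is a simplified illustration, assuming an idealized fan with constant efficiency and no fixed static pressure offset; real systems deviate, so treat the result as an upper-bound estimate rather than a design value.

```python
# Sketch: fan affinity laws show why VAV saves energy versus CAV.
# Idealized assumptions: constant fan efficiency, no static pressure offset.

def fan_power_fraction(airflow_fraction: float) -> float:
    """Fan power scales roughly with the cube of airflow (affinity laws)."""
    return airflow_fraction ** 3

# A VAV unit turned down to 70% of design airflow:
power = fan_power_fraction(0.70)
print(f"Fan power at 70% airflow: {power:.0%} of full-load power")  # 34%
# i.e., roughly a two-thirds reduction in fan energy at that operating point
```

This cubic relationship is why oversized VAV equipment running at partial airflow can still be more economical to operate than right-sized constant-volume units.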

Common pitfalls of VAV include shifts in underfloor pressure distribution and associated flow through tiles. Airflow should be modeled using CFD or other analytical techniques to ensure that the system can modulate without adversely affecting overall airflow and cooling capability to critical areas.

Chilled-Water Distribution Systems

Chilled-water distribution systems should be designed to the same standards of quality, reliability, and flexibility as other computer room support systems. Where growth is likely, the chilled-water system should be designed for expansion or addition of new equipment without extensive or disruptive shutdown. Figure 5 illustrates a looped chilled-water system with sectional valves and multiple valved branch connections. The branches could serve air handlers or water-cooled computer equipment.

The valve quantity and locations allow modifications or repairs without complete shutdown because chilled water can be fed from either side of the loop. This loop arrangement is a practical method of improving the reliability of a chilled-water system serving a computer room. "Future taps" should have blind flanges with a pressure gage and drain between the flange and isolation valve to allow the valve to be exercised and checked for holding performance. Sectional valves should be suitable for bidirectional flow and tight shutoff from flow in either direction, to allow maintenance on either side of the valve. In some cases, multiple valves may be required to allow maintenance of the valves themselves.

Where chilled water serves CRAC units or other packaged equipment in the datacom equipment room, select water temperatures that satisfy the space sensible cooling loads without causing latent cooling. Because datacom equipment room loads are primarily sensible, chilled-water supply temperature can be higher than in commercial applications. Greater differentials between the supply and return chilled-water temperatures allow reduced chilled-water flow, which saves pump energy and piping installed costs.
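The flow reduction from a wider temperature differential follows directly from the water-side heat balance Q = m·cp·ΔT. The sketch below uses nominal water properties; the 500 kW load and the 6 K versus 10 K differentials are illustrative values, not figures from this chapter.

```python
# Sketch: why a larger chilled-water delta-T reduces flow and pump energy.
# Nominal water properties; load and delta-T values are hypothetical examples.

CP_WATER = 4.19   # specific heat of water, kJ/(kg*K)
RHO_WATER = 1000  # density of water, kg/m3

def chw_flow_l_per_s(load_kw: float, delta_t_k: float) -> float:
    """Chilled-water flow (L/s) needed to absorb a sensible load."""
    mass_flow = load_kw / (CP_WATER * delta_t_k)   # kg/s
    return mass_flow / RHO_WATER * 1000            # convert m3/s to L/s

# A 500 kW datacom load at a conventional 6 K rise vs. a wider 10 K rise:
print(chw_flow_l_per_s(500, 6))   # ~19.9 L/s
print(chw_flow_l_per_s(500, 10))  # ~11.9 L/s -> smaller pipes, less pump power
```

The 40% flow reduction translates into smaller piping and, because pump power varies with flow and head, substantially lower pumping energy.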

To provide better temperature control of datacom equipment in a data center, numerous manufacturers offer products where liquid (water or refrigerant) is brought close to the datacom rack and used to remove the heat generated by the datacom equipment. Liquid-cooled heat exchangers placed in strategic locations are used to cool hot air exhausted from the datacom equipment, thereby removing either all or part of the equipment's heat load.

Chilled-water pipe insulation with a vapor barrier is required to prevent condensation, but not to prevent thermal loss in a cold plenum; therefore, minimum insulation thickness should be considered, because insulated piping can restrict underfloor air distribution.

Fig. 5 Chilled-Water LoopDistribution


Heat rejection in datacom facilities can be with either water-cooled or air-cooled systems. Basic information on condenser water systems can be obtained from Chapter 13 in the 2008 ASHRAE Handbook-HVAC Systems and Equipment. Where evaporative cooling or open-cell cooling towers are used, consider using makeup water storage as a back-up to the domestic water supply (which provides condenser makeup water).

The "dry-cooler" system incorporates a closed glycol piping loop, which transfers heat from a unit-mounted condenser to an outdoor-air-to-glycol heat exchanger. The same glycol loop is sometimes attached to an economizer cooling coil, installed in the airstream of the CRAC unit, which allows for partial free cooling when the glycol loop temperature is below the unit's return air temperature.

Air-cooled systems generally support CRAC units with built-in refrigeration compressors and evaporating coils. These systems reject heat to remote air-cooled refrigerant condensers. Air-cooled and dry-cooler systems eliminate the need for makeup water systems (and back-up makeup water systems). Cooling towers, dry coolers, etc., need the same level of redundancy and diversity required of the chillers and other critical infrastructure.


Air-conditioning systems should be designed to match the anticipated cooling load and be capable of expansion if necessary; year-round, continuous operation may be required. Expansion of air-conditioning systems while maintaining continuous operation of the data center may also be necessary. A separate system for datacom equipment rooms may be desirable where system requirements differ from those provided for other building and process systems, or where emergency power requirements preclude a combined system.


Because datacom facilities often use large quantities of energy, cooling systems should be designed to maximize efficiency. For many facilities, water-cooled chillers are likely the most efficient system. Basic information on chillers can be obtained from Chapter 42 of the 2008 ASHRAE Handbook-HVAC Systems and Equipment.

Part-load efficiency should also be considered during chiller selection, because data centers often operate at less than peak capacity. Chillers with variable-frequency drives, high evaporator temperatures, and low entering condenser water temperatures can have part-load operating efficiencies of 0.1 kW per kilowatt of cooling or less. The relative energy efficiency of primary versus secondary pumping systems should also be analyzed to optimize energy consumption.
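To put the 0.1 kW-per-kW figure in perspective, it can be converted to a coefficient of performance and used to estimate annual chiller energy for a continuously running facility. The 1000 kW average load and the 0.15 kW/kW comparison value below are hypothetical examples, not chapter data.

```python
# Sketch: interpreting chiller part-load efficiency expressed in kW per kW of
# cooling. Load values are illustrative assumptions.

def cop_from_kw_per_kw(kw_per_kw_cooling: float) -> float:
    """COP is the reciprocal of electrical input per unit of cooling."""
    return 1.0 / kw_per_kw_cooling

def annual_chiller_kwh(avg_load_kw: float, kw_per_kw: float, hours: float = 8760) -> float:
    """Chiller electrical energy for a data center running all year."""
    return avg_load_kw * kw_per_kw * hours

print(cop_from_kw_per_kw(0.1))         # COP of 10.0
# A hypothetical 1000 kW average load at 0.10 vs. 0.15 kW/kW:
print(annual_chiller_kwh(1000, 0.10))  # ~876,000 kWh/yr
print(annual_chiller_kwh(1000, 0.15))  # ~1,314,000 kWh/yr
```

Even a 0.05 kW/kW difference in part-load efficiency compounds into hundreds of megawatt-hours per year at data center scale, which is why part-load behavior deserves attention during selection.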

The recommended data center air temperature may allow higher chilled-water temperatures. Chiller and chiller plant efficiencies can be enhanced by proper selection of the chilled-water supply and return temperatures.

Heat recovery chillers may be an efficient way to recover heat from datacom equipment environments for use in other applications. The heat recovery system must provide the reliability and redundancy needed by the facility. System operation, servicing, and maintenance should not interfere with facility operation.


Pumps and pumping system design should take into account energy efficiency, reliability, and redundancy. It may be possible to design pumps with variable-speed drives so that the redundant pump is always operational; ramp-up to full speed occurs on loss of an operating pump.

Basic information on pumps can be obtained from Chapter 43 of the 2008 ASHRAE Handbook-HVAC Systems and Equipment, and piping systems are covered in Chapter 12 of that volume.


Chilled-water and glycol piping must be pressure-tested, fully insulated, and protected with an effective vapor retardant. The test pressure should be applied in increments to all sections of pipe in the computer area during construction. In new construction, piping is often installed in trenches below the raised floor to minimize its effect on air distribution. Typically, leak detection is provided along the piping path. When installed overhead, secondary containment is often provided for all piping in datacom equipment rooms or critical electrical support spaces. Secondary containment systems should incorporate leak detection capability, to detect condensation and identify leaks from damaged piping, valves, fittings, etc. Leak detection should also be placed wherever water piping passes through any critical space, regardless of pipe elevation.

Piping specialty considerations should include a good-quality strainer installed at the inlet to local cooling equipment to prevent control valve and heat exchanger passages from clogging. Strategically placed drains and vents must be included locally at all equipment. Thermometers and other sensors should be installed in a serviceable manner, such as in drywells. Pressure gages should include gage cocks.

If cross connections with other systems are made, the effect of introducing dirt, scale, or other impurities on datacom equipment room systems must be addressed.


Many types of humidifiers may be used to serve datacom equipment areas, including steam-generating (remote or local), pan (with immersion elements or infrared lamps), and evaporative types (wetted pad and ultrasonic). Ultrasonic devices should use deionized water to prevent formation of abrasive dusts from crystallization of dissolved solids in the water. In general, care must be taken to ensure that particulates or chemicals corrosive to datacom equipment are not used.

The humidifier must be responsive to control, maintainable, and free of moisture carryover. The humidity sensor should be located to provide control of air inlet conditions to the equipment. For additional information, see Chapter 21 of the 2008 ASHRAE Handbook-HVAC Systems and Equipment.

Controls and Monitoring

 Controls. Control systems must be capable of reliable control of temperature, relative humidity, and, where required, pressurization within tolerance from set point. Control systems serving spaces requiring high availability must be designed so that component or communication failures do not result in failure of the controlled HVAC equipment.

There are a number of ways to accomplish this, but the general approach is to use multiple distributed control systems in a manner such that no system can cause the failure of another system. Where required, HVAC components and their power supplies should have dedicated controllers installed to ensure automatic and independent operation of redundant HVAC systems in the event of failures. In many designs, electrical power for HVAC controls must be from a UPS to maintain proper system operation during interruption of normal power.

Based on Table 1, control should be established that provides an inlet condition to data center equipment and telecommunications equipment of 18 to 27°C. Care is needed to ensure that sensors are properly located, tuned, and calibrated, especially if converting from legacy control based on return air temperature. CFD analysis as well as control system simulation may be needed for a successful retrofit.

Where multiple packaged units are provided, regular calibration of controls may also be necessary to prevent individual units from working against each other. Errors in control system calibration, differences in unit set points, and sensor drift can cause multiple-unit installations to simultaneously heat and cool, and/or humidify and dehumidify, wasting a significant amount of energy. Also consider integrated control systems that communicate from unit to unit, sharing set points and sensor data to ensure coordination and reduce the potential for units to work against each other. Lead/lag control could also be used, if desired.

 Monitoring. Datacom facilities often require extensive monitoring of the mechanical and electrical systems. Multiple interface gateways are often used to interface different monitoring and control systems to the head-end monitoring system and ensure that failure of individual system communication components does not remove access to the total system information database.

Monitoring should include control system sensors as well as independent "monitoring-only" sensors and should include datacom equipment areas, critical infrastructure equipment rooms, command/network operations centers, etc., to ensure critical parameters are maintained. Monitoring also should be sufficient to ensure that anomalies are detected early, with adequate time for operating staff to mitigate and restore conditions before equipment is affected. Monitored data can facilitate trending, alarming, and troubleshooting efforts.

Examples of suitable parameters for monitoring include underfloor static air pressure, temperature, and humidity; early-warning smoke detection; ground currents; and rack inlet temperatures and humidity. Monitoring systems can be integral to or separate from control systems and can be as simple as portable data loggers or strip chart recorders or as complex as high-speed (GPS-synchronized) forensic time stamping of critical breakers and status points. New technologies allow distributed monitoring sensors to be connected to the data and communication network without separate wiring systems.

Because datacom equipment malfunctions may be caused by or attributed to improper control of the datacom room environmental conditions, it may be desirable to keep permanent records of the space temperature and humidity. Many datacom equipment manufacturers embed temperature and humidity sensors in their equipment, which in turn can be correlated with equipment function and also provide for reduced-capacity operation or shutdown to avoid equipment damage from overheating. In the future it may be possible to connect IT sensors to building systems for monitoring and control.

As a minimum, alarms should be provided to signal when temperature or humidity limits are exceeded. Properly maintained and accurate differential pressure gages for air-handling equipment filters can help prevent loss of system airflow capacity and maintain design environmental conditions. All monitoring and alarm devices should provide local indication as well as interface to the central monitoring system.


To provide effective cooling, air distribution should closely match load distribution. Distribution systems should be flexible enough to accommodate changes in the location and magnitude of heat gains with minimal change in the basic distribution system. Distribution system materials should ensure a clean air supply. Duct or plenum material that may erode must be avoided. Access should be maintained for cleaning or replacement as needed.

Equipment Placement and Airflow Patterns

 Datacom Equipment Airflow Protocols. Datacom equipment is typically mounted in racks or cabinets arranged in rows. In a typical configuration, the "front" of cabinets, racks, or frames (i.e., the side with the air inlets) faces one aisle, and the rear, which includes cable connections, faces another aisle. The cabinets or racks in a datacom environment are usually 1.98 m high, whereas telecommunications frames are generally 1.83 or 2.13 m high. Each cabinet or rack may contain a single piece of equipment, or it may contain any number of individual items of equipment, in sizes as small as 1U, where 1U = 44 mm (EIA Standard 310).
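The rack-unit dimension above can be used to relate cabinet height to usable equipment capacity. A minimal sketch follows; the 100 mm allowance for frame structure and clearances is a hypothetical illustration, not a figure from the standard.

```python
# Sketch: relating EIA rack units to the cabinet heights quoted above.
# 1U = 44 mm per the text (EIA Standard 310). The 100 mm frame/clearance
# allowance is an assumed value for illustration only.

U_MM = 44

def rack_units_that_fit(cabinet_height_mm: float, overhead_mm: float = 100) -> int:
    """Whole rack units fitting in a cabinet, reserving some frame overhead."""
    return int((cabinet_height_mm - overhead_mm) // U_MM)

print(rack_units_that_fit(1980))  # a 1.98 m datacom cabinet -> 42U
```

This is consistent with the common 42U cabinet; a taller 2.13 m telecommunications frame would accommodate a few more units under the same assumption.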

Typically, supply air is drawn into the inlet of the datacom equipment cabinet or rack, picks up heat internal to the equipment, and is then discharged, typically from a different side of the equipment. The air then travels back to the HVAC cooling coil, where the heat is rejected.

To cool datacom equipment efficiently and effectively, there needs to be complementary directivity for airflow through the equipment and airflow through the datacom equipment room. ASHRAE's Thermal Guidelines for Data Processing Environments (ASHRAE 2008) and Telcordia (2001) define recommended airflow protocols through datacom equipment. Figure 6 shows the three communications equipment airflow protocols that are recommended for use in datacom facilities. The front-to-rear (F-R) protocol has cool air entering the front of the equipment rack (or cabinet) and exiting the rear. The front-to-top (F-T) protocol has cool air entering the front of the equipment cabinet and exiting the top. The front-to-top-and-rear (F-T/R) protocol has cool air entering the front of the equipment and exiting both the top and the rear. Rack-mounted equipment should follow the F-R protocol only. Cabinet-mounted systems can follow any of the three shown.

Fig. 6  Recommended Equipment Airflow Directivity

Other airflow protocols for rack-mounted datacom equipment direct airflow through the left and/or right sides of the equipment within the rack. For these installations, airflow within the rack must be managed to ensure complementary directivity of airflow between independent shelves of data and communications equipment. In addition, spacing between adjacent racks in the same line-up of equipment must be adequate to ensure the appropriate segregation of hot exhaust and cold intake airstreams.

 Hot Aisle/Cold Aisle Configuration. Using alternating hot and cold aisles promotes separation of the cool supply and warm return streams, which generally leads to lower equipment inlet temperatures and greater energy efficiency. Figure 7 shows a schematic view of a hot-aisle/cold-aisle configuration.

Fig. 7 Schematic of Hot-Aisle/Cold-Aisle Configuration

Underfloor Plenum Supply

Datacom facilities often use an underfloor plenum to supply cooling air to the equipment. As shown in Figure 8, the CRAC units push cold air into the plenum, from which it is introduced into data and communications equipment rooms via perforated floor tiles, tile cutouts, and other openings. The raised-floor design offers flexibility in placing computer equipment above the raised floor. Cool air can, in theory, be delivered to any location simply by replacing a solid floor tile with a perforated tile.

With a hot-aisle/cold-aisle configuration, perforated tiles are placed in the cold aisle. Cool air delivered by the perforated tiles is drawn into the front of the racks. Warm air is exhausted from the back of the racks into the hot aisle and is ultimately returned to the CRAC units.

Often, the underfloor plenum is used for cables, electrical conduits, and pipes. These obstructions in the plenum can interfere with airflow. When determining plenum depth, below-floor obstructions must be considered.

Airflow Inlet Delivery Concerns. For good thermal management, required airflow must be supplied through the perforated tile(s) located near the inlet of each piece of datacom equipment. The heat load can vary significantly across datacom equipment rooms, and changes with addition or reconfiguration of hardware. For datacom equipment to operate reliably, the design must ensure that cool air distributes properly (i.e., the distribution of airflow rates through perforated tiles matches the cool-air needs of equipment on the raised floor).

When adequate airflow is not supplied through the perforated tiles, internal fans in the equipment racks tend to draw air through the front of the cabinet from the path of least resistance, which typically includes the space to the sides of and above the racks. Because most of this air originates in the hot aisle, its temperature is high. Thus, cooling of the sides and upper portion of the equipment racks can be seriously compromised.

Fig. 8 Schematic of Datacom Equipment Room with Underfloor Plenum Supply Air Distribution

Air tends to stratify, with cold supply air near the floor and hot air near the ceiling, with a temperature gradient between. High discharge air velocity through floor tiles or grates is necessary to displace warm air near the highest intakes. Floor grates can be useful because of their high mass flow discharge rates.

Pressure Variations. Distribution of cool airflow through perforated tiles is governed by the fluid mechanics of the space below the raised floor, not the large, visible, above-floor space. More specifically, the static pressure and air movement in the proportionately small underfloor space determine how much air flows through each perforated tile. Measurements from hundreds of datacom facilities confirm that flow rates from perforated tiles typically vary considerably, depending on their proximity to the CRAC unit.

Further, the pattern of airflow distribution is somewhat counterintuitive. More flow might be expected through tiles near the CRAC unit, and less away from it. In reality, there is typically very little flow near the CRAC, and greater flow through the perforated tiles located far away. Consequently, IT equipment placed near the CRAC often does not get much cool air.

The flow rate through a perforated tile depends on the pressure difference across the tile (i.e., the difference between the plenum static pressure just below the tile and the room static pressure above the raised floor). Pressure variations in data and communications equipment rooms are generally small compared to the pressure drop across the perforated tiles. The tiles are fairly restrictive (e.g., 25% or less open area). When substantial numbers of tiles with greater open area are used, airflow through the tiles may also depend on airflow dynamics above the raised floor; CFD analysis or physical measurements may be required to ensure that a design meets equipment airflow requirements.
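The pressure-flow relationship described above can be sketched by treating the tile as an orifice, with flow proportional to the square root of the pressure difference. The discharge coefficient and the 20 Pa underfloor pressure below are illustrative assumptions, not ASHRAE or Telcordia figures.

```python
# Sketch: flow through a perforated tile modeled as a simple orifice,
# Q = Cd * A_open * sqrt(2 * dP / rho). Cd and the pressure value are
# assumed for illustration; real tiles are characterized by test data.
import math

RHO_AIR = 1.2  # kg/m3, nominal air density

def tile_airflow_m3s(dp_pa: float, open_area_m2: float, cd: float = 0.65) -> float:
    """Volumetric airflow through a tile for a given pressure difference."""
    return cd * open_area_m2 * math.sqrt(2 * dp_pa / RHO_AIR)

# A 600 mm x 600 mm tile with 25% open area at 20 Pa underfloor pressure:
open_area = 0.6 * 0.6 * 0.25            # 0.09 m2
print(tile_airflow_m3s(20, open_area))  # ~0.34 m3/s
```

The square-root dependence explains why modest local differences in underfloor static pressure produce the considerable tile-to-tile flow variation that field measurements report.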

Under some conditions, nonuniformity of airflow distribution is so severe that perforated tile airflow is directed from the room down into the floor plenum. This effect is caused when most of the CRAC unit fan's total pressure is transformed into velocity pressure by high velocity in a relatively shallow underfloor plenum. This high-velocity underfloor air can create a localized negative pressure and induce small quantities of room air into the underfloor plenum. As distance from the supply fan increases, velocity decreases and the velocity pressure is converted to static pressure, which is required to produce airflow through perforated tiles or grates.
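The velocity-pressure effect can be quantified with the dynamic pressure relation Pv = ρV²/2: near the CRAC discharge, much of the fan's total pressure exists as velocity pressure, leaving little static pressure to push air up through nearby tiles. The plenum velocities below are assumed values for illustration.

```python
# Sketch of the velocity-pressure effect: dynamic pressure Pv = 0.5*rho*V^2.
# The 8 m/s and 2 m/s plenum velocities are hypothetical examples.

RHO_AIR = 1.2  # kg/m3, nominal air density

def velocity_pressure_pa(velocity_m_s: float) -> float:
    """Dynamic (velocity) pressure of an airstream."""
    return 0.5 * RHO_AIR * velocity_m_s ** 2

# Near the CRAC discharge the underfloor air moves fast; far away it slows:
print(velocity_pressure_pa(8.0))  # ~38 Pa tied up as velocity pressure
print(velocity_pressure_pa(2.0))  # ~2.4 Pa -> most pressure is static here
```

Since total pressure is roughly conserved along the plenum, the ~36 Pa recovered as the air decelerates becomes static pressure available to drive flow through distant tiles, matching the counterintuitive far-from-CRAC flow pattern described earlier.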

Other Factors Affecting Airflow Distribution. Other factors that influence distribution of airflow through perforated tiles include the following:

·Height of raised-floor plenum

·Percentage of open area of perforated tiles

·Location and size of leakage airflow paths

·Locations and redundancy of CRAC units

·Corresponding spreading of underfloor flow to various perforated tile locations

·Collision or merging of airstreams from different CRACs

·Flow disturbance caused by underfloor blockages such as pipes and cable trays

There is a common misconception that using more open tiles increases the airflow rate. Obviously, for the same static pressure in the plenum, more open tiles produce more airflow than more restrictive tiles. However, static pressure in the plenum cannot be assumed to be constant: it is a result of the tiles' flow resistance and other factors. The airflow rate is controlled by the amount of flow the CRAC unit blower is able to supply. For the blower, the controlling resistance is primarily internal to the CRAC unit, and the additional flow resistance offered by the perforated tiles is insignificant. Lowering perforated tile resistance typically does not significantly increase overall flow.

Fig. 9 Typical Ducted Ceiling Distribution Used in Datacom Facilities

When very restrictive tiles (e.g., 5% open) are used, their flow resistance can influence the flow delivered by the CRAC unit blower. In this case, plenum pressure becomes high enough that the blower operates at a new position on the fan curve. The need for very restrictive tiles should thus be avoided, if possible, although they could be necessary with low floor heights where the plenum is restricted and underfloor pressure distribution variation is high.

Using more restrictive tiles leads to a more uniform airflow distribution, but increases underfloor static pressure and may drive more airflow through leakage paths. Instead of using restrictive tiles everywhere, selective use of restrictive and open tiles in different locations can obtain the desired airflow distribution. To make flow distribution uniform, it is typically necessary to increase the percentage open area of tiles near the CRAC unit and to decrease it farther away from the CRAC unit.

Finally, underfloor obstructions can cause significant airflow variations in the underfloor plenum. A raised-floor datacom facility usually contains pipes, cable trays, and structural columns. These obstructions disturb the airflow pattern under the floor, influence pressure distribution, and thus affect airflow coming out of the perforated tiles.

Because obstructions reduce the area available for flow, air velocity increases, leading to more significant pressure changes. Usually, static pressure increases on the upstream side of an obstruction and decreases on the downstream side, with the lowest static pressure at the point of highest velocity. Because of this effect, two adjacent perforated tiles located around an obstruction may yield very different airflow rates.

When redundant (standby) CRAC units are provided, air distribution in the datacom room changes depending on which units are operating.

Another consideration is the effect of dampers on the performance of perforated tiles and floor grates. Typically, dampers are two slotted metal plates installed on the bottom surface of a tile or grate, creating a plenum between the damper and discharge surface. Even in the fully open position, there is a pressure drop across the damper, reducing static pressure across the discharge surface and resulting in lower air velocities than tiles or grates without dampers. Low discharge air velocity can cause problems in supplying cooling air to the highest air intakes of tall racks or cabinets.

Overhead and Ceiling Plenum Supply

As with an underfloor plenum supply, the hot-aisle/cold-aisle configuration should also be used with an overhead supply. Currently, overhead cooling methods are more typically found in telecommunications facilities.

Ducted Supply. Overhead ducted supply, shown in Figure 9, can be used in datacom facilities without a raised-floor plenum. The vertical overhead (VOH) system is currently the typical and preferred configuration of the large regional phone companies, although this is changing to accommodate new technologies. This system can satisfy equipment and personnel comfort requirements, and can be fairly easily balanced to supply air to meet the distributed heat gain in the equipment room.

The vertical overhead supply system is typically limited to a cooling capacity of 1400 W/m² (Telcordia 2001) in mature telecommunications central offices, where the definition of watts per square meter is the average heat release of the datacom equipment in a 6.1 by 6.1 m building bay. This limitation stems from the large physical duct sizes required near the ceiling. Air distribution is often affected by large overhead cabling installations. For cooling capacities exceeding 1400 W/m², new facilities with vertical overhead supply systems may incorporate some type of aisle containment. Aisle containment can be either hot-aisle or cold-aisle; both have physical barriers to prevent mixing of the cooling airstream with the return airstream between the electronic equipment and the air handlers.
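The bay-averaged density limit can be translated into a per-bay heat load with simple arithmetic. The bay dimensions and VOH limit come from the text above; the 60 kW equipment load used for comparison is a hypothetical example.

```python
# Sketch: what the 1400 W/m2 vertical-overhead limit means for one building bay.
# Bay size (6.1 m x 6.1 m) is from the text; the 60 kW load is hypothetical.

BAY_SIDE_M = 6.1
VOH_LIMIT_W_PER_M2 = 1400

bay_area = BAY_SIDE_M ** 2                    # ~37.2 m2
bay_limit_w = VOH_LIMIT_W_PER_M2 * bay_area   # ~52 kW per bay

def bay_density_w_per_m2(total_equipment_watts: float) -> float:
    """Average heat release for a bay, for comparison against the VOH limit."""
    return total_equipment_watts / bay_area

print(f"Bay limit: {bay_limit_w / 1000:.1f} kW")
print(bay_density_w_per_m2(60_000) > VOH_LIMIT_W_PER_M2)  # exceeds the limit
```

A bay housing about 52 kW of equipment sits at the VOH ceiling; loads beyond that are what push new facilities toward aisle containment.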

 Local and Supplemental Cooling Distribution. Recently, cooling systems have been developed in which small, modular cooling units are located close to the equipment racks, either within the rows of racks themselves, on top of racks, or mounted on the ceiling directly over a cluster of racks. An extreme example is a sealed rack system that provides its own cooling and has no heating effect on air inside the data center. Local cooling units may provide all the cooling for a data center or be used to cool specific areas in the data center, typically higher-density areas.

Local cooling systems can be incorporated into new construction, and are also a good supplemental cooling approach to consider for retrofit applications where existing facilities lack sufficient infrastructure to expand their base cooling capacity. Supplemental cooling equipment may be located in or on a rack and include products that increase cool-air supply to a rack, capture hot-air exhaust from a rack, or partially cool air leaving the rack.

In designing and implementing local cooling, it is important to perform a thorough analysis to ensure that a cooling solution for one part of the facility does not create a problem in another part of the facility.

 Other Overhead Cooling Strategies. Other cooling strategies are used, predominantly in telecommunications central offices. Horizontal displacement (HDP) air distribution, mainly used in Europe and Asia, introduces air horizontally from one end of a cold aisle. A large volume of slightly cooled air moves along the aisle with low velocity. Subsequently, the electronic equipment draws necessary cold air from the cold aisle. However, this system requires more floor space to accommodate the large displacement diffusers.

Some long-distance carriers in North America use horizontal overhead (HOH) air distribution. This system introduces supply air horizontally above the cold aisles, and is generally used where raised floors are used for cabling.

Finally, natural convection overhead (NOH) air distribution, not commonly used, suspends cooling coils from the ceiling. Because the coils cool the hot air as it rises (because of buoyancy), there are no fans or ducting in this strategy.

For more information on different air-cooling strategies for telecommunications central offices, including cooling capacities, see Telcordia (2001).

Return Air

Return air paths discussed here can be used with either underfloor or overhead supply air systems. Most return air in datacom facilities is not ducted (i.e., heat discharged from datacom equipment enters the datacom equipment room at large and finds a pathway back to a large common return grille or to the inlet of a CRAC unit). This can be effective, but the opportunity for inefficiency, in some circumstances, is great because of potential short-circuiting and mixing of supply and return air. In addition, it is possible to draw hot equipment exhaust from one piece of equipment into the inlet of adjacent equipment. An example of how air can potentially short-circuit with an underfloor supply and unducted return configuration is highlighted in Figure 8.

Using ceiling plenums is an option for return air. Inlets should be located above hot aisles or datacom equipment with high heat dissipation to take advantage of the thermal plume created above the equipment. Ceiling plenum returns can capture part of the heat from data and communications equipment and lights directly in the return airstream. Assuming that the space is allowed to stratify, the return air being at a higher temperature than the average space temperature creates a higher temperature difference across the cooling coil, thereby allowing a reduction in airflow to the space.
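The airflow-reduction argument follows from the air-side heat balance Q = m·cp·ΔT: for the same sensible load, required supply airflow varies inversely with the supply-to-return temperature difference. The sketch below uses nominal air properties; the 100 kW load and the two ΔT values are illustrative assumptions.

```python
# Sketch: supply airflow needed for a sensible load at different coil delta-Ts.
# Nominal air properties; load and temperature differences are hypothetical.

CP_AIR = 1.006   # specific heat of air, kJ/(kg*K)
RHO_AIR = 1.2    # density of air, kg/m3

def supply_airflow_m3s(load_kw: float, delta_t_k: float) -> float:
    """Supply airflow required to remove a sensible heat load."""
    mass_flow = load_kw / (CP_AIR * delta_t_k)  # kg/s
    return mass_flow / RHO_AIR                  # m3/s

# 100 kW sensible load: well-mixed return (small dT) vs. stratified
# ceiling-plenum return (larger dT across the coil):
print(supply_airflow_m3s(100, 8))   # ~10.4 m3/s
print(supply_airflow_m3s(100, 12))  # ~6.9 m3/s -> less fan airflow needed
```

Capturing warmer stratified air at the ceiling therefore reduces both the airflow the fans must move and, via the fan laws, the fan energy consumed.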

Computational Fluid Dynamics Simulation

Air is the main carrier of heat and moisture in data centers. It is challenging but important to optimize the flow paths of both cold supply air and hot return air to minimize mixing of these two streams as well as reduce any short-circuiting of cold air back to the air-conditioning systems.

Several factors affect airflow distribution and cooling performance of a data center. Physical measurements and field testing are not only time and labor intensive but sometimes impossible. In such a situation, computational fluid dynamics (CFD) simulations provide a feasible alternative for testing various design layouts and configurations in a relatively short time.

CFD simulations can, for example, predict air velocities, pressure, and temperature distribution in the entire data center facility; assess airflow patterns around racks and identify areas of recirculation; and provide a detailed cooling audit report for the facility, including performance evaluation of individual components such as air-conditioning units, racks, perforated tiles, and any other supplementary cooling systems in the facility.

Facilities managers, designers, and consultants can use these techniques to estimate the performance of a proposed layout (with or without performance metrics) before actually building the facility. Likewise, CFD simulations can provide appropriate insight and guidance in reconfiguring existing facilities to optimize the cooling and air distribution system. For details on performing CFD, see Chapter 13 of the 2009 ASHRAE Handbook: Fundamentals.

                   ANCILLARY SPACES

Space must be allocated within a datacom facility for storing components and material, for support equipment, and for operating and servicing the datacom equipment. Some ancillary spaces may require environmental conditions comparable to those of the datacom equipment, whereas others may have less stringent requirements. Component and material storage areas often require environmental conditions comparable to those of the datacom equipment. Support equipment often has substantially less stringent environmental requirements, but its continuous operation is often vital to the facility's proper functioning.

Electrical Power Distribution and Conditioning Rooms

Electrical Power Distribution Equipment. Electrical power distribution equipment can typically tolerate more variation and a wider range of temperature and humidity than datacom equipment. Equipment in this category includes incoming service/distribution switchgear, switchboards, automatic transfer switches, panelboards, and transformers. Manufacturers' data should be checked to determine the amount of heat release and design conditions for satisfactory operation. Building codes should be checked to identify when equipment must be enclosed to prevent unauthorized access or housed in a separate room.

Uninterruptible power supplies (UPSs) come in various configurations, but most often use batteries as the energy storage medium. They are usually configured to provide redundancy for the central power buses, and typically operate continuously at less than full-load capacity. They must be air conditioned with sufficient redundancy and diversity to provide an operable system throughout an emergency or accident. The relationship between load and heat release is usually nonlinear. Verification with the equipment vendor is necessary to properly size the HVAC system.

UPS power monitoring and conditioning (rectifier and inverter) equipment is usually the primary source of heat release. This equipment usually has self-contained cooling fans that draw intake air from floor level or the equipment face and discharge heated air at the top of the equipment. Air-distribution system design should take into account the position of the UPS air intakes and discharges.


Installation of secondary battery plants as a temporary back-up power sourceshould be in accordance with NFPA Standard 70; IEEE Standard 1187 and otherapplicable standards should also be referenced, in addition to a design reviewwith the local code official. Other relevant sources of guidance are NFPAStandards 70E and 76.

Most codes require 0.005 m³/(s·m²) of forced exhaust from a battery room when hydrogen detectors are not used. The exhaust fan(s) (typically ceiling-mounted) must run continuously. When hydrogen detectors are used, the exhaust is also sized for 0.005 m³/(s·m²), but the fan(s) only need to run when hydrogen is detected.
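The sizing rule quoted above can be sketched directly; the 40 m² room area below is illustrative:

```python
# Sketch of the code-minimum battery room exhaust sizing described above:
# 0.005 m3/s of forced exhaust per m2 of floor area. The room area used
# here is a hypothetical example.

EXHAUST_RATE = 0.005  # m3/(s*m2), per the code requirement cited in the text

def battery_room_exhaust_m3s(floor_area_m2: float) -> float:
    """Exhaust flow (continuous, or hydrogen-detector-triggered) for a battery room."""
    return EXHAUST_RATE * floor_area_m2

print(battery_room_exhaust_m3s(40.0))  # 40 m2 room -> 0.2 m3/s
```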

Because of the potential high hazard associated with hydrogen gas build-up, battery room exhaust systems should be designed with redundancy and failure alarms. Best-practice design includes continuous fan operation as well as both "high" and "high-high" alarms in case one sensor goes out of calibration. If the system operates when hydrogen gas is detected, or if combustible concentrations of gas are expected, explosion-proof motors and/or other provisions may be required to address an explosion hazard.

Nonsparking fan wheels may be required by code, and in any case are highly recommended. For belt-drive systems, the fans should be equipped with controls that periodically exercise them to keep the belts pliable and therefore more reliable.

Temperature in a battery area is crucial to the life expectancy and operation of the batteries. The optimum space temperature for lead-calcium batteries is 25°C. Maintaining higher temperatures may reduce battery life; maintaining lower temperatures may reduce the batteries' ability to hold a charge (IEEE Standard 484).

Battery rooms should be maintained at a negative pressure relative to adjacent rooms and exhausted to the outside to prevent migration of fumes, gases, or electrolytic spray to other areas of the facility. Because battery rooms typically have only small heat-producing loads, it may be possible to provide makeup air from an adjacent datacom area, thereby eliminating the need for a separate HVAC system for the battery room, if temperatures are compatible.

Battery rooms may require emergency eyewashes and showers. If so, these systems should include leak detection and remote alarm capabilities to alert staff that a possible leak or accident has occurred and emergency first aid is required.


Engine-driven generators used for primary or emergency power require large amounts of ventilation when running. This equipment is easier to start if a low ambient temperature is avoided; low-temperature start problems are often reduced in cold climates by using engine block heaters. Design should ensure that exhaust air does not recirculate back to any building ventilation air intakes.

Spring-return motorized dampers are typically provided on air inlets and discharges and maintained normally closed when power is available to the damper actuator. Damper actuator signals are generally from the generator electrical gear as opposed to the building management system (BMS).

Where acoustical concerns exist, measures may need to be taken both inside the engine/generator room, to meet the appropriate OSHA regulations and guidelines [e.g., OSHA (1996)], and on the air intake/discharge openings if the site is near an acoustically sensitive property line.

Burn-In Rooms and Test Labs

Many datacom facilities incorporate a dedicated area for assembling, configuring, and testing datacom equipment before deployment in the production environment. These areas can be used for testing equipment power supplies, dual-power capabilities, actual power draw, and cooling requirements, as well as for equipment applications testing (both software and hardware functions).

It is recommended that these areas be adjacent to production areas for convenience, yet separated with respect to power, cooling, and fire protection to prevent a power problem or fire from affecting the production environment.

Datacom Equipment Spare Parts

A spare parts room may require immediate use of parts for equipment repair. Therefore, the temperature of the space should be similar to that of the operating data center. ASHRAE's Thermal Guidelines for Data Processing Environments (ASHRAE 2008) provides allowable temperatures for "product power-off" conditions that include a spare parts room environment.

Storage Spaces

Storage spaces for products such as paper and tapes generally require conditions similar to those in data and communications equipment rooms, because these products absorb moisture from the air and can expand, contract, or change shape more than electronic equipment. Close-tolerance mechanical devices, such as paper feeders or tape drives, are also affected by room relative humidity.



Automatic fire extinguishing and smoke control systems afford the highest degree of protection and must be provided in accordance with the applicable national codes, local codes, and the owner's insurance underwriter. Some newer code references require fire protection below the raised floor.

Exhaust Systems. Exhaust systems may be provided to ventilate datacom equipment rooms in the event of a chemical fire suppression system discharge or as required for smoke purge. Locating the exhaust pickup point below a supply plenum floor promotes quick purging of the space.

There is no need for a purge system when datacom equipment rooms are only protected with sprinklers, unless required by code. Even so, the ability to purge datacom space (including critical infrastructure rooms) can minimize the effects of combustion contaminants on datacom equipment, regardless of the type of suppression system.

For small computer rooms that use DX compressor-based cooling systems, local codes may require alarm and exhaust in the event of loss of refrigerant. The volume of refrigerant within a system should be checked against local code requirements.

Fire/Smoke Dampers. When motorized fire/smoke dampers are installed to seal a clean-agent-protected room, they must be the spring-loaded type, configured to close on loss of power (or upon melting of a fusible link). During start-up, it is important to verify proper operation, including making sure that dampers close fully without binding. A binding damper jeopardizes the integrity of the room seal and could prevent the clean agent from reaching the proper concentration level to extinguish a fire. To increase reliability, mechanical systems should be designed to minimize the number of motorized fire/smoke dampers.

Outdoor Air Smoke Detectors. Outdoor air, and any other air supply, should be equipped with smoke detection that shuts down associated fans and closes associated fire/smoke damper(s). Roof fires, nearby fires, nearby chemical spills, and even generator start-up can introduce smoke and reactive chemicals into the data center through fresh air intakes.

Additional information on fire detection and suppression systems for datacom environments can be found in Design Considerations for Datacom Equipment Centers (ASHRAE 2005b).


                   COMMISSIONING

Commissioning of datacom facilities is critical to their proper functioning and reliable operation. The commissioning process is usually misinterpreted as focusing only on systems testing and as being initiated during the construction process.

However, ASHRAE Guideline 0 defines commissioning as a "process focused upon verifying and documenting that the facility and all of its systems and assemblies are planned, designed, installed, tested, operated, and maintained to meet the owner's project requirements."

It is recommended that commissioning of datacom facilities begin at project inception, so that owner requirements can be better defined, addressed, and verified throughout the entire design and construction process. Five levels of commissioning are described in Design Considerations for Datacom Equipment Centers (ASHRAE 2005b):

·Level 1: factory acceptance tests

·Level 2: field component verification

·Level 3: system construction verification

·Level 4: site acceptance testing

·Level 5: integrated systems testing

Mission-critical facilities typically have more demanding performance requirements for responding to expected and unexpected anomalies without affecting critical operations. Systems usually include redundant components and utility feeds, and excess capacity or backup systems or equipment that, during an emergency, can be automatically or manually activated. These redundant or backup components, systems, or groups of interrelated systems are tested during Level 5 commissioning; this level is what generally sets mission-critical facility commissioning apart from typical office building commissioning.

Further information on commissioning can be found in Chapter 43 and ASHRAE Guidelines 0 and 1.1.


There are many serviceability issues to consider for HVAC equipment serving datacom facilities. Above all, the design should seek to coordinate with datacom facility operations to service and maintain equipment with the least amount of disruption to the day-to-day running of the facility. One approach is to locate all cooling equipment (e.g., CRAC or central station air-handling units) outside of the datacom equipment room in dedicated support rooms. Service and maintenance operations for this equipment are then performed in areas devoted specifically to air-conditioning equipment. System security for these spaces, however, must be addressed.

HVAC equipment serving datacom facilities can also be located on the roof, when the physical arrangement of the facility and space limitations allow.

Availability and Redundancy

It is extremely important to understand the need for uptime of the datacom facilities. Mission-critical datacom facilities, as their name implies, are often required to run 24 h a day, 7 days a week, all year round, and any disruption to that operation typically results in a loss of business continuity or revenue for the end user.

Availability is a percentage value representing the probability that a component or system will remain operational. Availability typically considers both component reliability and system maintenance (Beaty 2004). Values of 99.999% ("five 9s") and higher are commonly referenced in datacom facility design, but are difficult to attain. For individual components, availability is often determined through testing. For assemblies and systems, availability is often the result of a mathematical evaluation based on the availability of individual components and any redundancy or diversity that may be used.
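As an illustration of the kind of mathematical evaluation mentioned above (not a method prescribed by this chapter), a simple binomial model gives the probability that enough independent, identical units remain operational; the 0.99 per-unit availability is hypothetical:

```python
# Illustrative system-availability calculation: probability that at least
# n_required of n_installed identical, independent units are operational,
# given per-unit availability a (binomial model; an assumption, not the
# handbook's method).

from math import comb

def system_availability(a: float, n_installed: int, n_required: int) -> float:
    """P(at least n_required of n_installed units up)."""
    return sum(comb(n_installed, k) * a**k * (1 - a)**(n_installed - k)
               for k in range(n_required, n_installed + 1))

a = 0.99  # hypothetical single-unit availability
print(f"N   (4 of 4 units up): {system_availability(a, 4, 4):.6f}")
print(f"N+1 (4 of 5 units up): {system_availability(a, 5, 4):.6f}")
```

The N+1 case shows how one redundant unit raises system availability well above that of any single unit.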

Availability calculations for HVAC systems are seldom done and are extremely difficult, because published data on components and systems are not readily available. Further research is needed to allow for calculation of system availability as a function of component availability and level of redundancy.

System availability may be so vital that the potential cost of system failure justifies redundant systems, capacity, and/or components, as well as increased diversity. System simplicity and ease of operation should be a constant consideration; a substantial percentage of reported data center failures are related to human activity or error.

The most common method used to attain greater system availability is adding redundant parallel components, to avoid a single component failure causing a system-wide failure. HVAC system redundancy calculations commonly use the terms N+1, N+2, and 2N to indicate how many additional components are to be provided.

N represents the number of pieces of equipment that it takes to satisfy the normal load. Redundant equipment is necessary to compensate for failures and to allow maintenance to be performed without reducing the remaining online capacity below normal.

In the case of datacom facilities using CRAC units, if N+1 redundancy is required, the number of units required to satisfy the normal N cooling load must first be determined. One additional unit would then be provided, to achieve N+1 redundancy.
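The two-step procedure just described can be sketched as follows; the cooling load and unit capacity are hypothetical:

```python
# Minimal sketch of N+1 unit sizing as described above: first find N, the
# number of units needed for the normal cooling load, then install one more.
# The 420 kW load and 100 kW unit capacity are illustrative values.

from math import ceil

def crac_units_for_n_plus_1(cooling_load_kw: float,
                            unit_capacity_kw: float) -> tuple[int, int]:
    """Return (N, units to install) for N+1 redundancy."""
    n = ceil(cooling_load_kw / unit_capacity_kw)  # units needed for normal load
    return n, n + 1                               # one extra unit for redundancy

n, installed = crac_units_for_n_plus_1(cooling_load_kw=420.0,
                                       unit_capacity_kw=100.0)
print(f"N = {n}; install {installed} units for N+1")  # N = 5; install 6 units
```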

In theory, redundancy can be achieved with N+1 (or more) air-handling or CRAC units, but the dynamics of underfloor and overhead flow are such that loss of a specific unit can be critical for a specific area. CFD analysis is often performed to determine the effect of losing specific units in critical-use areas. For large spaces, consider using N+1 for every X number of units, to provide one redundant unit for every set of X units required.

Take care to exercise redundant equipment frequently, to prevent conditions that enhance growth of mold and mildew in filters, insulated unit enclosures, and outside air pathways where spores and food sources for microbial growth may accumulate. Another approach is to keep redundant equipment operational at all times, but to use variable-frequency drives (VFDs) to control fan speed. In this manner, the fans operate at a speed to match loads. If a fan fails, the other fans ramp up to maintain required airflows. No schedule is required for exercising equipment, because all equipment is operational (unless loads are so low that it is not practical to operate all equipment).

Diversity. Systems that use an alternative path for distribution are said to have diversity. In an HVAC system, diversity might refer to an alternative chilled-water piping system. To be of maximum benefit, both the normal and alternative paths must each be able to support the entire normal load. One company developed a tiered classification system to rank the level of diversity and redundancy in a data center design (Turner and Brill 2003).

With dual feeds, it is often possible to perform planned datacom air-moving device infrastructure activity without shutting down critical loads, a concept called concurrently maintainable. Fault-tolerant systems do not lose power or cooling to the datacom equipment when a single component fails.

Measures to Increase Reliability. Practical ways to increase HVAC system reliability for datacom facility design may include any of the examples listed below. A fault tree analysis or other methodology should be used for each facility to examine critical failure paths and necessary design measures to increase system availability.

·Back-up utilities: power generation, second electric service, water supply, etc. Emergency power supplies probably should feed some aspects of HVAC systems as well as datacom equipment to allow for continuous equipment operation within allowable environmental conditions.

·Back-up air-moving equipment: air handlers, fans, computer room units, etc.

·Back-up and/or cross-connected cooling equipment: chillers, pumps, cooling towers, dry coolers, cooling coils, makeup water supply, etc.

·Diverse piping systems: chilled water, condenser water, etc.

·Full or partial back-up of air-moving and/or cooling equipment on emergency power.

·Back-up thermal storage: chilled water, ice, makeup water, etc.

Energy Conservation

Dramatic reductions in energy use can be achieved with conservation strategies. Central-station air-conditioning systems using outside air for free cooling (where appropriate), variable-volume ventilation, and evaporative cooling/humidification strategies offer significant opportunities for reducing energy use, depending on the frequency of favorable outdoor conditions. A dew-point control strategy, often consisting of positive pressurization of the datacom facility and humidity control at the point of outdoor air intake, can eliminate the need for humidity-sensing devices and provide precise humidity control.

A significant amount of wasted energy has been identified in many existing facilities, often because of fighting between adjacent air-conditioning units attempting to maintain tight tolerances. Adopting the somewhat less stringent environmental tolerances found in ASHRAE's (2008) Thermal Guidelines for Data Processing Environments should minimize this historically significant problem. Control strategies such as underfloor air temperature control and additional monitoring points should be considered to identify and avoid fighting, especially where raised-floor spaces may include a mixture of heat densities in the same open area.

Baseline energy consumption of office and telecommunications equipment was estimated by Roth (2002). Case studies of energy consumption in datacom facilities have been inventoried and are available for public review (LBNL 2003). Tschudi et al. (2003) summarized existing research on energy conservation in datacom facilities and provided a roadmap for further research in this area. Areas covered include monitoring and control, electrical systems, HVAC systems (including free cooling), and use of variable-speed compressors in CRAC units. A set of recommendations for high-performance data centers has also recently been issued (RMI 2003).

Power usage effectiveness (PUE) is a metric for characterizing and reporting overall data center infrastructure efficiency, and is defined by the following formula:

PUE = total facility energy / IT equipment energy

When calculating PUE, IT energy consumption should be measured directly at the IT load. At a minimum, it can be measured at the output of the UPS. However, the industry should progressively improve measurement capabilities over time so that measurements of only the server loads become the common practice. Once a methodology for measuring PUE is defined for a particular facility, it is important that it be calculated in the same manner over the facility's life. As such, PUE can be an effective tool in monitoring performance of the facility's infrastructure, but because IT energy for different facilities can be measured at different points, there is little value in using it to compare different facilities. For a dedicated data center, the total energy in the PUE equation includes all energy sources at the point of utility handoff to the data center owner or operator. For a data center in a mixed-use building, the total energy is all energy required to operate the data center, similar to a dedicated data center, and should include cooling, lighting, and support infrastructure for the data center operations.
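A minimal sketch of the PUE calculation, with illustrative annual energy figures and IT energy measured at the UPS output (as the text allows at a minimum):

```python
# Sketch of the PUE formula given above: total facility energy divided by
# IT equipment energy. The kWh values below are hypothetical examples.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness; values closer to 1.0 indicate less
    infrastructure (cooling, lighting, power distribution) overhead."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures for a dedicated data center:
print(pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000))  # 1.8
```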

Economizer Cycles. There are several available options for economizers in datacom facilities, each with benefits and challenges specific to data centers. The following types of economizer categories are useful for discussion and evaluation:

Air side

 - Direct exchange (bringing outdoor air directly into the facility)

 - Indirect exchange (heat exchangers do not introduce outdoor air into the facility)

Water side

 - Direct (condenser water can mix with chilled water; cooling towers, dry, or wet coolers can be used)

 - Indirect (heat exchanger separates condenser water and chilled-water loops; cooling towers, dry, or wet coolers can be used)

 - Dry coolers (glycol cools directly with an economizer coil in the CRAC)

Application of economizers in datacom facilities requires more careful review and consideration than a typical commercial application because of potential damage to data center equipment. In many instances, economizers offer one of the greatest energy savings opportunities in datacom facilities. Careful review of the total cost of ownership (TCO) is encouraged, because it is possible that in select cases the TCO will not be attractive. Items that may affect the financial performance of an economizer include

.Ratchet clauses in the utility rate tariff

.Gas-phase filtration requirements

.Large humidification loads

.Low cost of electricity

.Large space/capital needs for louvers, ductwork, and equipment associated with the economizers

There are three major considerations when reviewing the potential number of hours of economizer operation in data centers: geographic location, return air temperatures, and indoor temperature and humidity requirements. Note that ASHRAE (2008) also allows increased economizer use in some climates, because of the higher recommended inlet air temperature range (18 to 27°C for Class 1 and 2 environments) relative to a typical office supply air temperature of 16°C or below. In terms of potential energy savings, air-side economizers with direct exchange of outdoor air often offer greater potential energy savings than economizers with heat exchangers. Economizer processes using heat exchange have increased inefficiencies because of the required temperature difference across the heat exchanger (heat wheel, cooling towers, or dry coolers). Although direct-exchange air-side economizers may have extra energy savings benefits, they are also accompanied by some potential challenges, such as gaseous and particulate contamination as well as humidification issues. The advantage of the other three types of economizers is that they eliminate concerns over these issues, because outdoor air is not directly introduced into the building. However, beyond airborne contamination and humidity, all types of economizers introduce additional complexities to the HVAC system operation and include potential failure scenarios that must be understood before applying them to datacom facilities, to ensure that the mission is not compromised. When it is determined that economizers are an appropriate fit and/or a requirement for a specific datacom application, it is still necessary to determine which type is the most appropriate. For datacom applications, this often requires a detailed evaluation with many more variables than found in a typical comfort-cooling application. When choosing between air-side economizers with indirect or direct evaporative cooling, the evaluation should include outdoor air quality as well as first cost and energy savings. See Chapter 52 for information on evaporative cooling processes.
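As a rough screening of the geographic-location consideration above, the sketch below counts hours whose outdoor dry-bulb temperature alone would permit direct air-side economizer operation, judged against the upper end of the recommended inlet range (27°C). The hourly temperatures are hypothetical, and humidity limits, which also constrain economizer use, are deliberately ignored:

```python
# Illustrative screening of potential direct air-side economizer hours:
# count hours when outdoor dry-bulb temperature is at or below a maximum
# usable supply temperature (27 degrees C here, the top of the ASHRAE 2008
# Class 1/2 recommended inlet range). Humidity limits are ignored; the
# hourly temperature list stands in for a location's weather data.

def economizer_hours(outdoor_temps_c, max_supply_c=27.0):
    """Hours when outdoor air alone could serve as supply air (sensible only)."""
    return sum(1 for t in outdoor_temps_c if t <= max_supply_c)

hourly = [14.0, 19.5, 26.0, 29.0, 31.5, 22.0]  # hypothetical sample hours
print(economizer_hours(hourly))  # 4 of the 6 sample hours qualify
```

A real evaluation would use full annual weather data and also test dew point and humidity against the facility's envelope, as the text cautions.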



Codes, Standards, and Guidelines

ASHRAE. 2005. The commissioning process.Guideline 0-2005.

ASHRAE. 2007. HVAC technical requirements for the commissioning process. Guideline 1.1-2007.

ASHRAE. 1992. Gravimetric and dust-spot proceduresfor testing air-cleaning devices used in general ventilation for removing particulatematter. Standard 52.1-1992. (Withdrawn)

ASHRAE. 2007. Method of testing general ventilation air-cleaning devices for removal efficiency by particle size. ANSI/ASHRAE Standard 52.2-2007.

ASHRAE. 2010. Ventilation for acceptable indoor air quality. ANSI/ASHRAE Standard 62.1-2010.

ASHRAE. 2007. Method of testing for rating computer and data processing room unitary air conditioners. ANSI/ASHRAE Standard 127-2007.

ASTM. 2009. Guide for heated system surface conditions that produce contact burn injuries. Standard E1055-03 (2009). American Society for Testing and Materials, West Conshohocken, PA.

EIA. 2005. Cabinets, racks, panels, and associated equipment. Standard EIA/ECA-310. Electronic Industries Alliance.

ICC. 2009. International Building Code. International Code Council, Washington, D.C.

IEEE. 2008. Recommended practice for installation design and implementation of vented lead-acid batteries for stationary applications. IEEE Standard 484-2002 (R2008). Institute of Electrical and Electronics Engineers, Piscataway, NJ.

IEEE. 2002. Recommended practice forinstallation design and installation of valve-regulated lead-acid storagebatteries for stationary applications. Standard 1187-2002. Institute of Electricaland Electronics Engineers, Piscataway, NJ.

NFPA. 2011. National Electrical Code. Standard 70. National Fire Protection Association, Quincy, MA.

NFPA. 2009. Standard for electrical safetyin the workplace. Standard 70E. National Fire Protection Association, Quincy, MA.

NFPA. 2009. Recommended practice for the fire protection of telecommunications facilities. Standard 76. National Fire Protection Association, Quincy, MA.

OSHA. 1996. 29 CFR 1910.95: Occupational noise exposure. U.S. Department of Labor, Occupational Safety and Health Administration, Office of Information, Washington, D.C. http://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=STANDARDS&p_id=9735.

Telcordia. 2001. Thermal management in telecommunications central offices. Telcordia Technologies Generic Requirements GR-3028-CORE.

Telcordia. 2006. Network Equipment-Building System (NEBS) requirements: Physical protection. Telcordia Technologies Generic Requirements, Issue 3, GR-63-CORE.

Other Publications

ASHRAE. 2005a. Datacom equipment power trends and cooling applications.

ASHRAE. 2005b. Design considerations for datacom equipment centers.

ASHRAE. 2006. Design considerations for liquid cooling in datacom and telecommunications rooms.

ASHRAE. 2008. Thermal guidelines for data processing environments.

ASHRAE. 2009. Particulate and gaseous contamination in datacom environments.
Beaty, D.L. 2004. Reliability engineering of datacom cooling systems. Symposium, ASHRAE Winter Meeting.

European Council. 2003. Directive 2003/10/EC of the European Parliament and of the Council of 6 February 2003 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (noise). Available from http://osha.europa.eu/en/.
Herrlin, M.K. 1996. Economic benefits of energy savings associated with: (1) Energy-efficient telecommunications equipment; and (2) Appropriate environmental control. Eighteenth International Telecommunications Energy Conference (INTELEC '96), Boston.

LBNL. 2003. http://datacenters.lbl.gov/CaseStudies.html. Lawrence Berkeley National Laboratories.

NIOSH. 1986. Criteria for a recommended standard: Occupational exposure to hot environments (revised criteria 1986). Report 86-113. National Institute for Occupational Safety and Health, Washington, D.C. Available at http://www.cdc.gov/niosh/86-113.html.

RMI. 2003. Energy efficient data centers: A Rocky Mountain Institute design charette. Rocky Mountain Institute, Snowmass, CO.

Roth, K., F. Goldstein, and J. Kleinman. 2002. Energy consumption of office and telecommunications equipment in commercial buildings, vol. 1: Energy consumption baseline. Arthur D. Little.

Tschudi, B., T. Xu, D. Sartor, and J. Stein. 2003. Roadmap for public interest research for high-performance data centers. LBNL Report 53483.

Turner, W.P. and K. Brill. 2003. Industry standard tier classifications define site infrastructure performance. The Uptime Institute, Santa Fe, NM.

