[[File:Intel Team Inside Facebook Data Center.jpg|thumb|right|400px|A team from Intel reviews the inner workings of Facebook's first built-from-scratch data center in 2013.]]
A '''data center''' is a facility used to house computer systems and associated components such as telecommunications and storage systems. A data center generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g. air conditioning and fire suppression), and various security devices. Data centers vary in size<ref name="CCDA640-864">{{cite book |url=http://books.google.com/books?id=cuOoV5u3WCUC&pg=PA130 |title=Official Cert Guide: CCDA 640-864 |author=Bruno, Anthony; Jordan, Steve |publisher=Cisco Press |edition=4th |year=2011 |page=130 |isbn=9780132372145 |accessdate=26 August 2014}}</ref>, with some large data centers capable of using as much electricity as a "medium-size town."<ref name="GlanzData">{{cite web |url=http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html |title=Power, Pollution and the Internet |author=Glanz, James |work=The New York Times |date=22 September 2012 |accessdate=26 August 2014}}</ref>  
The main purpose of many data centers is to run applications that handle the core business and operational data of one or more organizations. Such applications are often composed of multiple components distributed across multiple hosts, with each host running a single component. Common components include databases, file servers, application servers, and middleware. These systems may be developed internally by the organization or bought from enterprise software vendors. However, a data center may also be concerned solely with operations architecture or other services.
Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center, often in conjunction with backup tapes. Backups can be taken from servers locally onto tapes; however, tapes stored on site pose a security threat and are susceptible to fire and flooding. Larger companies may instead send their backups off site to a data center for added security.<ref name="GeerBackup">{{cite web |url=http://www.informationweek.com/4-steps-for-secure-tape-backups/d/d-id/1105506? |title=4 Steps For Secure Tape Backups |author=Geer, David |work=InformationWeek |publisher=UBM Tech |date=25 July 2012 |accessdate=27 August 2014}}</ref> Encrypted backups can be sent over the Internet to another data center where they can be stored securely.


==History==
Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). Also, a single mainframe required a great deal of power and had to be cooled to avoid overheating. Security was important; computers were expensive and were often used for military purposes, so basic design guidelines for controlling access to the computer room were devised.<ref name="BartelsDCE">{{cite web |url=http://www.rackspace.com/blog/datacenter-evolution-1960-to-2000/ |title=[INFOGRAPHIC] Data Center Evolution: 1960 to 2000 |author=Bartels, Angela |work=The Rackspace Blog! & Newsroom |publisher=Rackspace, US Inc |date=31 August 2011 |accessdate=26 August 2014}}</ref>
 


During the boom of the microcomputer industry, especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources.<ref name="MurrayDC81">{{cite web |url=http://books.google.com/books?id=1REkdf3I86oC&pg=PA35 |title=Data Center Problems: You're Not Alone |author=Murray, John P. |work=Computerworld |date=26 October 1981 |accessdate=26 August 2014}}</ref> With the advent of Linux and the subsequent proliferation of freely available Unix-compatible PC operating systems during the 1990s, as well as MS-DOS finally giving way to a multi-tasking capable Windows operating system, personal computers started to replace the older systems found in computer rooms.<ref name="BartelsDCE" /><ref name="BartonACM">{{cite web |url=http://queue.acm.org/detail.cfm?id=945076 |title=From Server Room to Living Room |author=Barton, Jim |work=Queue |publisher=Association for Computing Machinery |date=01 October 2003 |accessdate=26 August 2014}}</ref> These were called "servers," as time sharing operating systems like Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition.<ref name="BlandingDCM">{{cite web |url=http://books.google.com/books?id=XrEwl7VWXXMC&pg=PA75 |chapter=Chapter 7: Reverting to Centralized Data Center Management |title=Enterprise Operations Management Handbook |author=Axelrod, C. Warren; Blanding, Steve (ed.) |publisher=CRC Press |edition=2nd |year=1998 |pages=75–84 |isbn=9781420052169 |accessdate=26 August 2014}}</ref>


The dot-com boom of the late '90s and early '00s brought about significant investment into what would be called Internet data centers (IDCs). As these grew in size, new technologies and practices were designed to better handle the scale and operational requirements of these facilities. These practices eventually migrated toward private data centers and were adopted largely because of their practical results.<ref name="KatzTech">{{cite web |url=http://spectrum.ieee.org/green-tech/buildings/tech-titans-building-boom |title=Tech Titans Building Boom |author=Katz, Randy H |work=IEEE Spectrum |publisher=IEEE |date=01 February 2009 |accessdate=26 August 2014}}</ref><ref name="BartelsDCE" /> As [[cloud computing]] became more prominent in the 2000s, business and government organizations scrutinized data centers to a higher degree in areas such as security, availability, environmental impact, and adherence to standards.<ref name="KatzTech" />


==Standards and design guidelines==
[[File:Datacenter-telecom.jpg|thumb|left|240px|Racks of telecommunications equipment in part of a data center]]
IT operations are a crucial aspect of most organizations' business continuity, as organizations rely on their information systems to run their operations. They thus depend on reliable infrastructure for IT operations in order to minimize any chance of disruption caused by power failure or security breach. That reliable infrastructure is normally built on a sound set of widely accepted standards and design guidelines.


The Telecommunications Industry Association's (TIA's) ''Telecommunications Infrastructure Standard for Data Centers'' ([[ANSI]]/TIA-942) is one such example, specifying "the minimum requirements for the telecommunications infrastructure" of organizations large and small, from large "multi-tenant Internet hosting data centers" to smaller "single-tenant enterprise data centers."<ref name="TIA-942">{{cite web |url=https://global.ihs.com/doc_detail.cfm?&rid=TIA&input_doc_number=TIA-942&item_s_key=00414811&item_key_date=860905&input_doc_number=TIA-942&input_doc_title=#abstract |title=TIA-942: Telecommunications Infrastructure Standard for Data Centers |publisher=Telecommunications Industry Association |date=01 March 2014 |accessdate=26 August 2014}}</ref> Up until early 2014, TIA also ranked data centers from Tier 1, essentially a server room, to Tier 4, which hosts mission-critical computer systems with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods.<ref name="TIA-942" />


The Uptime Institute, "an unbiased, third-party data center research, education, and consulting organization,"<ref name="UIAbout">{{cite web |url=http://uptimeinstitute.com/about-us |title=About Uptime Institute |publisher=451 Group, Inc |accessdate=26 August 2014}}</ref> also has a four-tier standard (described in their document ''Tier Standard: Operational Sustainability'') that describes the availability of data from the hardware at a data center, with the higher tiers offering greater availability.<ref name="UIDocs">{{cite web |url=http://uptimeinstitute.com/publications |title=Uptime Institute Publications |publisher=451 Group, Inc |accessdate=26 August 2014}}</ref> The institute, however, at times voiced discontent with TIA's use of tiers in its standard, and in March 2014 TIA announced it would remove the word "tier" from its ANSI/TIA-942 standard.<ref name="TIAUITiers">{{cite web |url=http://www.cablinginstall.com/articles/2014/03/tia-942-tiers.html |title=TIA to remove the word ‘Tier’ from its 942 Data Center standards |work=Cabling Installation & Maintenance |publisher=PennWell Corporation |date=18 March 2014 |accessdate=26 August 2014}}</ref>
The Uptime Institute, "an unbiased, third-party data center research, education, and consulting organization,"<ref name="UIAbout">{{cite web |url=http://uptimeinstitute.com/about-us |title=About Uptime Institute |publisher=451 Group, Inc |accessdate=26 August 2014}}</ref> also has a four-tier standard (described in their document ''Tier Standard: Operational Sustainability'') that describes the availability of data from the hardware at a data center, with the higher tiers offering greater availability.<ref name="UIDocs">{{cite web |url=http://uptimeinstitute.com/publications |title=Uptime Institute Publications |publisher=451 Group, Inc |accessdate=26 August 2014}}</ref> The institute, however, at times voiced discontent with TIA's use of tiers in its standard, and in March 2014 TIA announced it would remove the word "tier" from its ANSI/TIA-942 standard.<ref name="TIAUITiers">{{cite web |url=http://www.cablinginstall.com/articles/2014/03/tia-942-tiers.html |title=TIA to remove the word ‘Tier’ from its 942 Data Center standards |work=Cabling Installation & Maintenance |publisher=PennWell Corporation |date=18 March 2014 |accessdate=26 August 2014}}</ref>


==Design considerations==
[[File:Rack001.jpg|thumb|right|A typical server rack, commonly seen in [[colocation center|colocation]]]]
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in [[19 inch rack]] cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from [[Rack unit|1U server]]s to large freestanding storage silos which occupy many square feet of floor space. Some equipment such as [[mainframe computer]]s and [[computer storage|storage]] devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use [[intermodal container|shipping container]]s packed with 1,000 or more servers each;<ref>{{cite web|url=http://www.youtube.com/watch?v=zRwPSFpLX8I|title=Google Container Datacenter Tour (video)}}</ref> when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).<ref>{{cite web| title=Walking the talk: Microsoft builds first major container-based data center| url=http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archiveurl=http://web.archive.org/web/20080612193106/http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519| archivedate=2008-06-12| accessdate=2008-09-22}}</ref>
 
Local building codes may govern the minimum ceiling heights.
 
===Design programming===
Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project.<ref>Cherry, Edith. “Architectural Programming: Introduction”, Whole Building Design Guide, Sept. 2, 2009</ref> Other than the architecture of the building itself, there are three elements to design programming for data centers: facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling and electrical systems including power), and technology infrastructure design (cable plant). Each will be influenced by performance assessments and modelling to identify gaps pertaining to the owner's performance expectations for the facility over time.
 
Various vendors who provide data center design services define the steps of data center design slightly differently, but all address the same basic aspects as given below.
 
===Modeling criteria===
Modeling criteria are used to develop future-state scenarios for space, power, cooling, and costs.<ref>Mullins, Robert. “Romonet Offers Predictive Modelling Tool For Data Center Planning”, Network Computing, June 29, 2011 [http://www.networkcomputing.com/data-center/231000669]</ref> The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations.
 
===Design recommendations===
Design recommendations/plans generally follow the modelling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities, overall data center power requirements using an agreed-upon PUE (power usage effectiveness), mechanical cooling capacities, kilowatts per cabinet, raised floor space, and the resiliency level for the facility.
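As a simple illustration of how these planning criteria fit together, the following sketch (a hypothetical back-of-the-envelope calculation with illustrative numbers, not values from any standard) estimates total facility power and cabinet count from an assumed IT load, an agreed-upon PUE, and a planned power density per cabinet.

<syntaxhighlight lang="python">
# Hypothetical sizing sketch for the design recommendation phase.
# All input values are illustrative assumptions, not recommendations.

it_load_kw = 750.0        # planned critical IT load (kW)
target_pue = 1.5          # agreed-upon power usage effectiveness
kw_per_cabinet = 5.0      # planned power density per cabinet (kW)

total_facility_power_kw = it_load_kw * target_pue   # IT load plus cooling and other overhead
cabinets_needed = it_load_kw / kw_per_cabinet        # cabinets at the planned density

print(f"Total facility power: {total_facility_power_kw:.0f} kW")
print(f"Cabinets required:    {cabinets_needed:.0f}")
</syntaxhighlight>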
 
===Conceptual design===
Conceptual designs embody the design recommendations or plans and should take into account “what-if” scenarios to ensure all operational outcomes are met in order to future-proof the facility. Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability. Future-proofing will also include expansion capabilities, often provided in modern data centers through modularity.
 
===Detail design===
Detail design is undertaken once the appropriate conceptual design is determined, typically including a proof of concept. The detail design phase should include the development of facility schematics and construction documents, as well as schematics of the technology infrastructure, detailed IT infrastructure design, and IT infrastructure documentation.
 
===Mechanical engineering infrastructure design===
[[File:CRAC Cabinets 2.jpg|thumb|CRAC Air Handler]]
Mechanical engineering infrastructure design addresses mechanical systems involved in maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.<ref name="nxtbook.com">Jew, Jonathan. “BICSI Data Center Standard: A Resource for Today’s Data Center Operators and Designers,” BICSI News Magazine, May/June 2010, page 28. [http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26]</ref>
This stage of the design process should be aimed at saving space and costs, while ensuring business and reliability objectives are met as well as achieving PUE and green requirements.<ref>Data Center Energy Management: Best Practices Checklist: Mechanical, Lawrence Berkeley National Laboratory [http://hightech.lbl.gov/dctraining/strategies/mam.html]</ref> Modern designs include modularizing and scaling IT loads, and making sure capital spending on the building construction is optimized.
 
===Electrical engineering infrastructure design===
Electrical engineering infrastructure design is focused on designing electrical configurations that accommodate various reliability requirements and data center sizes. Aspects may include utility service planning; distribution, switching, and bypass from power sources; uninterruptible power supply (UPS) systems; and more.<ref name="nxtbook.com"/>
 
These designs should dovetail with energy standards and best practices while also meeting business objectives. Electrical configurations should be optimized and operationally compatible with the data center user's capabilities. Modern electrical design is modular and scalable,<ref>Clark, Jeff. “Hedging Your Data Center Power”, The Data Center Journal, Oct. 5, 2011. [http://www.datacenterjournal.com/design/hedging-your-data-center-power/]</ref> and is available for low- and medium-voltage requirements as well as DC (direct current).
 
===Technology infrastructure design===
[[File:Under Floor Cable Runs Tee.jpg|thumb|Under Floor Cable Runs]]
Technology infrastructure design addresses the telecommunications cabling systems that run throughout data centers. There are cabling systems for all data center environments, including horizontal cabling, voice, modem, and facsimile telecommunications services, premises switching equipment, computer and telecommunications management connections, keyboard/video/mouse connections and data communications.<ref>Jew, Jonathan. “BICSI Data Center Standard: A Resource for Today’s Data Center Operators and Designers,” BICSI News Magazine, May/June 2010, page 30. [http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26]</ref> Wide area, local area, and storage area networks should link with other building signaling systems (e.g. fire, security, power, HVAC, EMS).
 
===Availability expectations===
The higher the availability needs of a data center, the higher the capital and operational costs of building and managing it. Business needs should dictate the level of availability required, which should be evaluated based on characterization of the criticality of IT systems and on estimated cost analyses from modeled scenarios. In other words, how can an appropriate level of availability best be met by design criteria to avoid financial and operational risks as a result of downtime?
If the estimated cost of downtime within a specified time unit exceeds the amortized capital costs and operational expenses, a higher level of availability should be factored into the data center design. If the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability should be factored into the design.<ref>Clark, Jeffrey. “The Price of Data Center Availability—How much availability do you need?”, Oct. 12, 2011, The Data Center Journal [http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability]</ref>
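This trade-off can be expressed as simple arithmetic. The sketch below uses entirely hypothetical figures to compare the expected annual cost of downtime at two availability levels against the annualized cost of building to the higher level.

<syntaxhighlight lang="python">
# Hypothetical comparison of downtime cost vs. the cost of higher availability.
# All figures are illustrative only.

hours_per_year = 8760
cost_of_downtime_per_hour = 25_000.0   # estimated business impact ($/hour)

def expected_downtime_cost(availability: float) -> float:
    """Expected annual downtime cost at a given availability (e.g., 0.999)."""
    downtime_hours = hours_per_year * (1.0 - availability)
    return downtime_hours * cost_of_downtime_per_hour

# Annualized capital and operational premium to move from 99.9% to 99.99% availability.
upgrade_cost_per_year = 150_000.0

savings = expected_downtime_cost(0.999) - expected_downtime_cost(0.9999)
if savings > upgrade_cost_per_year:
    print("Higher availability is justified: avoided downtime exceeds the upgrade cost.")
else:
    print("Lower availability is acceptable: the upgrade costs more than the downtime it avoids.")
</syntaxhighlight>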
 
===Site selection===
Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security, and other factors to be taken into consideration for data center design. Location also affects data center design because climatic conditions dictate what cooling technologies should be deployed, which in turn impacts uptime and the costs associated with cooling.<ref>Tucci, Linda. “Five tips on selecting a data center location”, May 7, 2008, SearchCIO.com [http://searchcio.techtarget.com/news/1312614/Five-tips-on-selecting-a-data-center-location]</ref> For example, the topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate.
 
===Modularity and flexibility===
[[File:Cabinet Asile.jpg|thumb|Cabinet Aisle in a Data Center]]
 
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.<ref>Niles, Susan. “Standardization and Modularity in Data Center Physical Infrastructure,” 2011, Schneider Electric, page 4. [http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf]</ref>
 
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.<ref>Pitchaikani, Bala. “Strategies for the Containerized Data Center,” DataCenterKnowledge.com, Sept. 8, 2011. [http://www.datacenterknowledge.com/archives/2011/09/08/strategies-for-the-containerized-data-center/]</ref> But it can also be described as a design style in which components of the data center are prefabricated and standardized so that they can be constructed, moved or added to quickly as needs change.<ref>Niccolai, James. “HP says prefab data center cuts costs in half,” InfoWorld, July 27, 2010. [http://www.infoworld.com/d/green-it/hp-says-prefab-data-center-cuts-costs-in-half-837?page=0,0]</ref>
 
===Environmental control===
The physical environment of a data center is rigorously controlled.
[[Air conditioning]] is used to control the temperature and humidity in the data center. [[ASHRAE]]'s "Thermal Guidelines for Data Processing Environments"<ref>{{cite book|title=Thermal Guidelines for Data Processing Environments|year=2012|publisher=American Society of Heating, Refrigerating and Air-Conditioning Engineers|isbn=978-1936504-33-6|author=ASHRAE Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment|edition=3}}</ref> recommends a temperature range of {{convert|18|–|27|C|F}}, a dew point range of {{convert|5|–|15|C|F}}, and a maximum relative humidity of 60% for data center environments.<ref>{{cite web|url=http://www.serverscheck.com/blog/2008/07/why-monitor-humidity-in-computer-rooms.html|title=ServersCheck's Blog on Why Humidity Monitoring|date=July 1, 2008}}</ref> The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control [[humidity]] by cooling the return space air below the [[dew point]]. If humidity is too high, water may begin to [[condensation|condense]] on internal components; if the air is too dry, ancillary humidification systems may add water vapor, since too little humidity can lead to [[electrostatics|static electricity]] discharge problems that damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
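As an illustration of the recommended ranges cited above, the following sketch (a hypothetical monitoring check, not an ASHRAE tool) flags sensor readings that fall outside the 18–27 °C temperature range, the 5–15 °C dew point range, or the 60% relative humidity ceiling.

<syntaxhighlight lang="python">
# Hypothetical check of environmental sensor readings against the ASHRAE
# recommended ranges cited above (18-27 C, 5-15 C dew point, <= 60% RH).

def check_reading(temp_c: float, dew_point_c: float, rel_humidity_pct: float) -> list:
    """Return a list of out-of-range conditions for one sensor reading."""
    problems = []
    if not 18.0 <= temp_c <= 27.0:
        problems.append(f"temperature {temp_c} C outside 18-27 C")
    if not 5.0 <= dew_point_c <= 15.0:
        problems.append(f"dew point {dew_point_c} C outside 5-15 C")
    if rel_humidity_pct > 60.0:
        problems.append(f"relative humidity {rel_humidity_pct}% above 60%")
    return problems

# Example readings (illustrative values only).
for reading in [(22.5, 10.0, 45.0), (29.0, 16.0, 65.0)]:
    issues = check_reading(*reading)
    print(reading, "OK" if not issues else "; ".join(issues))
</syntaxhighlight>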
 
Modern data centers try to use economizer cooling, in which outside air is used to keep the data center cool. At least one data center (located in [[Upstate New York]]) was designed to cool its servers using outside air during the winter, eliminating the need for chillers and air conditioners and creating potential energy savings in the millions of dollars.<ref>{{cite news| url=http://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914 | work=Reuters | title=tw telecom and NYSERDA Announce Co-location Expansion | date=2009-09-14}}</ref>
 
Telcordia [http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-2930& GR-2930, ''NEBS: Raised Floor Generic Requirements for Network and Data Centers''] presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.
 
There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringerless, stringered, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.
 
*'''''Stringerless raised floors''''' - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
 
*'''''Stringered raised floors''''' - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
 
*'''''Structural platforms''''' - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.
 
Data centers typically have [[raised floor]]ing made up of {{convert|60|cm|ft|abbr=on|0}} removable square tiles. The trend is towards an {{convert|80|-|100|cm|in|abbr=on}} void to allow for better and more uniform air distribution. These provide a [[plenum space|plenum]] for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.
 
====Metal whiskers====
Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with [[zinc whiskers]] in the past, and such whiskers are likely still present in many data centers. Whiskers arise when microscopic metallic filaments form on metals such as zinc or tin that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or installation of cable can dislodge the whiskers, which enter the airflow and may short circuit server components or power supplies, sometimes through a high-current metal vapor [[plasma arc]]. This phenomenon is not unique to data centers, and has also caused catastrophic failures of satellites and military hardware.<ref>{{cite web|title=NASA - metal whiskers research|url=http://nepp.nasa.gov/whisker/other_whisker/index.htm|publisher=NASA|accessdate=1 August 2011}}</ref>
 
===Electrical power===
 
[[File:Datacenter Backup Batteries.jpg|thumb|right|A bank of batteries in a large data center, used to provide power until diesel generators can start]]
 
Backup power consists of one or more [[uninterruptible power supply|uninterruptible power supplies]], battery banks, and/or [[Diesel generator|diesel]] / [[gas turbine]] generators.<ref>Detailed explanation of UPS topologies {{cite web|url=http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf|format=PDF|title=EVALUATING THE ECONOMIC IMPACT OF UPS TECHNOLOGY}}</ref>
 
To prevent [[single point of failure|single points of failure]], all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve [[N+1 redundancy]] in the systems. [[Transfer switch#Static transfer switch|Static transfer switches]] are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
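To make the N+1 concept concrete, the following sketch (with hypothetical module sizes and loads) computes how many UPS modules are needed to carry a critical load plus one redundant module.

<syntaxhighlight lang="python">
import math

# Hypothetical N+1 UPS sizing: enough modules to carry the critical load ("N"),
# plus one additional module so any single module can fail or be serviced.
# The load and module capacity below are illustrative assumptions.

critical_load_kw = 400.0      # critical IT load to be protected (kW)
ups_module_kw = 125.0         # capacity of each UPS module (kW)

n = math.ceil(critical_load_kw / ups_module_kw)   # modules needed to carry the load
modules_for_n_plus_1 = n + 1                      # one extra module for redundancy

print(f"N = {n} modules carry the load; install {modules_for_n_plus_1} for N+1 redundancy")
</syntaxhighlight>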
 
===Low-voltage cable routing===
Data cabling is typically routed through overhead [[cable tray]]s in modern data centers. However, some vendors still recommend cabling under the raised floor for security reasons, which also leaves room to add cooling systems above the racks should that enhancement become necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a [[hot aisle]] arrangement to maximize airflow efficiency.
 
===Fire protection===
[[File:FM200 Three.jpg|thumb|FM200 Fire Suppression Tanks]]
Data centers feature [[fire protection]] systems, including [[passive fire protection|passive]] and [[active fire protection|active]] design elements, as well as the implementation of [[fire prevention]] programs in operations. [[Smoke detectors]] are usually installed to provide early warning of a fire at its incipient stage. This allows investigation, interruption of power, and manual fire suppression using hand-held fire extinguishers before the fire grows to a large size. An [[active fire protection]] system, such as a [[fire sprinkler system]] or a [[clean agent]] gaseous fire suppression system, is often provided to control a full-scale fire if it develops. High-sensitivity smoke detectors, such as [[aspirating smoke detector]]s, allow clean agent gaseous fire suppression systems to activate earlier than fire sprinklers would.
Sprinklers primarily provide structure protection and building life safety, while clean agents provide business continuity and asset protection; because clean agents use no water, they cause no collateral damage and require no cleanup.
Passive fire protection elements include the installation of [[Firewall (construction)|fire walls]] around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems. Fire wall penetrations into the server room, such as cable penetrations, coolant line penetrations and air ducts, must be provided with fire rated penetration assemblies, such as [[fire stop]]ping.
 
===Security===
Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including [[bollard]]s and [[mantrap (access control)|mantrap]]s.<ref>{{cite web|author=Sarah D. Scalet |url=http://www.csoonline.com/article/220665 |title=19 Ways to Build Physical Security Into a Data Center |publisher=Csoonline.com |date=2005-11-01 |accessdate=2013-08-30}}</ref> [[Video camera]] surveillance and permanent [[security guard]]s are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition [[mantrap]]s is becoming commonplace.
 
==Energy use==
[[File:Google Data Center, The Dalles.jpg|thumb|right|Google Data Center, [[The Dalles, Oregon]]]]
{{main|IT energy management}}
 
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html|title=Data Center Energy Consumption Trends|publisher=U.S. Department of Energy|accessdate=2010-06-10}}</ref> For higher power density facilities, electricity costs are a dominant [[operating expense]] and account for over 10% of the [[total cost of ownership]] (TCO) of a data center.<ref>J Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange. [http://www.intel.com/assets/pdf/general/servertrendsreleasecomplete-v25.pdf Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers] Released on the web August 17th, 2009.</ref> By 2012, the cost of power for a data center was expected to exceed the cost of the original capital investment.<ref>{{cite web|url=http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf|title=Quick Start Guide to Increase Data Center Energy Efficiency|publisher=U.S. Department of Energy|accessdate=2010-06-10}}</ref>
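A rough sense of why electricity dominates operating expense can be gained from a simple annual cost estimate, as in the sketch below (the facility power draw and electricity price are illustrative assumptions).

<syntaxhighlight lang="python">
# Hypothetical annual electricity cost for a data center facility.
# Inputs are illustrative assumptions only.

facility_power_kw = 3_000.0   # average total facility draw, IT plus overhead (kW)
price_per_kwh = 0.10          # electricity price ($/kWh)
hours_per_year = 8760

annual_kwh = facility_power_kw * hours_per_year
annual_cost = annual_kwh * price_per_kwh

print(f"Annual energy use: {annual_kwh:,.0f} kWh")
print(f"Annual cost:       ${annual_cost:,.0f}")
</syntaxhighlight>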
 
===Greenhouse gas emissions===
 
The diesel generators that provide backup power to data centers sometimes are a significant source of air pollution in the form of diesel exhaust.<ref name=NYT92312>{{cite news|title=Data Barns in a Farm Town, Gobbling Power and Flexing Muscle|url=http://www.nytimes.com/2012/09/24/technology/data-centers-in-rural-washington-state-gobble-power.html|accessdate=September 25, 2012|newspaper=The New York Times|date=September 23, 2012|author=James Glanz}}</ref> Capabilities exist to install modern retrofit devices on older diesel generators, including those found in data centers, to reduce emissions.<ref>{{Cite web|url=http://www.meca.org/galleries/files/Stationary_Engine_Diesel_Retrofit_Case_Studies_1109final.pdf|title=CASE STUDIES OF STATIONARY RECIPROCATING DIESEL ENGINE RETROFIT PROJECTS|date=November 2009|work=Manufacturers of Emission Controls Association|accessdate=August 5, 2014}}</ref>
 
Additionally, engines manufactured in the U.S. beginning in 2014 must meet the strict emissions reduction requirements of the U.S. Environmental Protection Agency's [http://tier4answers.com/tier-4-answers "Tier 4"] regulations for off-road uses, including diesel generators. These regulations require near-zero levels of emissions.
In 2007, the entire [[information and communication technologies]] (ICT) sector was estimated to be responsible for roughly 2% of global [[Greenhouse gas|carbon emissions]], with data centers accounting for 14% of the ICT footprint.<ref name="smart1">{{cite web|url=http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf|title=Smart 2020: Enabling the low carbon economy in the information age|publisher=The Climate Group for the Global e-Sustainability Initiative|accessdate=2008-05-11}}</ref> The US EPA estimated that servers and data centers were responsible for up to 1.5% of total US electricity consumption,<ref name="energystar1">{{cite web|url=http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf|title=Report to Congress on Server and Data Center Energy Efficiency|publisher=U.S. Environmental Protection Agency ENERGY STAR Program}}</ref> or roughly 0.5% of US GHG emissions,<ref>A calculation of data center electricity burden cited in the [http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf Report to Congress on Server and Data Center Energy Efficiency] and electricity generation contributions to green house gas emissions published by the EPA in the [http://epa.gov/climatechange/emissions/downloads10/US-GHG-Inventory-2010_ExecutiveSummary.pdf Greenhouse Gas Emissions Inventory Report]. Retrieved 2010-06-08.</ref> for 2007. Given a business-as-usual scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 2020.<ref name="smart1"/>
 
Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and plentiful renewable electricity is available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Canada,<ref>[http://www.theglobeandmail.com/report-on-business/canada-called-prime-real-estate-for-massive-data-computers/article2071677/ Canada Called Prime Real Estate for Massive Data Computers - Globe & Mail] Retrieved June 29, 2011.</ref> Finland,<ref>[http://www.fincloud.freehostingcloud.com/ Finland - First Choice for Siting Your Cloud Computing Data Center.]. Retrieved 4 August 2010.</ref> Sweden,<ref>[http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN Stockholm sets sights on data center customers.] Accessed 4 August 2010.</ref> and Switzerland<ref>[http://www.greenbiz.com/news/2010/06/30/swiss-carbon-neutral-servers-hit-cloud Swiss Carbon-Neutral Servers Hit the Cloud.]. Retrieved 4 August 2010.</ref> are trying to attract cloud computing data centers.
 
According to an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.<ref>{{Cite news |author=Katrice R. Jalbuena |title=Green business news. |publisher=EcoSeed |date=October 15, 2010 |url=http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy |accessdate=November 11, 2010}}</ref>
 
===Energy efficiency===
The most commonly used metric to determine the energy efficiency of a data center is [[power usage effectiveness]], or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.
 
Power used by support equipment, often referred to as overhead load, mainly consists of cooling systems, power delivery, and other facility infrastructure like lighting. The average data center in the US has a PUE of 2.0,<ref name="energystar1"/> meaning that the facility uses one watt of overhead power for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.<ref>{{cite web|url=https://microsite.accenture.com/svlgreport/Documents/pdf/SVLG_Report.pdf|title=Data Center Energy Forecast|publisher=Silicon Valley Leadership Group}}</ref> Some large data center operators like [[Microsoft]] and [[Yahoo!]] have published projections of PUE for facilities in development; [[Google]] publishes quarterly actual efficiency performance from data centers in operation.<ref>{{cite web|url=http://www.datacenterknowledge.com/archives/2009/10/15/google-efficiency-update-pue-of-1-22/|title=Google Efficiency Update|publisher=Data Center Knowledge|accessdate=2010-06-08}}</ref>
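The PUE ratio described above can be computed directly from metered power figures, as in this minimal sketch (the meter values are hypothetical).

<syntaxhighlight lang="python">
# Minimal PUE calculation from the definition above: total facility power
# divided by IT equipment power. Meter values are illustrative only.

total_facility_power_kw = 1_800.0   # everything entering the data center
it_equipment_power_kw = 1_000.0     # power delivered to IT equipment

pue = total_facility_power_kw / it_equipment_power_kw
overhead_kw = total_facility_power_kw - it_equipment_power_kw

print(f"PUE = {pue:.2f} ({overhead_kw:.0f} kW of overhead per {it_equipment_power_kw:.0f} kW of IT load)")
</syntaxhighlight>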
 
The [[U.S. Environmental Protection Agency]] has an [[Energy Star]] rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.<ref>Commentary on introduction of Energy Star for Data Centers {{cite web|title=Introducing EPA ENERGY STAR for Data Centers|url=http://www.emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx|format=Web site|publisher=Jack Pouchet|accessdate=2010-09-27|date=2010-09-27}}</ref>
 
The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.<ref>{{cite web|url=http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm |title=EU Code of Conduct for Data Centres |publisher=Re.jrc.ec.europa.eu |date= |accessdate=2013-08-30}}</ref>
 
===Energy use analysis===
Often, the first step toward curbing energy use in a data center is to understand how that energy is being used. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just the energy used by IT equipment itself, but also that used by the data center facility equipment, such as chillers and fans.<ref>Sweeney, Jim. "Reducing Data Center Power and Energy Consumption: Saving Money and 'Going Green,' " GTSI Solutions, pages 2–3. [http://www.gtsi.com/cms/documents/white-papers/green-it.pdf]</ref>
 
===Power and cooling analysis===
Power is the largest recurring cost to the user of a data center.<ref name=DRJ_Choosing>{{Citation
| title = Choosing a Data Center
| url = http://www.atlantic.net/images/pdf/choosing_a_data_center.pdf
| publisher = Disaster Recovery Journal
| year = 2009
| author = Cosmano, Joe
| accessdate = 2012-07-21
}}</ref>  A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.<ref>Needle, David. “HP's Green Data Center Portfolio Keeps Growing,” InternetNews, July 25, 2007. [http://www.internetnews.com/xSP/article.php/3690651/HPs+Green+Data+Center+Portfolio+Keeps+Growing.htm]</ref> A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center.  Power cooling density is a measure of how much square footage the center can cool at maximum capacity.<ref name=Inc_Howtochoose>{{Citation
| title = How to Choose a Data Center
| url = http://www.inc.com/guides/2010/11/how-to-choose-a-data-center_pagen_2.html
| year = 2010
| author = Inc. staff
| accessdate = 2012-07-21
}}</ref>
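As a simple illustration of the power cooling density figure mentioned above, the sketch below (using hypothetical cooling capacity and floor area) expresses maximum cooling capacity per unit of floor space and, equivalently, the floor area that can be cooled at a planned density.

<syntaxhighlight lang="python">
# Hypothetical power cooling density figures for a thermal assessment.
# One common way to express the figure is watts of heat load removable per
# square foot of raised floor; all values here are illustrative assumptions.

cooling_capacity_kw = 600.0     # total cooling capacity at maximum (kW)
raised_floor_sq_ft = 4_000.0    # raised floor area (square feet)

watts_per_sq_ft = (cooling_capacity_kw * 1000.0) / raised_floor_sq_ft
print(f"Power cooling density: {watts_per_sq_ft:.0f} W per square foot")

# Equivalently, the floor area that can be cooled at a planned rack density:
planned_density_w_per_sq_ft = 100.0
coolable_sq_ft = (cooling_capacity_kw * 1000.0) / planned_density_w_per_sq_ft
print(f"Floor area coolable at {planned_density_w_per_sq_ft:.0f} W/sq ft: {coolable_sq_ft:,.0f} sq ft")
</syntaxhighlight>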
 
===Energy efficiency analysis===
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.<ref>Siranosian, Kathryn. “HP Shows Companies How to Integrate Energy Management and Carbon Reduction,” TriplePundit, April 5, 2011. [http://www.triplepundit.com/2011/04/hp-launches-program-companies-integrate-manage-energy-carbon-reduction-strategies/]</ref>
 
===Computational fluid dynamics (CFD) analysis===
{{main|Computational fluid dynamics}}
 
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling.<ref>Bullock, Michael. “Computation Fluid Dynamics - Hot topic at Data Center World,” Transitional Data Services, March 18, 2010. [http://blog.transitionaldata.com/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software]</ref> By predicting the effects of these environmental conditions, CFD analysis can be used to predict the impact of high-density racks mixed with low-density racks<ref>Bouley, Dennis (editor). “Impact of Virtualization on Data Center Physical Infrastructure,” The Green grid, 2010. [http://www.thegreengrid.org/~/media/WhitePapers/White_Paper_27_Impact_of_Virtualization_Data_On_Center_Physical_Infrastructure_020210.pdf?lang=en]</ref> and the onward impact on cooling resources, as well as the effects of poor infrastructure management practices and of AC failure or AC shutdown for scheduled maintenance.
 
===Thermal zone mapping===
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.<ref>Fontecchio, Mark. “HP Thermal Zone Mapping plots data center hot spots,” SearchDataCenter, July 25, 2007. [http://searchdatacenter.techtarget.com/news/1265634/HP-Thermal-Zone-Mapping-plots-data-center-hot-spots]</ref>
 
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
 
===Green data centers===
Data centers use a lot of power for two main purposes: running the actual equipment and cooling that equipment. The first category is addressed by designing computers and storage systems that are increasingly power-efficient. To bring down cooling costs, data center designers try to use natural ways to cool the equipment. Many data centers have to be located near population centers so that staff can manage the equipment, but there are also many circumstances in which a data center can be miles away from its users and does not need much local management. Examples of this are the "mass" data centers run by companies such as Google and Facebook: these facilities are built around many standardized servers and storage arrays, and the actual users of the systems are located all around the world. After the initial build of a data center, not much staff is required to keep it running, so data centers that provide mass storage or computing power do not need to be near population centers. Data centers in arctic locations, where outside air provides all of the cooling, are becoming more popular, as cooling and electricity are the two main variable cost components.<ref>Gizmag [http://www.gizmag.com/fjord-cooled-data-center/20938/ Fjord-cooled DC in Norway claims to be greenest], 23 December 2011. Visited: 1 April 2012</ref>
 
==Network infrastructure==
[[File:Paris servers DSC00190.jpg|thumb|left|An example of "rack mounted" servers]]
Communications in data centers today are most often based on [[computer network|networks]] running the [[Internet protocol|IP]] [[protocol (computing)|protocol]] suite. Data centers contain a set of [[Router (computing)|router]]s and [[Network switch|switch]]es that transport traffic between the servers and to the outside world. [[Redundancy (engineering)|Redundancy]] of the Internet connection is often provided by using two or more upstream service providers (see [[Multihoming]]).
 
Some of the servers at the data center are used for running the basic [[Internet]] and [[intranet]] services needed by internal users in the organization, e.g., [[e-mail]] servers, [[proxy server]]s, and [[Domain Name System|DNS]] servers.
 
Network security elements are also usually deployed: [[firewall (networking)|firewalls]], [[VPN]] [[Gateway (computer networking)|gateways]], [[intrusion detection system]]s, etc. Monitoring systems for the network and some of the applications are also common, as are additional off-site monitoring systems in case of a failure of communications inside the data center.


==Data center infrastructure management==
[[Data center infrastructure management]] (DCIM) is the integration of [[information technology]] (IT) and facility management disciplines to centralize monitoring, management, and intelligent capacity planning of a data center's critical systems. Achieved through the implementation of specialized software, hardware, and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.

Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk to increase the availability of critical IT systems. DCIM products can also be used to identify interdependencies between facility and IT infrastructures to alert the facility manager to gaps in system redundancy, and to provide dynamic, holistic benchmarks on power consumption and efficiency to measure the effectiveness of "green IT" initiatives.

What follows is a table of additional considerations made during the design and implementation phases.
[[File:Zinc whiskers.jpg|thumb|right|220px|Data center designers and operators must be cognizant of metal whisker growth on galvanized metal surfaces.]]
{| class="wikitable collapsible" style="padding:5px;" border="1" cellpadding="5" cellspacing="0"
|-
  ! colspan="2"| Data center design considerations
|-
  ! style="color:brown; background-color:#ffffee; padding:5px;"| Consideration
  ! style="color:brown; background-color:#ffffee; padding:5px;"| Description
|-
  | style="padding:5px;" |'''Design programming'''
  | style="background-color:white; padding:5px;" |Architecture of the building aside, three additional considerations to design programming data centers are facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling and electrical systems including power), and technology infrastructure design (cable plant). Each is influenced by performance assessments and modelling to identify gaps pertaining to the operator's performance desires over time.<ref name="CCDA640-864" />
|-
  | style="padding:5px;" |'''Design modeling and recommendation'''
  | style="background-color:white; padding:5px;" |Modeling criteria are used to develop future-state scenarios for space, power, cooling, and costs.<ref name="MullinsRomo">{{cite web |url=http://www.networkcomputing.com/data-centers/romonet-offers-predictive-modeling-tool-for-data-center-planning/d/d-id/1232857? |title=Romonet Offers Predictive Modelling Tool For Data Center Planning |author=Mullins, Robert |work=Information Week Network Computing |publisher=UBM Tech |date=29 June 2011 |accessdate=26 August 2014}}</ref> Based on previous design reviews and the modeling results, recommendations on power, cooling capacity, and resiliency level can be made. Additionally, availability expectations may also be reviewed.
|-
  | style="padding:5px;" |'''Conceptual and detail design'''
  | style="background-color:white; padding:5px;" |A conceptual design combines the results of design recommendation with "what if" scenarios to ensure all operational outcomes are met in order to future-proof the facility, including the addition of modular expansion components that can be constructed, moved, or added to quickly as needs change. This process yields a proof of concept, which is then incorporated into a detail design that focuses on creating the facility schematics, construction documents, and IT infrastructure design and documentation.<ref name="CCDA640-864" />
|-
  | style="padding:5px;" |'''Mechanical and electrical engineering infrastructure design'''
  | style="background-color:white; padding:5px;" |Mechanical engineering infrastructure design addresses mechanical systems involved in maintaining the interior environment of a data center, while electrical engineering infrastructure design focuses on designing electrical configurations that accommodate the data center's various services and reliability requirements.<ref name="JewBISCI">{{cite web |url=http://www.nxtbook.com/nxtbooks/bicsi/news_20100506/#/26 |title=BICSI Data Center Standard: A Resource for Today’s Data Center Operators and Designers |author=Jew, Jonathan |work=BICSI News Magazine |publisher=BISCI |date=May/June 2010 |pages=26–30 |accessdate=26 August 2014}}</ref>


Both phases involve recognizing the need to save space, energy, and costs. Availability expectations are fully considered as part of these savings. For example, if the estimated cost of downtime within a specified time unit exceeds the amortized capital costs and operational expenses, a higher level of availability should be factored into the design. If, however, the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability will likely get factored into the design.<ref name="ClarkPrice">{{cite web |url=http://www.datacenterjournal.com/design/the-price-of-data-center-availability/ |title=The Price of Data Center Availability |author=Clark, Jeffrey |work=The Data Center Journal |date=12 October 2011 |accessdate=26 August 2014}}</ref>
|-
  | style="padding:5px;" |'''Technology infrastructure design'''
  | style="background-color:white; padding:5px;" |Numerous cabling systems for data center environments exist, including horizontal cabling; voice, modem, and facsimile telecommunications services; premises switching equipment; computer and telecommunications management connections; monitoring station connections; and data communications. Technology infrastructure design addresses all of these systems.<ref name="CCDA640-864" />
|-
  | style="padding:5px;" |'''Site selection'''
  | style="background-color:white; padding:5px;" |When choosing a location for a data center, aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services must be considered. Another consideration is climatic conditions, which may dictate what cooling technologies should be deployed.<ref name="TucciFive">{{cite web |url=http://searchcio.techtarget.com/news/1312614/Five-tips-on-selecting-a-data-center-location |title=Five tips on selecting a data center location |author=Tucci, Linda |work=SearchCIO |publisher=TechTarget |date=07 May 2008 |accessdate=26 August 2014}}</ref>
|-
  | style="padding:5px;" |'''Power management'''
  | style="background-color:white; padding:5px;" |The power supply to a data center must be uninterrupted. To achieve this, designers will use a combination of uninterruptible power supplies, battery banks, and/or fueled turbine generators. Static transfer switches are typically used to ensure instantaneous switchover from one supply to the other in the event of a power failure.<ref name="EmersonUPS">{{cite web |url=http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf |archiveurl=https://web.archive.org/web/20101122074817/http://emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf |format=PDF |title=Evaluating the Economic Impact of UPS Technology |publisher=Liebert Corporation |date=2004 |archivedate=22 November 2010 |accessdate=27 August 2014}}</ref>
|-
  | style="padding:5px;" |'''Environmental control'''
  | style="background-color:white; padding:5px;" |As heat is a byproduct of supplying power to electrical components, appropriate cooling mechanisms such as air conditioning or economizer cooling (which uses outside air to cool components) must be used. In addition to cooling the air, humidity levels must be controlled to prevent both condensation from too much humidity and static electricity discharge from too little humidity.<ref name="ASHRAETherm">{{cite book |url=http://books.google.com/books?id=AgWcMQEACAAJ |title=Thermal Guidelines for Data Processing Environments |author=ASHRAE Technical Committee 9.9 |publisher=American Society of Heating, Refrigerating and Air-Conditioning Engineers |pages=136 |edition=3rd |year=2012 |isbn=9781936504336 |accessdate=27 August 2014}}</ref> Raised flooring is often used to provide more uniform air distribution and circulation as well as space for cabling.<ref name="NEBSFloor">{{cite web |url=http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-2930& GR-2930 |title=NEBS: Raised Floor Generic Requirements for Network and Data Centers |publisher=Telcordia Technologies, Inc |date=July 2012 |accessdate=27 August 2014}}</ref> However, zinc whisker growth on raised floor tiles and other galvanized metal surfaces must also be addressed; whiskers dislodged during maintenance and installation can short-circuit electrical components.<ref name="NASAWhisk">{{cite web |url=http://nepp.nasa.gov/whisker/other_whisker/index.htm |title=Other Metal Whiskers |work=Tin Whisker (and Other Metal Whisker) Homepage |publisher=NASA |date=24 January 2011 |accessdate=27 August 2014}}</ref>
|-
  | style="padding:5px;" |'''Fire protection'''
  | style="background-color:white; padding:5px;" |As in most other buildings, fire protection and prevention systems are required. However, due to the investment in and criticality of the data center, extra measures are designed into the facility and its operation. High-sensitivity aspirating smoke detectors can be connected to several types of alarms, allowing a soft warning that prompts technicians to investigate at one threshold and activating fire suppression systems at another. Aside from manual fire extinguishers, gas-based or clean agent fire suppression systems can extinguish a fire without the collateral equipment damage that water-based systems may cause, though water-based systems are still used for the most critical of situations.<ref name="FacilitiesDC">{{cite web |url=http://www.facilitiesnet.com/datacenters/article/A-Comprehensive-Approach-To-Data-Center-Fire-Safety--14593 |title=Data Center Fire Protection |author=Tubbs, Jeffrey; DiSalvo, Garr; Neviackas, Andrew |work=FacilitiesNet |publisher=Trade Press |date=December 2013 |accessdate=27 August 2014}}</ref> Passive fire protection elements such as fire-rated walls are also installed to restrict a fire to only a portion of the facility.
|-
  | style="padding:5px;" |'''Physical security'''
  | style="background-color:white; padding:5px;" |Depending on the sensitivity of information contained in the data center, physical access controls are used to prevent unauthorized entry into the facility. Bollards (short, sturdy posts) may be placed to prevent vehicles or carts above a certain size from passing. Access control vestibules or "man traps" with two sets of interlocking doors may be used to force verification of credentials before admittance into the center. Video cameras, security guards, and fingerprint recognition devices may also be applied as part of a security protocol.<ref name="SAS70Security">{{cite web |url=http://www.sas70.us.com/industries/data-center-colocations.php |title=Effective Data Center Physical Security Best Practices for SAS 70 Compliance |publisher=NDB LLP |date=2008 |accessdate=27 August 2014}}</ref>
|-
|}
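
The availability trade-off noted above for mechanical and electrical engineering infrastructure design can be illustrated with a rough, hypothetical comparison. The sketch below is not drawn from the cited sources; the availability levels, annualized infrastructure costs, and downtime cost are made-up assumptions used only to show the reasoning.

<pre>
# Illustrative comparison of expected downtime cost versus the cost of a
# higher-availability design. All figures are hypothetical assumptions.

HOURS_PER_YEAR = 8760

def expected_downtime_cost(availability, cost_per_downtime_hour):
    """Expected annual cost of downtime at a given availability level."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    return downtime_hours * cost_per_downtime_hour

# Two hypothetical design options: availability, annualized capital + operating cost
options = {
    "baseline (99.9%)": (0.999, 200_000),    # roughly 8.8 hours of downtime per year
    "upgraded (99.99%)": (0.9999, 450_000),  # roughly 0.9 hours of downtime per year
}

COST_PER_DOWNTIME_HOUR = 50_000  # assumed business cost of one hour of downtime

for name, (availability, infrastructure_cost) in options.items():
    total = infrastructure_cost + expected_downtime_cost(availability, COST_PER_DOWNTIME_HOUR)
    print(f"{name}: total expected annual cost = ${total:,.0f}")

# With these assumptions the upgraded design wins (~$494k vs. ~$638k per year);
# if downtime were cheap (say $5,000 per hour), the baseline design would win instead.
</pre>

With cheaper downtime or more expensive redundancy the same comparison tips the other way, which is exactly the judgment the design team must make for each facility.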


==Carbon footprint==
[[File:Google Data Center, The Dalles.jpg|thumb|right|320px|Google Data Center, The Dalles, Oregon]]


Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities, and some facilities have power densities more than 100 times that of a typical office building.<ref name="DCDeptEnergy">{{cite web |url=http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html |archiveurl=https://web.archive.org/web/20120222115559/http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html |title=Data Center Energy Consumption Trends |publisher=U.S. Department of Energy |date=30 May 2009 |archivedate=22 February 2012 |accessdate=27 August 2014}}</ref> For power-dense facilities, electricity costs are a dominant operating expense; in 2007, electricity purchases accounted for over 10 percent of the total cost of ownership of a data center.<ref name="KoomeyAssessing">{{cite web |url=http://www.intel.com/assets/pdf/general/servertrendsreleasecomplete-v25.pdf |format=PDF |title=Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers |author=Koomey, Jonathan G. ; Belady, Christian; Patterson, Michael; Santos, Anthony; Lange, Klaus-Dieter |publisher=Intel Corporation |date=17 August 2009 |accessdate=27 August 2014}}</ref> On a national scale, the U.S. Environmental Protection Agency (EPA) found that 1.5 percent of the country's power supply (61 billion kilowatt-hours) was used to power data centers in 2006. The following year in Western Europe, 56 terawatt-hours of energy fed data centers, a figure the European Union estimated would nearly double by 2020.<ref name="APCCarbonFoot">{{cite web |url=http://ensynch.com/content/dam/insight/en_US/pdfs/apc/apc-estimating-data-centers-carbon-footprint.pdf |format=PDF |title=Estimating a Data Center's Electrical Carbon Footprint |author=Bouley, Dennis |publisher=Schneider Electric |date=2010 |accessdate=27 August 2014}}</ref>
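
To see why electricity can dominate operating expense at such power densities, consider a rough back-of-the-envelope estimate. The load and tariff figures below are illustrative assumptions, not values taken from the cited reports.

<pre>
# Back-of-the-envelope annual electricity cost for a hypothetical facility.
# All inputs are assumed values; real loads and tariffs vary widely.

facility_load_mw = 20      # assumed average total facility power draw
price_per_kwh = 0.08       # assumed electricity price in USD per kWh
hours_per_year = 8760

annual_kwh = facility_load_mw * 1000 * hours_per_year   # MW -> kW, then kWh over a year
annual_cost = annual_kwh * price_per_kwh

print(f"{annual_kwh:,.0f} kWh per year, roughly ${annual_cost:,.0f} in electricity")
# 175,200,000 kWh per year, roughly $14,016,000 in electricity
</pre>

At that scale, even small gains in efficiency translate into substantial annual savings.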


As with any other use of electricity, a data center's power consumption has an associated carbon footprint, generally defined as "the carbon emissions equivalent of the total amount of electricity a particular data center consumes."<ref name="APCCarbonFoot" /> Data centers that use numerous diesel generators for backup power stand to increase that carbon footprint even further.<ref name="GlanzGobble">{{cite web |url=http://www.nytimes.com/2012/09/24/technology/data-centers-in-rural-washington-state-gobble-power.html |title=Data Barns in a Farm Town, Gobbling Power and Flexing Muscle |author=Glanz, James |work=The New York Times |date=23 September 2012 |accessdate=27 August 2014}}</ref> In the U.S., however, new non-mobile diesel generators must comply with the EPA's Tier 4 Final requirements for reduced emissions and toxins by 2015.<ref name="Tier4">{{cite web |url=http://tier4answers.com/tier-4-answers |title=Answers to Your Tier 4 Questions |publisher=Cummins Power Generation, Inc |accessdate=27 August 2014}}</ref>
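
In its simplest form, that footprint can be approximated by multiplying the electricity a facility consumes by the emission factor of the electricity supplying it, plus any on-site generator emissions. The sketch below is a simplified single-factor model using made-up numbers; it is not the methodology of the cited reference.

<pre>
# Simplified carbon footprint estimate for a hypothetical data center:
# footprint = grid electricity x grid emission factor + on-site generator emissions.
# All values are illustrative assumptions.

annual_grid_kwh = 50_000_000      # assumed annual electricity drawn from the grid
grid_kg_co2e_per_kwh = 0.45       # assumed grid emission factor (kg CO2e per kWh)
generator_hours = 40              # assumed annual diesel generator test/run hours
generator_kg_co2e_per_hour = 700  # assumed emissions per generator-hour

footprint_kg = (annual_grid_kwh * grid_kg_co2e_per_kwh
                + generator_hours * generator_kg_co2e_per_hour)

print(f"Approximate footprint: {footprint_kg / 1000:,.0f} metric tons CO2e per year")
# Approximate footprint: 22,528 metric tons CO2e per year
</pre>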


One way designers and operators of data centers can track energy use is through an energy efficiency analysis, which measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.<ref name="PUEGoogle">{{cite web |url=http://www.google.com/about/datacenters/efficiency/internal/ |title=Efficiency: How we do it |work=Google Data Centers |publisher=Google |accessdate=27 August 2014}}</ref> The average data center in the U.S. has a PUE of 2.0,<ref name="ESRepo">{{cite web |url=http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf |format=PDF |title=Report to Congress on Server and Data Center Energy Efficiency |publisher=ENERGY STAR Program, EPA |date=07 August 2007 |accessdate=27 August 2014}}</ref> meaning that the facility uses one watt of overhead power for every watt delivered to IT equipment. Energy efficiency can also be analyzed through a power and cooling analysis, which can help identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal positioning of equipment (such as air conditioning units) to balance temperatures across the data center.
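
PUE itself is simply the ratio of total facility energy to the energy delivered to IT equipment, so a value of 2.0 means one unit of overhead (cooling, power conversion losses, lighting, and so on) for every unit of IT load. A minimal sketch of the calculation, using hypothetical meter readings:

<pre>
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# Meter readings below are hypothetical.

total_facility_kwh = 1_500_000   # assumed monthly reading at the utility meter
it_equipment_kwh = 900_000       # assumed monthly reading at the IT load (e.g., PDU outputs)

pue = total_facility_kwh / it_equipment_kwh
overhead_ratio = pue - 1         # overhead energy per unit of IT energy

print(f"PUE = {pue:.2f}; overhead = {overhead_ratio:.2f} kWh per IT kWh")
# PUE = 1.67; overhead = 0.67 kWh per IT kWh
</pre>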


The building's location is another factor affecting the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and abundant renewable electricity is available, the environmental footprint will be more moderate. Thus, countries with such favorable conditions, like Canada<ref name="LadurantayeCanada">{{cite web |url=http://www.theglobeandmail.com/report-on-business/canada-called-prime-real-estate-for-massive-data-computers/article2071677/ |title=Canada Called Prime Real Estate for Massive Data Computers |work=The Globe & Mail |author=Ladurantaye, Steve |date=22 June 2011 |accessdate=27 August 2014}}</ref>, Finland<ref name="FinCloud">{{cite web |url=http://www.fincloud.freehostingcloud.com/ |archiveurl=https://web.archive.org/web/20130110111357/http://www.fincloud.freehostingcloud.com/ |title=Finland - First Choice for Siting Your Cloud Computing Data Center |publisher=Invest in Finland |archivedate=10 January 2013 |accessdate=27 August 2014}}</ref>, Sweden<ref name="StockDC">{{cite web |url=http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN |archiveurl=https://web.archive.org/web/20100819190918/http://www.stockholmbusinessregion.se/templates/page____41724.aspx?epslanguage=EN |title=Stockholm sets sights on data center customers |publisher=Stockholm Business Region |date=19 August 2010 |archivedate=19 August 2010 |accessdate=27 August 2014}}</ref>, and Switzerland<ref name="WheelandDC">{{cite web |url=http://www.greenbiz.com/news/2010/06/30/swiss-carbon-neutral-servers-hit-cloud |title=Swiss Carbon-Neutral Servers Hit the Cloud |author=Wheeland, Matthew |publisher=GreenBiz Group |date=30 June 2010 |accessdate=27 August 2014}}</ref>, are trying to attract cloud computing data centers. Meanwhile, companies like Apple have tried to lessen their data centers' carbon footprints by siting new facilities in arid locations such as the U.S. state of Nevada.<ref name="AppleFoot">{{cite web |url=http://www.wired.com/2014/04/green-apple/ |title=Apple Aims to Shrink Its Carbon Footprint With New Data Centers |author=Levy, Steven |work=Wired |publisher=Condé Nast |date=21 April 2014 |accessdate=27 August 2014}}</ref>


===Efficiency ratings===
In the U.S., the EPA began offering an Energy Star rating for standalone or large data centers in 2010. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.<ref name="PouchetStar">{{cite web |url=http://www.efficientdatacenters.com/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx |archiveurl=https://web.archive.org/web/20120229143352/http://www.efficientdatacenters.com/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx |title=Introducing EPA ENERGY STAR for Data Centers |author=Pouchet, Jack |work=Efficient Data Centers |publisher=Emerson Electric Co |date=15 June 2010 |archivedate=29 February 2012 |accessdate=27 August 2014}}</ref> The European Union developed a similar initiative, the European Code of Conduct for Energy Efficiency in Data Centres.<ref name="JRCDC">{{cite web |url=http://iet.jrc.ec.europa.eu/energyefficiency/ict-codes-conduct/data-centres-energy-efficiency |title=Data Centres Energy Efficiency |publisher=Joint Research Centre, Institute for Energy and Transport |date=22 August 2014 |accessdate=27 August 2014}}</ref>


==Notes==


This article reuses numerous content elements from [http://en.wikipedia.org/wiki/Data_center the Wikipedia article].


==References==
<references />
