Thursday, February 11, 2016

Zone Distribution in Data Centers

Data cabling infrastructures must provide:

  • Reliability - Should provide 24 x 7 uptime.
  • Scalability - Should support data center growth and accommodate the need for more bandwidth in support of future applications.
  • Manageability - The cabling infrastructure must also accommodate changing requirements.
  • Flexibility - Should allow adjustments with minimum downtime.



Zone Distribution recommended by TIA-942
ZDA - Zone Distribution Area - a consolidation point for high-fiber-count cabling running from the Main Distribution Area (MDA) or Horizontal Distribution Area (HDA) to areas or zones within the data center.




Thursday, February 4, 2016

Data Centre Design – basic considerations



Airflow and Cooling
Arranging servers and cooling units so that cooling is efficient and effective is important. The common method of cooling relies on rack-mounted fans, and a “hot/cold aisle” layout is the most efficient arrangement. Server racks are arranged in long rows with space between them. On one side of each row, cool air is supplied along the “cold aisle”; rack-mounted fans draw this air through the equipment, and on the opposite side of the row the fans expel the hot air into the “hot aisle”. The hot air is then exhausted from the room.


Humidification
Humidification is usually the most wasteful component of data center designs. Today's servers can tolerate relative humidity anywhere between 20% and 80%; a safe humidity control range is 30%-70%. The most sensitive equipment tends to be magnetic tape media rather than the servers themselves. An efficient humidification plan would involve a central unit and would allow for seasonal changes.


Power Supply
Data centers consume a lot of electrical power. This power is delivered from the grid as 400 volts AC, but the circuits within the IT equipment run on 6-12 volts DC. This means the voltage first has to be reduced by a transformer and then converted from AC to DC by a rectifier. This is usually done by a power supply unit (PSU) within each piece of IT equipment, but it is highly inefficient and adds to the waste heat. Furthermore, the power will probably already have been converted from AC to DC and back again within a UPS (see below), so the conversion losses are compounded.
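To get a feel for how these repeated conversions add up, here is a minimal Python sketch of the loss arithmetic. The stage efficiencies and the 100 kW load are assumed, round illustrative figures, not measurements from any particular UPS or PSU:

# Illustrative only: the efficiency figures below are assumed round numbers,
# not measurements from any specific UPS or PSU model.

def chain_efficiency(*stage_efficiencies):
    """Overall efficiency of power conversion stages connected in series."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

ups_rectifier = 0.95   # AC -> DC inside a double-conversion UPS (assumed)
ups_inverter  = 0.95   # DC -> AC back out of the UPS (assumed)
server_psu    = 0.90   # AC -> low-voltage DC inside the server PSU (assumed)

overall = chain_efficiency(ups_rectifier, ups_inverter, server_psu)
it_load_kw = 100.0     # hypothetical IT load

print(f"Overall conversion efficiency: {overall:.1%}")              # ~81.2%
print(f"Power drawn for a {it_load_kw:.0f} kW IT load: {it_load_kw / overall:.1f} kW")
print(f"Conversion losses (waste heat): {it_load_kw / overall - it_load_kw:.1f} kW")

Under these assumed figures, roughly a fifth of the power drawn never reaches the IT circuits and instead appears as heat that the cooling system must then remove.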


Cabling Pathways
Server organization and cabling is very important for a data center. Cabling should be organized in such a way that individual racks can be disconnected easily from the system. Separation of power and data cables is essential to prevent interference. Cabling is expected to be replaced every 15-20 years.

Environmental Control in the Data Center

In order for a data center to perform optimally throughout its lifespan, it is necessary to maintain the temperature, humidity and other physical qualities of the air within a specific range.

Air Flow
Why do we need efficient air flow?
  • Improving computer cooling efficiency
  • Preventing recirculation of hot air exhausted from IT equipment
  • Reducing bypass airflow
  

Overheating often results in reduced server performance or equipment damage. Atmospheric stratification, where warm air collects near the top of the room and racks, can force cooling equipment set points lower than recommended.
The various control measures can be classified into:

Temperature
Recommended: 70-75 °F / 21-24 °C
In environments with high relative humidity, overcooling can expose equipment to excess moisture, which in turn allows salt deposits to grow on conductive elements in the circuitry.

Rack Hygiene
To ensure that air from the cold aisle reaches the equipment intakes and to prevent leakage of exhaust air into the intake area, data centers fit blanking plates and other fittings around the edges, top and floor of the rack to direct the air intake. Effective airflow management prevents hot spots, which are common in the top spaces of the rack, and allows the temperature of the cold aisles to be raised.

Aisle Containment
Containment is used to prevent mixing of cool supply air and hot exhaust air within server rooms. In general, cabinets face each other so that cool air reaches the equipment at the set temperature. Containment is implemented by physically separating the hot and cold aisles, using blanking panels, PVC curtains or hard panel boards. The containment strategy depends on:
  • Server Tolerance
  • Ambient Temperature Requirements
  • Leakage from the data center

Access Floors
A raised access floor allows under-floor cable management as well as air flow. Access floor designs can be classified into:
  • Low profile floors (under 6 inches in height)
  • Standard/ traditional access floors (over 6 inches)
Data center equipment is very sensitive and susceptible to damage from excessive heat, moisture and unauthorized access. IT WatchDogs provides a full line of environmental sensors that deliver exceptional protection and alerting functions without requiring any proprietary software installations or update subscriptions.
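As a rough illustration of the kind of threshold-based alerting such sensors perform, the Python sketch below checks readings against the ranges quoted earlier (70-75 °F, 30%-70% relative humidity). This is a generic example, not the IT WatchDogs product or its API, and the sensor readings are made up:

# Generic threshold-alert sketch; the ranges follow the recommendations above.
# Not a vendor API -- the readings and metric names are made up for illustration.

RECOMMENDED = {
    "temperature_f": (70.0, 75.0),   # recommended temperature range
    "humidity_pct": (30.0, 70.0),    # safe humidity control range
}

def check_reading(metric, value):
    """Return an alert string if the value falls outside the recommended range."""
    low, high = RECOMMENDED[metric]
    if value < low:
        return f"ALERT: {metric} = {value} is below the recommended minimum {low}"
    if value > high:
        return f"ALERT: {metric} = {value} is above the recommended maximum {high}"
    return None

# Hypothetical readings from two sensors in a cold aisle.
readings = [("temperature_f", 78.5), ("humidity_pct", 42.0)]
for metric, value in readings:
    alert = check_reading(metric, value)
    print(alert or f"OK: {metric} = {value}")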

In short, data center environmental control is the practice of creating a conducive environment inside a data center in order to keep its contents safe and secure.

Equinix Data Center Tour

Equinix is an internet business exchange and colocation company. The data center on the west side of Chicago is 280,000 sq ft, and Equinix occupies almost 2 million sq ft in the US alone. In the rest of this tour, we will talk about the Chicago data center.



The building has 3 floors: the first houses the facility and infrastructure, and the other 2 floors house the colocation space. Security at the center is taken very seriously, with dozens of security cameras all around, and mantraps and locked doors using biometric fingerprint readers for access to floors and rooms. The video https://www.youtube.com/watch?v=WBIl0curTxU shows how an empty room was converted into server rooms; this was not built like a typical data center, and the video showed the flooring of the entire center as concrete slabs. Being an internet exchange company, Equinix has to deliver large amounts of data through its networks to customers at high speed. For this purpose, the center has overhead cable pathways carrying copper, fiber and coax. The entire center was designed to be modular, catering to each customer's exact needs.
A main component of any data center is airflow management. Equinix uses CVAC systems that deliver air from the top: cold air is driven into the cold aisles from above, while warm air rises out of the hot aisles from bottom to top. On the colocation floors, the entire east and west sides of the building are air handler rooms. In the air handler rooms, the warm air coming up is drawn through filters; the air handlers filter and re-cool the warm air and send it back to the aisles accordingly. The building is maintained at 68-72 °F on average. For this purpose, sensors are located throughout the building; if the temperature rises above the set point, the sensors call for more cooling, and the building management system in turn varies the variable frequency drives, which are present on all floors.
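The paragraph above describes a simple feedback loop: sensors report temperature, and the building management system speeds the air handler fans up or down through their variable frequency drives. The Python sketch below captures that idea; the 68-72 °F band comes from the tour, while the gain and speed limits are assumed values for illustration only:

# Minimal sketch of the control idea described above: when sensors report
# temperatures above the set band, the building management system raises
# the air handler fan speed via the variable frequency drives (VFDs).
# The gain and speed limits are assumed values, not Equinix settings.

SET_BAND_F = (68.0, 72.0)            # target temperature band from the tour
GAIN = 5.0                           # % fan speed per °F outside the band (assumed)
MIN_SPEED, MAX_SPEED = 30.0, 100.0   # VFD speed limits in % (assumed)

def vfd_speed(current_speed_pct, sensor_temp_f):
    """Return the new fan speed (%) given one sensor reading."""
    low, high = SET_BAND_F
    if sensor_temp_f > high:          # too warm: call for more cooling
        current_speed_pct += GAIN * (sensor_temp_f - high)
    elif sensor_temp_f < low:         # too cool: back the fans off
        current_speed_pct -= GAIN * (low - sensor_temp_f)
    return max(MIN_SPEED, min(MAX_SPEED, current_speed_pct))

print(vfd_speed(50.0, 74.0))  # 60.0 -> above the band, fans speed up
print(vfd_speed(50.0, 70.0))  # 50.0 -> inside the band, no change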

From the hardware side, they have eight 750-ton chillers in the air handler rooms and eight water cooling towers on the roof. If the outside (external) temperature falls below 45 °F, mechanical cooling is shifted to free cooling that takes advantage of the external temperature. This makes cooling less expensive, and it is one reason why Chicago can be a great location for data centers.
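The free-cooling switchover works roughly like the decision sketched below in Python. The 45 °F threshold is from the tour; the small hysteresis band is an assumption added to avoid rapid switching between modes:

# Sketch of the free-cooling decision described above: below roughly 45 °F
# outside, cooling can shift from the chillers to the outside air.
# The hysteresis dead band is an assumption, not an Equinix setting.

FREE_COOLING_THRESHOLD_F = 45.0
HYSTERESIS_F = 2.0  # assumed dead band around the threshold

def cooling_mode(outside_temp_f, current_mode="mechanical"):
    """Choose 'free' or 'mechanical' cooling from the outside air temperature."""
    if outside_temp_f < FREE_COOLING_THRESHOLD_F - HYSTERESIS_F:
        return "free"          # cold enough: use the outside air
    if outside_temp_f > FREE_COOLING_THRESHOLD_F + HYSTERESIS_F:
        return "mechanical"    # too warm: run the chillers
    return current_mode        # inside the dead band: keep the current mode

for temp in (30.0, 44.0, 50.0):
    print(temp, "->", cooling_mode(temp))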
Coming to power, the data center runs on 30 MW on average. They have installed diesel generators, each with a capacity of 2.5 MW. At all times, 12 are online (12 x 2.5 MW = 30 MW, matching the average load), with 3 on reserve in case any of the 12 go down. In addition, they have 2 swing generators for maintenance purposes. The generators also feed the switchgear; hence the switchgear has 2 inputs, one from the utility and one from the generators.


Apart from all this, the Equinix data center is very customer-friendly.