Save energy and $$$ in the data center
The cabling installed today must meet the requirements of tomorrow.
Most data centers are planning for speeds of 10-Gigabit Ethernet.
By Eric Leichter
Data centers are essentially big computer rooms that use optimized infrastructure components to support servers, storage and networking equipment. Such centers have grown to consume about 2 percent of total U.S. electrical usage and are forecast to consume 9 percent of the U.S. total by the year 2020.
Much of this power is required to run the electronics and building operations, which generate a lot of heat – one of the major causes of electronic failure within the data center. IT hardware reliability is greatly reduced as temperatures rise.
To control the flow of air, many data centers have adopted a pattern of cold aisles for the electronics and hot aisles for under-floor cable routing and passive patching. Cold air can be added and hot air removed in a controlled pattern, leading to better efficiencies from the cooling equipment.
Large data centers provide up to 270 percent of the cooling needed by the equipment due to inefficient airflow management. There are many areas in which this waste can be attacked. Structured cabling reduces the volume of cable congesting pathways and blocking airflow; the more room there is for air to flow, the less energy is expended removing hot air and circulating cooling air. Structured cabling involves the use of backbone trunks that bring a large number of optical fibers or copper pairs to an area before breaking out into smaller segments at the electronics.
Cables that are “home run” not only add more cable mass, they also complicate any moves, adds or changes. Removing a cable from a tray shared with many other cables carrying live traffic is difficult. Not wanting to risk a disruption in traffic, system operators typically decide to simply pull in new cables on top of the old. This clogs the pathways for airflow, increasing the heating, ventilation and air-conditioning workload. A backbone trunk provides a link that does not have to be disturbed; reconfiguration is done at a patching field close to the electronics.
High-fiber-count optical cables also offer greater density than running multiples of one-fiber and two-fiber cables. Traditional 2.9-mm cabling for SC optical-fiber connectors takes up seven times the space of a trunk-cable solution, and even the smaller 1.6-mm-diameter cables used with LC connectors occupy twice the cross-sectional area. Loose-tube cables provide the best density among today’s trunk-cable designs.
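The space savings are easy to sanity-check with a little geometry. The sketch below compares the total cross-sectional area of 72 simplex 2.9-mm cords against a single 72-fiber trunk. The 9.5-mm trunk outer diameter is an assumed, illustrative figure, not a quoted product specification.

```python
import math

def bundle_area(n_cables, od_mm):
    """Total cross-sectional area (mm^2) of n round cables of a given outer diameter."""
    return n_cables * math.pi * (od_mm / 2) ** 2

# Assumed dimensions for illustration: 72 simplex 2.9-mm SC cords
# versus a single 72-fiber loose-tube trunk with a ~9.5-mm OD.
simplex_area = bundle_area(72, 2.9)
trunk_area = bundle_area(1, 9.5)

print(f"simplex bundle: {simplex_area:.0f} mm^2")
print(f"trunk:          {trunk_area:.0f} mm^2")
print(f"ratio:          {simplex_area / trunk_area:.1f}x")  # -> 6.7x
```

With these assumed diameters, the ratio lands near the roughly sevenfold figure cited above; the exact number depends on the actual trunk construction.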
The role of structured cabling
Although cabling is resistant to heat, with operating temperatures up to 140° F (60° C), blocked airflow can lead to localized hot spots where temperatures are much higher than the room average. Using structured cabling to reduce consumed space will help keep the equipment operating within acceptable margins.
Although the electronics and software often are replaced every three to five years, the cabling is expected to last much longer. Therefore, the cabling installed today must meet the requirements of tomorrow. Most data centers are planning for – if not already utilizing – speeds of 10-Gigabit Ethernet (GbE). OM3 optical fiber and Category 6A copper cabling are rated for these speeds at distances typically seen within a data center.
For data centers operating at speeds of 10/100/1,000 Mbps today, the guidelines call for Category 6 cable as the minimum level of cabling to install. If an upgrade to 10GBASE-T is being considered within the next five years, however, installing higher-bandwidth Category 6A cabling should meet the needs of both today and tomorrow. This cable specification provides a high level of performance to support equipment endpoints, such as servers and storage.
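The selection guidance above can be distilled into a short helper. This is a hypothetical planning rule written for illustration; `minimum_copper_category` and its parameters are not drawn from any standard.

```python
def minimum_copper_category(target_gbps, upgrade_to_10g_planned=False):
    """Hypothetical planning rule distilled from the guidance above:
    Category 6 covers 10/100/1000 Mbps links, while Category 6A is
    warranted once 10GBASE-T is in scope now or within the planning window."""
    if target_gbps >= 10 or upgrade_to_10g_planned:
        return "Category 6A"
    return "Category 6"

print(minimum_copper_category(1))                               # -> Category 6
print(minimum_copper_category(1, upgrade_to_10g_planned=True))  # -> Category 6A
print(minimum_copper_category(10))                              # -> Category 6A
```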
Many designers are looking even longer term. If the cabling is expected to last for 20 years through several generations of technology, then the requirements of the likely next-generation technology should be considered. To achieve these higher data rates, OM1 62.5-um and OM2 50-um optical fibers will not be adequate. OM3 50-um optical fiber is expected to be the minimum fiber grade recognized for use with these upcoming high-speed applications.
Extended-range OM3 fibers are available today, with manufacturer-specified distances of 500-plus meters for 10-GbE performance. Soon, these fibers will be recognized within the standards under an OM4 designation. They will also support 40 GbE and 100 GbE; use of OM4 fiber may allow extended range and/or an increase in the number of connection points for high-data-rate applications. If installing extended-range OM3 fiber cable today, make sure it will meet the expected requirements of the draft TIA-492AAAD, which outlines the requirements for OM4 optical fiber.
Getting the fiber count correct will also be critical in delaying or eliminating the need to pull in new backbone cabling every time a new application arrives. High-speed applications will likely run over “parallel optics,” which splits a high-speed data stream across multiple fibers, sends them over the passive system and recombines the signals at the far end.
Minimum fiber requirement
Standards organizations are looking at various options utilizing multifiber push-on (MPO) array connectors. A likely scenario for 100-GbE transmission has 10 fibers acting as the transmit channel and another 10 fibers acting as the receive channel. Because MPO trunks are provisioned in 12-fiber increments, the system designer should treat 24 fibers to many locations within the data center as a minimum requirement to ensure the capability to run parallel-optics applications in the future.
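That sizing rule can be sketched in a few lines. The rounding to 12-fiber MPO increments is an assumption for illustration, and `fibers_required` is a hypothetical helper, not a formula from any standard.

```python
import math

def fibers_required(lanes_per_direction, mpo_fibers=12):
    """Fibers for a parallel-optics link (transmit + receive lanes),
    rounded up to whole MPO connector increments (assumed planning rule)."""
    raw = 2 * lanes_per_direction
    return math.ceil(raw / mpo_fibers) * mpo_fibers

# 100 GbE as 10 x 10G lanes each way -> 20 active fibers, provisioned as 24
print(fibers_required(10))  # -> 24
# 40 GbE as 4 x 10G lanes each way -> 8 active fibers, provisioned as 12
print(fibers_required(4))   # -> 12
```

The gap between the 20 active fibers and the 24 provisioned is simply the rounding to whole connectors, which is where the 24-fiber minimum above comes from.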
Approximately 70 percent of data center operators replace their cabling after only four years. Extending the expected life of the cabling makes the initial IT purchasing decision easier and reduces repurchase costs over time. Installing the correct cabling system will also reduce material disposal in the future and limit the hassles and costs associated with cable replacement.
When choosing copper and fiber media, the data center design also should consider the cost of the initial investment in electronics versus the longer-term costs due to heat generation and maintenance. A components supplier that understands both copper and fiber media will be able to help sort through these issues.
Deploying an intelligent infrastructure-management system can provide visibility into and control of the network for more-efficient use of energy, network assets and natural resources. An intelligent infrastructure provides complete and instant knowledge of every available switch port in the network, allowing staff to minimize the number of switches deployed and lower the overall power usage of the network.
Simple network management protocol (SNMP) can be used to communicate with networked devices, such as temperature sensors, and to send alerts notifying staff of potential energy-consuming problems. Because such a system can identify each asset on the network in real time, staff can monitor and enforce asset-shutdown policies during non-business hours to conserve energy.
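The alerting logic described above can be sketched as follows. A real deployment would poll the sensor readings over SNMP rather than pass them in as a dictionary, and the 104° F threshold here is purely illustrative.

```python
def check_sensors(readings_f, threshold_f=104.0):
    """Return alert messages for any sensor above the temperature threshold.
    The 104 F default is an illustrative value, not a standard limit."""
    return [
        f"ALERT: {name} at {temp:.0f} F exceeds {threshold_f:.0f} F"
        for name, temp in readings_f.items()
        if temp > threshold_f
    ]

# Example readings from two hypothetical aisle sensors
print(check_sensors({"hot-aisle-3": 118.0, "cold-aisle-1": 68.0}))
```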
There are many opportunities for reducing wasted materials and inefficient use of energy within the data center. Optimizing the passive system can be a big part of the effort in the “greening” of the data center. A more-efficient design will reduce environmental waste, provide a higher-performing system at a lower cost and produce a solution with a longer life span. HMT
Eric Leichter is a senior manager for CommScope enterprise global services, which provides design, project management and other services.