Operators of large-scale data centers have been steadily increasing the transmission speeds of their Ethernet networks. While many organizations continue to rely on 10Gbps (10G) technology, with some migrating to 40G, large-scale data centers are rapidly moving to 100G and beyond. Research firm IHS Infonetics has predicted that 100G will make up more than half of data center optical transceiver shipments by 2019.
Traditionally, data centers have used link aggregation to increase throughput. Multiple 1G or 10G ports in a switch are “bundled” into a single logical connection with the aggregate bandwidth of the individual links. The Link Aggregation Control Protocol (LACP, defined in IEEE 802.1AX) negotiates the bundle, while hash-based algorithms distribute traffic across the member links.
This works reasonably well, but there are significant drawbacks. First of all, load balancing is tricky. All of the packets belonging to a particular session must be sent across the same link, or else packets may arrive out of order. Out-of-order delivery decreases efficiency and causes problems with some applications.
All of the links in an aggregation group must be the same speed (all 1G or all 10G, for example) and configured the same way. The standard allows up to eight links in a group, and many devices restrict you to a smaller number. For best results, the group should contain a power-of-two number of links, because hash-based distribution spreads most evenly across two, four, or eight members. However, even if you set up everything correctly, the load balancing isn’t very efficient: traffic is distributed per flow rather than per packet, so a few heavy flows can saturate one link while others sit idle.
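To see why load balancing works this way, here is a minimal sketch of hash-based link selection. The function name and tuple fields are illustrative (real switches use vendor-specific hardware hashes over MAC, IP, and port fields), but the principle is the same: every packet of a flow hashes to the same member link, which preserves ordering at the cost of per-flow granularity.

```python
import hashlib
from collections import Counter

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Choose a member link by hashing the flow's address/port tuple.

    Because the hash input is constant for a given flow, every packet
    of that flow maps to the same link, so packets stay in order.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# The same flow always lands on the same link:
first = pick_link("10.0.0.1", "10.0.0.2", 40000, 443, 4)
again = pick_link("10.0.0.1", "10.0.0.2", 40000, 443, 4)
assert first == again

# Across many flows, traffic spreads roughly evenly over the links --
# but a single large flow still occupies only one link, no matter
# how much bandwidth it needs.
counts = Counter(
    pick_link("10.0.0.1", "10.0.0.2", port, 443, 4)
    for port in range(20000, 30000)
)
print(counts)
```

With four links and 10,000 distinct flows, each link receives roughly a quarter of the flows, but the per-packet rates of those flows can be wildly unequal, which is exactly the inefficiency described above.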
Link aggregation also ties up a lot of ports and increases the overall complexity of the switching infrastructure. By implementing 100G, data centers gain operational efficiencies and a more scalable environment to accommodate ever-increasing density and rapid growth.
The availability of more cost-efficient equipment has helped to spur the adoption of 100G. New transceivers have emerged that are less expensive and consume less power than previous generations. And now, complementary metal oxide semiconductor (CMOS) technology is being used for transceivers, enabling even faster transmission speeds while using less power. As a result, the cost-per-port of 100G has come down significantly in recent years, with some 100G solutions costing less per gigabit than comparable 10G and 40G products.
Of course, there’s no sign that network demands will slow down, so 100G is really just a stepping stone toward even faster connections. Hyperscale data center operators and service providers have already implemented 400G connections between their data centers, with 100G connections in the data center core and 25G to individual servers.
It’s only a matter of time before 100G enters the mainstream, so how do you prepare your network? Specifically, how do you build out your cabling infrastructure to facilitate migration to 100G?
With Base-8 MTP cable and modular patch panels from Enconnex, you gain the flexibility you need to future-proof your network. Base-8 has become the preferred choice for optical Ethernet connectivity, with a clear path from 40G to 100G to 400G. The Enconnex modular patch panels have slots that support cassettes with 10G, 40G or 100G ports, making it easy to swap out the port interface without altering the cabling infrastructure. This saves time, money and headaches as well as valuable real estate within your environment.
The move to 100G is inevitable, and now’s the time to plan your migration strategy. Let us show you how modular patch panels from Enconnex provide flexibility, operational efficiency and investment protection.