Benefits of Deploying SFP+ Fiber vs. 10GBase-T
BJORN BAERA
Bjorn Baera is Sr. Marketing Manager for Mellanox Technologies, responsible for products used for high-throughput, low-latency server and storage interconnect in data centers. You can find him on LinkedIn.
Dramatic growth in data center throughput has led to the increasing usage and demand for higher-performance servers, storage and interconnects. As a result, we are seeing the expansion of higher speed Ethernet solutions, specifically 10 and 40 gigabit Ethernet.
With 10 Gigabit Ethernet (10GbE) in particular, IT managers are now faced with the challenge of selecting the appropriate physical media, as 10GbE is offered in two broad categories: optical and copper.
The main challenge IT managers face when selecting a new cable solution is the ability to support current and future data center deployments and trends:
- Many-core Servers – servers include more and more CPU cores to cope with data growth and application requirements.
- Virtualization – consolidating application workloads onto fewer, more highly utilized servers using virtualization technologies such as VMware, Hyper-V, Xen and KVM.
- Storage Area Networks (SAN) – networked storage delivers services to multiple compute elements, at both the block and file-system level.
- I/O Consolidation – usage of a single interconnect infrastructure for all communications needs: compute, storage and management.
- Data Center Network Aggregation – as the deployment of 10GbE increases, there is a need for higher speed switch uplinks for network aggregation in the data center (40GbE and above).
When planning new cluster cabling, IT managers face the challenge of future-proofing their investment, as well as predicting future application requirements.
Considerations for Cable Infrastructure
When planning a future cable infrastructure, it is important to make sure that the physical infrastructure will support future application needs and future technology roadmaps. IT and data center managers prefer to avoid installing multiple cable infrastructures for separate application traffic requirements – such as high-speed CPU-to-CPU communication using 40Gbps InfiniBand, and storage and Internet connectivity using 10Gbps Ethernet.
IO consolidation, which is critical to reducing data center capital and operational costs, mandates that the cable infrastructure support new dynamics and challenges.
Comparing the 10GBase-T and SFP+ Options
Many IT managers are now evaluating the newly refreshed 10GBase-T technology, as the perception is that 10GBase-T is cheaper and easier to deploy than the alternative SFP+ technologies. The comparison below weighs the two technologies on latency and power.
As the adoption of private cloud applications increases, the need for low latency, large scale data centers is growing fast. Low latency is critical to ensuring fast response time and reducing CPU idle cycles; therefore, increasing data center efficiency and ROI.
When it comes to 10GBase-T, the PHY standard uses block encoding to transport data across the cable without errors. Block encoding requires a block of data to be read into the transmitter PHY and a mathematical function run on it before the encoded data are sent over the link; the reverse happens on the receiver side. The standard specifies a maximum of 2.6 microseconds for the transmit-receive pair, and the size of the block means the latency cannot be less than about 2 microseconds. SFP+ uses simpler electronics without block encoding, and typical latency is around 300 nanoseconds (ns) per link.
Two microseconds may not seem like much at first; however, in a top-of-rack (ToR) infrastructure where traffic crosses four hops to reach its destination, as much as 10.4 microseconds of delay is introduced when using 10GBase-T. This is a significant performance penalty compared to the 1.2 microseconds introduced by SFP+ DAC technology. For either technology, the latency of the physical media must be added: in fiber or copper, propagation delay is roughly 5ns per meter.
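The per-hop arithmetic above can be sketched as follows. The per-link PHY figures (2.6 microseconds for 10GBase-T, 300 ns for SFP+ DAC) and the 5 ns/m propagation speed come from the article; the hop count and cable lengths are illustrative assumptions:

```python
# Illustrative latency estimate for a multi-hop path. Per-link PHY
# latencies are the figures cited above; hop count and cable lengths
# are assumptions chosen for the example.

PHY_LATENCY_NS = {"10GBase-T": 2600, "SFP+ DAC": 300}  # per link traversal
PROPAGATION_NS_PER_M = 5  # signal speed in fiber or copper, ~5 ns/m

def path_latency_us(phy, hops=4, cable_m=5):
    """One-way PHY plus propagation latency, in microseconds."""
    phy_ns = PHY_LATENCY_NS[phy] * hops
    wire_ns = PROPAGATION_NS_PER_M * cable_m * hops
    return (phy_ns + wire_ns) / 1000.0

for phy in PHY_LATENCY_NS:
    print(f"{phy}: {path_latency_us(phy):.2f} us over 4 hops")
```

With the cable contribution set to zero, the function reproduces the article's 10.4 vs. 1.2 microsecond four-hop comparison.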
The 10GBase-T delay is of the same order of magnitude as solid-state disk latency, and can therefore delay data delivery by nearly 50 percent. High latency in the data center infrastructure delays CPU and application work, limiting data center efficiency and increasing operational costs.
As power grid companies cap power supplies to data centers, IT managers have become sensitive to server power consumption. Data center managers aspire for the lowest possible power consumption technologies. It is important to note that for every watt of power consumed, typically two additional watts are needed for cooling.
10GBase-T components today require anywhere from 2 to 5 watts per port at each end of the cable, depending on the cable's length, while SFP+ requires approximately 0.7 watts, regardless of distance.
When deploying thousands of cables in a data center, huge power savings can be achieved by choosing SFP+ DAC and fiber technology.
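The scale of those savings can be sketched with the article's own figures: per-port wattage at each cable end, plus roughly two watts of cooling for every watt consumed. The 5,000-cable deployment size and the 3.5 W mid-range value for 10GBase-T are assumptions for the example:

```python
# Rough power-budget comparison for a cable plant, using the per-port
# figures cited above. Deployment size is an assumption.

COOLING_FACTOR = 3.0  # 1 W consumed => ~2 W extra cooling, 3 W total

def plant_power_kw(watts_per_port, cables, ends_per_cable=2):
    """Total facility power (device plus cooling) in kW for a cable plant."""
    device_w = watts_per_port * ends_per_cable * cables
    return device_w * COOLING_FACTOR / 1000.0

cables = 5000
base_t = plant_power_kw(3.5, cables)  # mid-range of the 2-5 W figure
sfp = plant_power_kw(0.7, cables)
print(f"10GBase-T: {base_t:.0f} kW, SFP+: {sfp:.0f} kW, saving {base_t - sfp:.0f} kW")
```

Under these assumptions the SFP+ plant draws roughly a fifth of the 10GBase-T plant's facility power.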
When planning for new data center cable infrastructures, new dynamics must be considered:
- Server, storage and interconnect virtualization
- Application mobility across the data center's servers and storage
- Future proofing the data center cabling structure and applications
SFP+ Technology Ensures Optimal Performance and Lowest Latency
New dynamics within data centers mandate that the cable infrastructure handle latency-sensitive applications anywhere. When comparing 10GBase-T technology with the alternative SFP+ technology, it is evident that SFP+ is the right technology to ensure optimal performance with the lowest latency in the data center.
SFP+ technology lowers the power budget
SFP+ technology delivers far lower power usage than the 10GBase-T technology. The cost saving becomes obvious when deploying from 1000 to 10,000 cables in the data center.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
How about the costs of the SFP+ modules?
Looking at spec sheets for a current generation of 10GbE switches, the power breakdown I see on the switch side, for AC power supplies, is:
48-port switches across the board, line rate on all ports, layer 2+3, 100% traffic load (the power difference between 100% and 30% is trivial).
10GbE w/o PHY SFP+: 2.29W/port
10GbE w/PHY SFP+: 3.54W/port
10GbE w/PHY 10GbaseT: 6.72W/port
(DC power is slightly less)
(taken as the total power draw of the switch divided by the number of ports).
Looking at the SFP+ NICs in some of my servers, they claim an average power draw of 17W, or 8.5W/port, since they are dual port.
Looking at an Intel 10GbaseT NIC, it claims an average power draw of 14.3W, or 7.15W/port on a dual-port NIC.
Both are “converged” NICs that support multiple storage protocols over Ethernet.
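The commenter's method (total switch draw divided by port count, plus the NIC's per-port share) can be sketched like this. The per-port switch wattages and NIC draws are the figures quoted in the comment; the choice of which switch to pair with which NIC is illustrative:

```python
# End-to-end per-port power the way the commenter derives it: switch
# draw divided by port count, plus the NIC's per-port share. Figures
# are from the comment; the pairings are illustrative assumptions.

def per_port_w(switch_total_w, ports):
    """Switch contribution per port."""
    return switch_total_w / ports

PORTS = 48  # 48-port switches, per the comment
configs = {
    "SFP+ (no PHY) switch + SFP+ NIC": per_port_w(2.29 * PORTS, PORTS) + 8.5,
    "10GbaseT switch + 10GbaseT NIC": per_port_w(6.72 * PORTS, PORTS) + 7.15,
}
for name, watts in configs.items():
    print(f"{name}: {watts:.2f} W per end-to-end port")
```

On these numbers the NIC dominates the per-port budget, which is why the switch-side delta between the PHY types looks smaller in practice than the PHY figures alone suggest.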
I see 10GbaseT taking over completely within racks, and SFP+ used for backhaul.
There’s no reason to traverse 4 hops to get to another system. Even if you do have 4 hops, the amount of latency you’re talking about is, for the most part, trivial for MOST applications (e.g. not things that are using RDMA and the like).
At least from a switch (and I believe from a server) perspective, the latest generation of 10GbaseT is quite a bit less expensive across the board than current generation SFP+ configurations when you take into account the cost of the SFP modules (or passive copper cabling even).
Previous generation 10GbaseT switches were quite costly, but the stuff that is based on the latest silicon out of the likes of Broadcom dropped the cost by roughly 50% over earlier generations, from what I’ve seen anyway. Basically a 48-port 10GbaseT switch is about the same cost as a 48-port SFP+ 10GbE switch (w/PHY) before you even consider the cost of SFP+ modules or passive copper cabling.
Would be interested to know where you got your numbers from for power usage.
Key word being modern – basically stuff that came out in 2012.
Myself, I’ve been waiting for 10GbaseT to come around, and it looks like it finally has. The only issue with it today is that the ecosystem is still very limited in the number of products that support it – e.g. as far as I know I can’t go buy an HP server with 10GbaseT right now.
storageman – Posted April 29th, 2013
Don’t you mean optical? It is confusing to refer to 10GbE optical as 10GbE fibre, as this confuses it with the FC protocol.
sambo – Posted May 5th, 2013
FC and InfiniBand can ride over the same SFP+ InfiniBand DAC. It’s similar to SAS, all SFF-8470. If you want 20, 30, 40, 50, 60 Gbps, they just add more plugs stacked.
40GbE – want to split it out? Just get a 4:1 splitter cable.
If you are running 10GbE or greater, you are probably using a huge MTU like 12000, which reduces the impact compared to, say, the tiny-jumbo FCoE default of 2500 MTU.
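The frame-count arithmetic behind that MTU point can be sketched: larger frames mean fewer per-frame overheads for the same payload. The MTU values are the ones mentioned in the comments; the 1 MiB transfer size is an illustrative assumption:

```python
# Frames required to move a payload at different MTUs. MTU values are
# from the discussion; the 1 MiB payload is an assumption.

def frames_needed(payload_bytes, mtu):
    """Frames required to carry a payload at a given MTU (ceiling division)."""
    return -(-payload_bytes // mtu)

payload = 1 << 20  # 1 MiB
for mtu in (1500, 2500, 12000):
    print(f"MTU {mtu}: {frames_needed(payload, mtu)} frames")
```

Going from a 2500-byte to a 12000-byte MTU cuts the frame count, and hence the per-frame processing, by nearly a factor of five for the same transfer.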
Average cost of 4-ports of 10gbase-T – $300 for a 12th generation LOM with 4 ports.
Remember, 3 feet of 10gbase-T uses far less power now than 300 feet. 3 feet of Monoprice Cat6 is quite fine and stable at $4 a cable. Passive DAC-to-DAC cabling assumes both ends agree to use it (HP likes HP DAC cables, and some OEM cards like their own DAC only), which presents a problem if you have a DAC cable that is HP on both ends or Finisar on both ends. One end will be link up, the other will be link down (unknown DAC).