10 Gigabit Ethernet is a LIE! (Updated twice)


They need to stop calling it that.  No one installing 10 GbE is getting 10 GbE performance, so why should they be allowed to call it that?

(Someone replied to my query, and there is now a new blog entry about what they said.) 

The first lie was GbE.  For years we couldn't get anything more than 50-60 MB/s.  These days you can approach GbE wire speed by using jumbo frames and/or a TCP Offload Engine (TOE) card.  It takes extra work, but hey, at least you can get it.

Now let's talk about 10 GbE.  Go ahead: pay way too much for a 10 GbE NIC, install it, and tell me what your throughput is.  The best I've ever heard of is just over 200 MB/s.  Lessee... 200 MB/s * 8 = 1,600 Mb/s.  10 GbE = 10,000 Mb/s.  10,000...  1,600...

The rated speed is 625% of what you actually get.  I think it's safe to call that a LIE.
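To sanity-check that arithmetic, here's a quick back-of-the-envelope calculation (the 200 MB/s figure is the best reported throughput mentioned above, not a measurement of mine):

```python
# Back-of-the-envelope check: rated 10 GbE vs. best observed throughput.
observed_MBps = 200                  # best reported 10 GbE throughput, in MB/s
observed_Mbps = observed_MBps * 8    # 8 bits per byte -> 1,600 Mb/s
rated_Mbps = 10_000                  # "10 Gigabit" Ethernet

print(observed_Mbps)                 # 1600
print(rated_Mbps / observed_Mbps)    # 6.25 -> the rating is 625% of reality
```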

The Intel NIC I pointed out isn't a TOE, you say?  Alright, buy the Chelsio 10 GbE TOE NIC.  They were BRAGGING that it can go as fast as 200 MB/s!  Oh, and by the way, it's also really hard to get TOE drivers for anything but Windows.

So why do people think that GbE (and 10 GbE, 40 GbE, etc.) is going to replace Fibre Channel?  At least with 2 Gb or 4 Gb FC, I actually get 2 or 4 Gb.

I think the core problem is the IP protocol.  It's designed to send data across the world, not across a datacenter.  It carries a lot of overhead that's necessary when you're sending data around the world, but that overhead buys you nothing (and is still paid) when you're just crossing a datacenter.
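As a rough illustration of that overhead, here's a sketch using textbook header sizes (14-byte Ethernet, 20-byte IPv4, 20-byte TCP, no options), ignoring the Ethernet FCS, preamble, and inter-frame gap.  It also shows why the jumbo frames mentioned earlier help: bigger frames amortize the fixed per-frame headers:

```python
ETH_HDR, IP_HDR, TCP_HDR = 14, 20, 20   # typical header sizes in bytes, no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of each Ethernet frame that is application payload.
    The MTU covers the IP and TCP headers plus payload; the
    Ethernet header sits outside the MTU."""
    payload = mtu - IP_HDR - TCP_HDR
    return payload / (mtu + ETH_HDR)

print(round(payload_efficiency(1500), 3))   # ~0.964 with standard frames
print(round(payload_efficiency(9000), 3))   # ~0.994 with jumbo frames
```

Header overhead alone doesn't explain a sixfold gap, of course; the per-packet CPU and interrupt cost of processing all those small frames matters more, which is exactly the work a TOE card tries to offload.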

Fibre Channel, on the other hand, assumes the transport is local and that frames arrive in the order they were sent.  That makes it great for datacenter transmission, but bad for long-distance transmission.  Its frames also carry a larger payload than standard Ethernet frames (2,112 bytes versus 1,500), and it doesn't have to do all that work of putting frames back into order, because they arrive in order.

Tell me I'm wrong!

Tell me you've put in a 10 GbE NIC and tested it and got anywhere near 10 Gb/s.  That would translate into about 1,250 MB/s, by the way.

Please tell me I'm wrong.  I really want to be.  (But I don't think I am.) 

Update: None of the people in this discussion on the NetBackup mailing list seem to think I'm wrong.  Also, I have attempted to contact the person at Neterion who left a comment, but I haven't heard back.  (I asked to talk to one of these customers who's actually getting 10 Gb speeds out of 10 GbE.)


0 #6 W. Curtis Preston 2011-02-21 04:19
@David Peterson

FWIW, real SSD throughput is far from what you're quoting. This test (www.tomshardware.com/charts/ssd-charts-2010/Fresh-state-Throughput-Read-Average,2311.html) shows that the real throughput of SSDs is less than half of that number.

And, yes, I know you have to be able to push it.

Finally, please note that this article was written almost three years ago. Things have changed a bit since then. ;)
0 #5 David Peterson 2011-02-20 14:03
10 GbE is 1,280 MB/s, but you have to have drives that can saturate it if you want to use the full 10 GbE connection. It's not a lie that it's capable of 1,280 MB/s. SSDs are only capable of 500 MB/s at this point, and that saturates 1 GbE, which is only capable of 128 MB/s. Sounds like someone hasn't RAIDed high-speed drives to be able to utilize the bandwidth of 10 GbE.
+2 #4 Will 2008-12-17 15:30
I know many people getting 900 MB/s through 10 gig. A better title would be "I can't get 10 GbE to go anywhere near rated speed".
0 #3 grammar 2007-08-13 18:03
Link aggregation and "10 GbE" are not, not, not the same thing.

I'm surprised at your performance on T2000s, though. That's really crappy overuse of the CPU. Is it really that bad? A shame. I thought those things were shiny the last time I played with them, but I was much more interested in their performance to tape and overall manageability (both of which are outstanding) than I was in their network performance.

On the related subject of "who needs an expensive 10 GbE card, we've got 802.3ad": I've been quite pleased with both the ease of configuration (a dream, as compared with Solaris 10's hackery) and performance of HP-UX's solution. It just works when you tell it to, it does aggregation and/or failover across N physical connections, and (like Sun's kit) plays nicely with however you'd like to set up your switches (including "totally ignorant, deal with the arp your own damn self").
0 #2 jfragoso 2007-08-13 07:02
I work with a company called Neterion who leads the market in 10GbE technology. We have many customers who achieve full line rate. I would love to send you some papers with real life applications. Just shoot me an e-mail.

0 #1 PaulT 2007-08-10 01:20
In most cases in the past, I'd wholly agree.

However, recent experience has shown that link aggregation on Solaris 10 can achieve 3.45 Gb/s at ~20% CPU utilisation on T2000 servers. This configuration aggregates 4 * onboard gigabit interfaces without using jumbo frames.

Achieving these kinds of figures on previous Solaris versions typically hasn't been possible. With a fair amount of network tweaking, gigabit links can be made to perform near wire speed, but anything over 1 Gb/s hasn't delivered.

Moving forward, I'd anticipate the Niagara 2 based servers with 2 * 10 GbE to deliver similarly impressive throughput results. If anyone's received their Niagara 2 servers yet, I'd be interested to see the results...
