Scaling Linux Networking Beyond 10 Gigabit

This proposal has been rejected.


One Line Summary

An analysis of the current issues with scaling large networking workloads, from the perspective of the entire kernel.


10 Gigabit networking is a fast-growing market in today’s industry. Faster platforms, lower infrastructure costs, and a growing number of advanced features have driven the migration to higher-bandwidth networking. However, as platforms keep getting faster and keep growing in CPU core counts and NUMA memory nodes, scaling networking becomes more and more challenging.

This presentation will illustrate the challenges facing high-I/O networking. It will focus on what has been done in the past year to help scale, including better NUMA locality, cacheline-aligned structures, and more efficient interrupt handling. Looking to the future, the presentation will highlight ongoing areas for improvement, such as better MSI-X interrupt affinity policies, improved NUMA scalability, and coping with ever-increasing CPU counts, all of which are necessary to efficiently drive speeds beyond 10 Gigabit.


kernel, intel, drivers, ethernet, networking




    Peter (PJ) is a senior software engineer in the LAN Access Division at Intel. He is responsible for supporting the new technologies in Intel’s next-generation LAN products, both in silicon and in the Linux kernel. He was recently the primary maintainer for ixgbe, Intel’s 10 GbE PCI Express driver, and is now focused on 40 Gigabit Ethernet research. His contributions to the kernel include the original Tx multiqueue networking API, packet socket changes allowing channel bonding to work with layer 2 protocols on individual adapters, multiple fixes to the MSI-X interrupt layer, native Data Center Bridging support in the network stack, n-tuple filter offload support in ethtool, and interrupt core updates to allow drivers to manage MSI-X CPU affinity.
