Achieving Better Out-of-the-box Network Scaling

This proposal has been rejected.


One Line Summary

This is a discussion of how Linux needs to scale better out-of-the-box with large I/O network devices.


As servers continue to add more cores and more NUMA nodes, scaling high-bandwidth network devices, such as 10 Gigabit and 40 Gigabit Ethernet, becomes more of a challenge.

Today, maximizing performance requires extensive hand-tuning and intricate driver hacks to get NUMA locality, interrupt alignment, and flow affinity right. Linux needs to become more intelligent across these subsystems so that it scales well without users having to apply such extensive system tweaks.
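To make the problem concrete, here is an illustrative sketch of the kind of per-queue hand-tuning this proposal argues should become unnecessary; the IRQ number, interface name, and CPU masks are hypothetical examples, and real values depend on the system (see /proc/interrupts and Documentation/networking/scaling.txt):

```shell
# Pin the interrupt for one of the NIC's queues to CPU 0 (bitmask 0x1),
# so the interrupt fires on the NUMA node closest to the device.
echo 1 > /proc/irq/100/smp_affinity

# Steer receive processing for rx queue 0 to CPUs 0-3 (RPS, bitmask 0xf).
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# Restrict which CPUs may transmit on tx queue 0 (XPS, bitmask 0x1),
# keeping transmit completions local to the submitting CPU.
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
```

Multiplied across dozens of queues and interrupts, and redone after every reboot or topology change, this is exactly the manual burden the kernel should shoulder automatically.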

This discussion will highlight work that has already started to solve these scaling issues. It will then outline what the community needs to focus on to improve scaling performance for the future.


kernel, intel, drivers, ethernet, networking




    Peter (PJ) is a senior software engineer in the LAN Access Division at Intel. He is responsible for enabling the new technologies in Intel’s next-generation LAN products, both in silicon and in the Linux kernel. He was recently the primary maintainer for ixgbe, Intel’s 10 GbE PCI Express driver, and is now focused on 40 Gigabit Ethernet research. His contributions to the kernel include the original Tx multiqueue networking API, packet socket changes allowing channel bonding to work with layer 2 protocols on individual adapters, multiple fixes to the MSI-X interrupt layer, native Data Center Bridging support in the network stack, n-tuple filter offload support in ethtool, and interrupt core updates allowing drivers to manage MSI-X CPU affinity.
