KVM I/O Performance on Large Storage Systems
This proposal has been rejected.
One Line Summary
An examination of KVM I/O performance in three large storage environments (direct-attached storage with multiple disk arrays, remote storage, and a typical storage cloud), identification of the key performance issues and attributes in each environment, and our recent results in addressing those issues and optimizing KVM I/O performance in each case.
Abstract
In December 2006, Linus Torvalds announced that new versions of the Linux kernel would include the virtualization tool known as KVM (Kernel-based Virtual Machine). Since then, KVM has risen quickly to prominence, thanks to the inherent advantages of kernel-based virtualization. As in any virtualized environment, I/O performance represents one of the toughest challenges for KVM. Since KVM's introduction in 2006, much of the focus has been on network performance, and the latest technologies, such as vhost-net and SR-IOV, have brought KVM network performance to near line speed. By contrast, KVM storage I/O performance has not received the same intense focus, especially on large storage systems.
In this presentation, we will examine KVM block I/O performance in three large storage environments, using the open-source Flexible File System Benchmark (FFSB) to generate I/O workloads. Each of these storage environments has its own performance characteristics and bottlenecks, and thus presents its own unique challenge to KVM in terms of storage I/O performance.
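To make these workloads concrete, the sketch below shows roughly what an FFSB profile looks like. The mount point, file counts, and 4:1 read/write mix are illustrative placeholders rather than the profiles used in our tests, and directive names can vary between FFSB versions.

    time=300
    alignio=1

    [filesystem0]
        location=/mnt/test
        num_files=1024
        min_filesize=1048576
        max_filesize=1048576
        reuse=1
    [end0]

    [threadgroup0]
        num_threads=16
        read_weight=4
        write_weight=1
        read_size=4096
        read_blocksize=4096
        write_size=4096
        write_blocksize=4096
    [end0]

A profile along these lines, saved as (for example) mixed-io.ffsb, would be run with "ffsb mixed-io.ffsb" both on bare metal and inside a guest to compare the resulting throughput.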
First, we will take a look at how well I/O operations to a large, direct-attached storage system with multiple disk arrays can be supported in a KVM guest. We will discuss how such I/O operations are handled by the KVM block I/O architecture, identify the performance issues that we encountered, and discuss our recent results in addressing those issues to achieve higher throughput and lower virtual CPU usage in the KVM guest.
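As a concrete point of reference, the sketch below shows a hypothetical qemu-kvm invocation for this kind of setup; the device path, memory size, and vCPU count are placeholders, not our test configuration. With cache=none, QEMU opens the backing device with O_DIRECT and bypasses the host page cache, while aio=native submits requests through the Linux AIO interface; both settings figure prominently on this I/O path.

    # Hypothetical guest launch against a direct-attached disk array.
    # /dev/sdb, the memory size, and the vCPU count are placeholders.
    qemu-kvm -m 4096 -smp 4 \
        -drive file=guest-root.img,if=virtio \
        -drive file=/dev/sdb,if=virtio,cache=none,aio=native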
We will next consider a remote storage system environment where I/O operations in the KVM guest are supported over the Network File System (NFS) to a remote, large storage system. This remote storage is managed by a separate physical storage node, which is connected to the KVM host through a private 10-Gbps Ethernet network. We will discuss the performance issues we found in this environment and the available solutions to resolve those issues.
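For illustration, a host-side setup along these lines might look like the sketch below; the server name, export path, and mount options are assumptions, not our actual configuration. The guest's data disk is backed by an image file on the NFS mount, so every block request the guest issues ultimately becomes NFS traffic on the 10-Gbps link.

    # Illustrative host-side setup; storage-node, /export/images, and the
    # mount options are assumptions. The guest disk image lives on the mount.
    mount -t nfs -o tcp,hard,rsize=65536,wsize=65536 \
        storage-node:/export/images /mnt/nfs

    qemu-kvm -m 4096 -smp 4 \
        -drive file=/mnt/nfs/guest-data.img,if=virtio,cache=none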
Finally, we will examine how multiple KVM guests can function as “interface nodes” to support I/O requests from multiple customers in a typical cloud environment. The storage stack in this cloud environment includes IBM General Parallel File System (GPFS) and network file systems such as NFS and CIFS (Common Internet File System). We will analyze the resource consumption of KVM guests, discuss the potential system performance bottlenecks, and demonstrate the performance and scalability advantages of KVM in this cloud environment.
In summary, we hope this presentation will give the audience a good sense of where KVM is in terms of I/O performance in several large storage environments, what the performance issues are, and our recent work in addressing those issues.
Tags
KVM, performance, I/O virtualization, block I/O, direct-attached storage, remote storage, cloud, NFS, CIFS, GPFS
Speakers
-
Stefan Hajnoczi
IBM Linux Technology Center
Biography
Stefan Hajnoczi joined the IBM Linux Technology Center in 2010 where he works on QEMU/KVM and Linux virtualization. He previously worked on cross-platform virtualization at Transitive before it was acquired in 2008.
Stefan is also involved in the Etherboot Project and works on the gPXE network bootloader, which provides an open source PXE implementation and takes network booting a step further with HTTP, iSCSI, and ATA-over-Ethernet support.
-
Khoa Huynh
IBM
Biography
Khoa Huynh joined IBM in 1989, where he first worked on OS/2 in the areas of kernel memory management, remote client management, and system performance. He joined IBM's Linux Technology Center in 2001, where he led the Linux defect support team working with major Linux distributors, and later joined the virtualization development team, contributing to Xen's full virtualization support for Windows. Khoa is currently on the Linux Performance team, focusing on the performance of virtualization technologies such as IBM PowerVM, Xen, and KVM. Khoa also holds a doctorate in Computer Science. His areas of interest include virtualization, system architectures, and cloud computing.
-
Andrew Theurer
IBM
Biography
Andrew Theurer is a member of the IBM Linux Technology Center, where he conducts performance analysis of Linux and related technologies. He currently focuses on KVM performance and scalability.