
Scalable Concurrent Hash Tables via Relativistic Programming

Talk
lpc2009-0085
Scheduled: Thursday, September 24, 2009 from 3:10 – 3:55pm in Salon AB

Excerpt

I present a new algorithm for scalable concurrent hash tables, performance results showing it to be 2-10x more scalable than the existing Linux implementation, and potential applications.

Description

Existing approaches to concurrent programming often fail to account for synchronization costs on modern shared-memory multiprocessor architectures. A new approach to concurrent programming, known as relativistic programming, can reduce or in some cases eliminate synchronization overhead on such architectures. This approach avoids the costs of inter-processor communication and memory access by permitting processors to operate from a relativistic view of memory provided by their own caches, rather than from an absolute reference frame of memory as seen by all processors. This research shows how relativistic programming techniques can provide the perceived advantages of optimistic synchronization without the wasted work caused by rollbacks and retries.
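
To make the pattern concrete, the following minimal sketch uses C11 atomics in user space to stand in for the kernel's rcu_dereference() and rcu_assign_pointer(); the struct config example and function names are illustrative only, not code from the talk, and reclamation of old versions (RCU's grace periods) is deliberately omitted. Readers take no locks, perform no atomic read-modify-write operations, and issue nothing beyond a dependency-ordered load; writers publish a fully constructed new version with a single release store.

    /* Illustrative user-space sketch of the relativistic read/publish
     * pattern; not the Linux kernel implementation discussed here. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A versioned structure that writers replace wholesale. */
    struct config {
        int threshold;
        int interval;
    };

    /* Readers and writers share only this pointer. */
    static _Atomic(struct config *) current_config;

    /* Read path: no locks, no read-modify-write operations, nothing
     * beyond a dependency-ordered load.  Each reader works from
     * whatever version its own cache has observed (the relativistic
     * view).  This plays the role of the kernel's rcu_dereference(). */
    static int read_threshold(void)
    {
        struct config *c = atomic_load_explicit(&current_config,
                                                memory_order_consume);
        return c->threshold;
    }

    /* Update path: build the new version privately, then publish it
     * with a single release store, as rcu_assign_pointer() does.
     * Old versions may be freed only after every reader that might
     * still hold them has finished (RCU's grace period), which this
     * sketch omits. */
    static void update_config(int threshold, int interval)
    {
        struct config *c = malloc(sizeof(*c));
        c->threshold = threshold;
        c->interval = interval;
        atomic_store_explicit(&current_config, c, memory_order_release);
    }

    int main(void)
    {
        update_config(10, 100);
        printf("threshold = %d\n", read_threshold());
        return 0;
    }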

Descriptions of several implementations of a concurrent hash table illustrate the differences between traditional and relativistic approaches to concurrent programming. An analysis of the fundamental performance bottlenecks in existing concurrent programming techniques, both optimistic and pessimistic, directly motivates the key ideas of relativistic programming. Benchmarks of concurrent hash tables implemented in the Linux kernel demonstrate the performance and scalability advantages of relativistic techniques over traditional synchronization. The relativistic hash table benchmarked here also makes use of a novel relativistic move operation for relocating entries between hash buckets.
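
To sketch what the read path of such a hash table can look like, the fragment below again uses user-space C11 atomics rather than the kernel code benchmarked in the talk; the bucket layout and the lookup()/insert() helpers are illustrative assumptions, and the talk's novel move operation is not reproduced here. Lookups walk a bucket chain with dependency-ordered loads only, while insertions publish a fully initialized node with one release store.

    /* Illustrative user-space sketch of a relativistic hash lookup;
     * not the Linux kernel hash table benchmarked in the talk. */
    #include <stdatomic.h>
    #include <stdio.h>

    #define NBUCKETS 64  /* illustrative table size */

    struct node {
        unsigned long key;
        int value;
        _Atomic(struct node *) next;  /* chain link, published atomically */
    };

    /* One chain head per bucket; readers and writers share these pointers. */
    static _Atomic(struct node *) buckets[NBUCKETS];

    static unsigned long hash(unsigned long key)
    {
        return key % NBUCKETS;
    }

    /* Lookup: walk the chain using dependency-ordered loads only.
     * No locks, no retries, no rollbacks on the read side. */
    static struct node *lookup(unsigned long key)
    {
        struct node *n = atomic_load_explicit(&buckets[hash(key)],
                                              memory_order_consume);
        while (n) {
            if (n->key == key)
                return n;
            n = atomic_load_explicit(&n->next, memory_order_consume);
        }
        return NULL;
    }

    /* Insert at the head of a bucket: fill in the node, then publish it
     * with one release store.  Concurrent lookups see either the old
     * chain or the new one; both views are safe.  A real table would
     * also serialize writers against each other and defer reclamation
     * on removal; both are omitted here. */
    static void insert(struct node *n)
    {
        _Atomic(struct node *) *head = &buckets[hash(n->key)];
        struct node *old = atomic_load_explicit(head, memory_order_relaxed);
        atomic_store_explicit(&n->next, old, memory_order_relaxed);
        atomic_store_explicit(head, n, memory_order_release);
    }

    int main(void)
    {
        struct node a;
        a.key = 42;
        a.value = 7;
        insert(&a);  /* insert() initializes a.next before publishing */
        struct node *found = lookup(42);
        printf("42 -> %d\n", found ? found->value : -1);
        return 0;
    }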

Tags

concurrency, scalability, performance, hash tables, data structures, relativistic programming, RCU

Speaker
