
John Cairns

John is an engineer, architect, and mentor who focuses on extremely performance-sensitive and ultra-high-volume applications.


Introducing Conversant Disruptor

Conversant Disruptor is an ultra-low-latency mechanism for communication between threads. It supports high performance network applications as well as processing offload operations. The Conversant Disruptor is ideal for any high performance application that depends on a Queue.

When I first heard of the LMAX Disruptor from a friend, I was immediately excited. The basic idea behind it is so fundamental and insightful that it resonated like a smartly struck gong.

However, when I looked at the code, I was a disappointed kid at Christmas. Forgive me, but here was an elegant thought couched in multiple layers of over-engineering. First, where was the essential queue that is so fundamental to the Disruptor concept? Admittedly, the Disruptor is not intended to be used like a collection. Anyone who used the Disruptor to store more than one element at a time would be fundamentally misunderstanding the reason to use it. Yet a queue, backed by a ring buffer, is also the best expression of the Disruptor concept.

In addition, the Disruptor itself was overshadowed by all the complicated choices required to get started: spin-locking, preallocation, shared events. Don’t make me support your domain model to use your code.

My initial goal in implementing a Disruptor was to express it in the simpler, more elegant form of a Java BlockingQueue. The first working version took only a few hours, followed by a few days of tuning and testing, and then years of revision and continued development. Eventually I had a much simpler queue-like data structure that supported the fundamental principles of the Disruptor without forcing me to use any one domain model.

The design principles for Conversant Disruptor are as follows:

  1. Simplicity: No extra code - no domain model
  2. Queue: fast inter-thread data transfer with only two methods
  3. Ring Buffer: atomicity guaranteed by iteration of a pair of sequence numbers
  4. Batch: the efficiency of draining the entire queue as a bulk operation with a single CAS comparison

The Conversant Disruptor supports all these without introducing any additional requirements such as memory-preallocation, heavy setup and configuration, or complicated multiple shared reader scenarios.
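To make the queue-first principle concrete, here is a minimal producer-consumer sketch against the standard java.util.concurrent.BlockingQueue interface. ArrayBlockingQueue is used as a stand-in so the example is self-contained; the intent is that a Conversant Disruptor queue, which implements the same interface, could replace it on the construction line with no other changes.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) throws InterruptedException {
        // ArrayBlockingQueue as a stand-in; a Conversant Disruptor queue
        // would be constructed here instead, with no other code changes.
        BlockingQueue<Long> queue = new ArrayBlockingQueue<>(1024);

        Thread consumer = new Thread(() -> {
            long sum = 0;
            try {
                for (int i = 0; i < 1000; i++) {
                    sum += queue.take();   // take/poll: one of the two core methods
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("sum=" + sum); // 1 + 2 + ... + 1000 = 500500
        });
        consumer.start();

        for (long i = 1; i <= 1000; i++) {
            queue.put(i);                  // put/offer: the other core method
        }
        consumer.join();
    }
}
```

Because the interface is the standard one, there is no domain model to adopt: the queue is the whole API surface.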

The result is that the Conversant Disruptor is fast. Demonstrably so. The push-pull version transfers in about five nanoseconds, the multithreaded version in about twenty. This inherent speed comes mainly from simplicity and mechanical sympathy. The real work of the Disruptor is done in only two methods: the queue's offer and poll are all that is needed to build a high-speed multithreaded application. A remove method is provided to support the batch approach, but that is a side consideration for most applications.
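The batch principle can be sketched with the standard BlockingQueue bulk operation drainTo, again with ArrayBlockingQueue standing in for a Conversant Disruptor queue. Note that the single-CAS drain is a property of the Disruptor's implementation, not of this stand-in; the call pattern is the same either way.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchDrain {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(64);
        for (int i = 0; i < 5; i++) {
            queue.offer("event-" + i);
        }

        // Drain everything currently queued in one bulk call, then
        // process the batch without touching the shared structure again.
        List<String> batch = new ArrayList<>();
        int drained = queue.drainTo(batch);

        System.out.println(drained + " drained, queue size now " + queue.size());
        // prints: 5 drained, queue size now 0
    }
}
```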

Is LMAX Disruptor the same as Conversant Disruptor? No. Is Conversant Disruptor some sort of fork of LMAX Disruptor? No. I don’t consider them even to be in competition with each other. They are fundamentally different in design and implementation, but they share the common theme of mechanical sympathy. For those that prefer the domain model provided by LMAX Disruptor it makes sense to use their implementation. For the common use case where one would prefer to use a Java BlockingQueue as a fundamental data structure at the heart of an application, Conversant Disruptor works best.

Conversant Disruptor drops in to work with Netty and can provide existing network applications with order-of-magnitude speedups from a one-line change. This was originally our use of the Disruptor at Conversant, but we have gone on to use it in logging, event sourcing, and web serving.

For logging purposes, Conversant Disruptor is great for thread offloading. Most log implementations only require a single processor to handle the log output. This scenario works very well with Conversant Disruptor.
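A minimal sketch of that thread-offload pattern, assuming only the standard BlockingQueue API: application threads enqueue log lines and move on, while a single writer thread owns all output. ArrayBlockingQueue stands in for the Disruptor queue here, and the POISON sentinel is just an illustrative shutdown convention, not part of any library.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LogOffload {
    private static final String POISON = "__SHUTDOWN__";

    public static void main(String[] args) throws InterruptedException {
        // Stand-in queue; a Conversant Disruptor queue would replace this line.
        BlockingQueue<String> logQueue = new ArrayBlockingQueue<>(4096);

        // A single consumer thread owns all log output.
        Thread writer = new Thread(() -> {
            try {
                String line;
                while (!(line = logQueue.take()).equals(POISON)) {
                    System.out.println(line);  // in practice, write to a file
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        // Application threads enqueue and continue without blocking on I/O.
        for (int i = 0; i < 3; i++) {
            logQueue.put("INFO request " + i + " handled");
        }
        logQueue.put(POISON);
        writer.join();
    }
}
```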

The Conversant Disruptor is a fast low-overhead mechanism for communicating between threads. Any Java application that depends on BlockingQueue or Queue strategies may immediately introduce Conversant Disruptor and realize the performance and memory benefits.

To try it out yourself, clone our repo on GitHub.