Java Concurrency Made Easy

About the motivation for Project Loom, with some boring concurrent code as a bonus.


If you have ever written a concurrent application, you will agree that concurrency is not a piece of cake in Java. Although there are multiple tools for the job, none of them is particularly convenient for programmers.

This is about to change with Project Loom! Once it is integrated into the JDK, we will finally be able to write simple, even boring code that deals with concurrency in a way optimized for programmers' convenience as well as for machine performance.

Simply put, Loom’s goal is to make it easier to write concurrent programs in the so-called synchronous style, which is more familiar and comprehensible to humans than asynchronous programming.

Concurrency vs parallelism

Concurrency is about dealing with many different tasks competing for resources, while parallelism is about cooperating on a single task by chopping it into pieces that can run in parallel.

Project Loom focuses on concurrency.

The problem with threads

Currently, Java threads are mapped one-to-one to operating system threads (OS threads). If your server application uses a thread per connection, the number of threads limits the level of concurrency. That has a negative effect on throughput: we say that the application does not scale well.
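
Consider a minimal sketch of the thread-per-connection pattern (the handleConnection method is a hypothetical stand-in for reading a request and writing a response). Every accepted connection ties up one OS thread for its whole lifetime, so the thread count caps the number of concurrent connections:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {

  public static void main(String[] args) throws IOException {
    try (var serverSocket = new ServerSocket(8080)) {
      while (true) {
        Socket connection = serverSocket.accept();
        // One OS thread per connection: a few thousand connections
        // and the machine runs out of threads.
        new Thread(() -> handleConnection(connection)).start();
      }
    }
  }

  // Hypothetical handler doing blocking reads and writes.
  static void handleConnection(Socket connection) {
    try (connection) {
      // ... read the request, write the response ...
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}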

Various techniques have been invented to address this problem. You might be familiar with thread pools and with CompletableFuture, the Java construct that enables asynchronous programming.

Unfortunately, the former approach has severe issues, such as leaking ThreadLocals and problematic cancellation, while the latter is inconvenient for programmers.

Moreover, with thread pools the number of threads can still be limiting when tasks perform blocking operations (waiting), because each thread is fully dedicated to its task and cannot be reclaimed by the platform while the task waits.
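
Here is a minimal sketch of that limitation, with blockingCall standing in for any blocking operation such as a database query. A fixed pool of 10 threads makes progress on at most 10 tasks at a time, even though each task spends almost all of its time just waiting:

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolExhaustion {

  public static void main(String[] args) throws InterruptedException {
    var pool = Executors.newFixedThreadPool(10);
    for (int i = 0; i < 1_000; i++) {
      pool.submit(PoolExhaustion::blockingCall);
    }
    // Only 10 tasks are in flight at any moment; the other 990
    // wait in the queue while the pool threads sit blocked.
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }

  // Stand-in for a blocking operation, e.g. a database call.
  static void blockingCall() {
    try {
      Thread.sleep(1_000);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}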

Asynchronous programming removes threads as the limiting factor; however, it makes the code much more complicated and hard to read and debug.

Virtual threads

Some languages solve this problem with the async/await pattern, which makes it possible to write asynchronous code in a synchronous style (here in JavaScript):

// Promise-chaining style
function wget_1(url) {
  fetch(url)
    .then(response => response.json())
    .then(data => console.log(data))
}

// The same logic written with async/await
async function wget_2(url) {
  let response = await fetch(url)
  let data = await response.json()
  console.log(data)
}

Loom offers virtual threads, which make it possible to structure concurrent code in a familiar and convenient way without actually having to learn any new concepts. With virtual threads, the code is synchronous, yet no OS thread gets blocked: when a virtual thread blocks on I/O or synchronization, it is suspended and its underlying OS thread is freed to do other work.

When programming with CompletableFuture, you chain all those thenApply, thenCompose, and thenRun calls just to avoid calling get. With virtual threads, you can simply call get, because there is no longer any need to deal with the various stages of a CompletableFuture. This makes programming much simpler.

Virtual threads are multiplexed over a small pool of OS threads, the so-called carrier threads. User code does not need to know anything about the scheduling that happens under the hood.
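
You can peek at that multiplexing yourself. A rough sketch, using the newVirtualThreadExecutor factory shown later in this article: in current Loom builds, the toString of a virtual thread typically includes the carrier thread it is mounted on, so a hundred virtual threads reveal only a handful of carrier workers:

import java.util.concurrent.Executors;

public class CarrierThreads {

  public static void main(String[] args) {
    try (var executor = Executors.newVirtualThreadExecutor()) {
      for (int i = 0; i < 100; i++) {
        executor.submit(() -> {
          // Typically prints something like
          // VirtualThread[#42]/runnable@ForkJoinPool-1-worker-3
          System.out.println(Thread.currentThread());
        });
      }
    } // close() shuts down the executor and waits for the tasks to finish
  }
}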

Virtual threads vs user-mode threads vs fibers vs coroutines

All these terms mean basically the same thing: lightweight threads scheduled by the platform (the JVM) rather than by the operating system. They are not mapped one-to-one to OS threads, which makes them cheap to create and switch.

While a standard computer can work efficiently with at most a few thousand OS threads, millions of virtual threads can be created with no significant overhead.
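
As a rough sketch of that scale (again using the newVirtualThreadExecutor factory from the example below), a million sleeping tasks are no problem for virtual threads, whereas a million OS threads would exhaust memory on most machines:

import java.util.concurrent.Executors;

public class MillionVirtualThreads {

  public static void main(String[] args) {
    try (var executor = Executors.newVirtualThreadExecutor()) {
      for (int i = 0; i < 1_000_000; i++) {
        executor.submit(() -> {
          try {
            // Each virtual thread blocks for a second; its carrier
            // OS thread is released for the duration of the sleep.
            Thread.sleep(1_000);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        });
      }
    } // close() waits for all one million tasks to finish
  }
}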

Different languages may use different names, but the idea is the same. In this text, we will stick with virtual threads, as Loom calls them.

Show me some code!

The best thing about Project Loom is that one does not actually have to learn (almost) anything new. In fact, it is more about unlearning: several concurrency constructs will become obsolete, although they will keep working in the same old way.

Download the Loom early-access binaries or build the JDK from source.

Both OS threads and virtual threads are instances of java.lang.Thread, and we can use our old friend Executors:

try (var executor = Executors
          .newVirtualThreadExecutor()) {

  // Each submitted task runs in its own virtual thread;
  // the blocking calls suspend the virtual thread, not an OS thread.
  res = executor.submit(() -> {
    var v1 = blockingOp1();
    var v2 = blockingOp2(v1);
    var v3 = blockingOp3(v2);
    return v3;
  })
  .get();
} catch (Exception e) {
  // get() throws InterruptedException / ExecutionException
}

That’s it! Except for the factory method newVirtualThreadExecutor the code is practically the same you would write when working with threads without Loom.

Compare it with functionally equivalent code that uses CompletableFuture:

try (var executor = Executors
          .newCachedThreadPool()) {

  // Assuming blockingOp1..3 are instance methods of the enclosing class.
  res = CompletableFuture
    .supplyAsync(this::blockingOp1, executor)
    .thenApply(this::blockingOp2)
    .thenApply(this::blockingOp3)
    .get();
} catch (Exception e) {
  // ...
}

It is time to unlearn this exhausting kind of programming...

Limitations

At the time of writing, there are still some limitations on what virtual threads can do. These restrictions are expected to be resolved by the time Loom is released as part of the JDK.

Conclusion

Project Loom is one of the most exciting JDK projects.

Loom will remove much of the burden Java programmers face when dealing with concurrency. It will make it easy to write scalable concurrent applications in Java, such as web or database servers.

Try it today and see for yourself.

Happy concurring!