Sun 03 June 2018

The billion dollar mistake, Scala edition


Today's blog post originates from an internal presentation at my workplace and describes the paper Java and Scala’s Type Systems are Unsound by Amin and Tate. Hence the presented ideas are not really my own and all the praise goes to the paper's authors. All mistakes are, of course, mine alone.

By the end of the post, you will see how to provoke a ClassCastException at runtime without ever using a cast (or Scala's equivalent of a cast, asInstanceOf). This post shows the Scala version of the paper's argument, but it's worth mentioning that the same can be done in Java's type system (as shown in the paper).

You can find a Jupyter notebook version of the blogpost here. It requires the Jupyter Scala kernel to run.


If S is a subtype of T, any term of type S can be safely used in a context where a term of type T is expected. This subtyping relation is often written as S <: T. The subtyping relation is typically also reflexive (A <: A) and transitive (A <: B and B <: C implies A <: C), making it a preorder.

Example: Integer <: Number <: Object

Subtyping is also a form of type polymorphism (a single interface to entities of different types), namely subtype polymorphism. Example: ArrayList <: List.
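To make this concrete, here is a minimal Scala sketch of subsumption in action (the `describe` helper is made up for illustration):

```scala
// A value of a subtype can be used wherever the supertype is expected.
def describe(n: Number): String = s"a number: $n"

val i: Integer = 42
// Integer <: Number, so passing i where a Number is expected is safe: no cast needed.
assert(describe(i) == "a number: 42")
```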

Path-dependent types

Scala has path-dependent types. They allow values to have individualized types associated with them. Note that the type is associated with the value, not with the value’s type!

For example, given the following trait:

trait Box {
    type Content
    def revealContent(): Content
}
defined trait Box

and the following two values:

val box1: Box = new Box {
    type Content = Int
    def revealContent(): Int = 42
}

val box2: Box = new Box {
    type Content = Int
    def revealContent(): Int = 21 + 21
}

// Note the different types and specifically that it's not Box.Content!
val content1: box1.Content = box1.revealContent()
val content2: box2.Content = box2.revealContent()
box1: Box = $sess.cmd1Wrapper$Helper$$anon$1@387d316
box2: Box = $sess.cmd1Wrapper$Helper$$anon$2@6cdc9e37
content1: box1.Content = 42
content2: box2.Content = 42

By contrast, none of the following compiles:

val c: Box.Content = box1.revealContent()

not found: value Box
val c: Box.Content = box1.revealContent()

Compilation Failed
// Note the mix of box1 and box2!
val c2Prime: box1.Content = box2.revealContent()

type mismatch;
 found   : cmd2Wrapper.this.cmd1.wrapper.box2.Content
 required: cmd2Wrapper.this.cmd1.wrapper.box1.Content
val c2Prime: box1.Content = box2.revealContent()

Compilation Failed

Using (witness) values to reason about code

It’s possible to use values as proof that some other value has a certain property. For example, we can define a trait LowerBound[T] that reflects that the type T has a supertype M.

// T is a subtype of M
trait LowerBound[T] {
    type M >: T
}
defined trait LowerBound

Now, with the help of that value, we can write an upcast function that casts T to M, without ever using a cast:

def upcast[T](lb: LowerBound[T], t: T): lb.M = t

// Proof that it works
val intLowerBound = new LowerBound[Integer] {
    type M = Number
}
val int42: Integer = 42
val intAsNumber: Number = upcast(intLowerBound, int42)
defined function upcast
intLowerBound: AnyRef with LowerBound[Integer]{type M = Number} = $sess.cmd3Wrapper$Helper$$anon$1@306ad96a
int42: Integer = 42
intAsNumber: Number = 42

Note that it works because we state the subtyping relation M >: T and Scala verifies that the relation holds. For example, trying to state that Integer is a lower bound of String doesn’t work:

val intWithStringAsLowerBound = new LowerBound[Integer] {
    type M = String
}

overriding type M in trait LowerBound with bounds >: Integer;
 type M has incompatible type
    type M = String

Compilation Failed

Reasoning about nonsense

Now comes the fun part: reasoning about nonsense. First, we introduce a complementary trait UpperBound[U] that states that U is a subtype of M.

trait UpperBound[U] {
    type M <: U
}
defined trait UpperBound

In Scala, it’s possible for a value to implement multiple traits, hence we can have a value of type LowerBound[T] with UpperBound[U], which states the subtyping relation T <: M <: U (that’s the reason why we named the path-dependent type in both traits M: so we can express this relation).
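Note that such a combined witness type is not inherently nonsensical: whenever T really is a subtype of U, a witness is constructible by choosing a suitable M. A minimal sketch (the traits are repeated here so the snippet is self-contained):

```scala
trait LowerBound[T] { type M >: T }
trait UpperBound[U] { type M <: U }

// Integer <: Number actually holds, so choosing M = Number satisfies
// both bounds: Integer <: M and M <: Number.
val ok: LowerBound[Integer] with UpperBound[Number] =
  new LowerBound[Integer] with UpperBound[Number] { type M = Number }
assert(ok != null)
```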

Note that a type system only ever helps so much. We made the type system argue for us about certain values, but it doesn’t hinder us from expressing complete nonsense. For example, the following compiles perfectly fine:

// We take a proof `bounded` that states that String <: M <: Integer and a value
// of the bottom type String, and we raise it to the top and return an Integer
def raiseToTheTop(bounded: LowerBound[String] with UpperBound[Integer], value: String): Integer = {
    // Subtle, but: the LowerBound[String] allows the upcast (because String <: M).
    // On the other hand, the `UpperBound[Integer]` states that M <: Integer holds
    // as well, and because Scala allows subtypes as return values, we are totally fine
    // returning the (intermediate) M as Integer!
    return upcast(bounded, value)
}
defined function raiseToTheTop

Of course nothing good can come from such a function. On the other hand, one could argue that while it’s a bit sad that the type system allows such a type to be expressed, nothing bad can really happen. The function above only works because we have proof that the typing relation exists, via the bounded witness value. We can only call the function if we get hold of such a witness value. And we have seen above that it’s impossible to construct such a witness value, because Scala checks the typing relation expressed in the traits:

val proof = new LowerBound[String] with UpperBound[Integer] {
    type M = ??? // what should we put here?
}
The billion dollar mistake

Tony Hoare, the “inventor” of null, once called it his billion dollar mistake:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

And, as you might have already guessed from the title, it haunts us again. Scala has the concept of implicit nulls, meaning that a null value can take any reference type. Unfortunately for us, it also means that it can take the nonsense type LowerBound[String] with UpperBound[Integer]:

val sadness: LowerBound[String] with UpperBound[Integer] = null

// Et voilà, watch the impossible being possible
raiseToTheTop(sadness, "and that is why we can't have nice things")
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer






A ClassCastException was thrown - and we didn’t even use a single cast in our code.

As a matter of fact, we can generalize our raiseToTheTop function to coerce an arbitrary type to any type we want:

def coerce[T, U](t: T): U = {
    val bounded: LowerBound[T] with UpperBound[U] = null
    return upcast(bounded, t)
}
// Same as before
coerce[String, Integer]("and that is why we can't have nice things")
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer





Tue 15 May 2018

Regex improvements in OpenJDK 9: Less exponential backtracking

I'm trying to keep the current blog-post streak going (at least compared to previous years) by blogging about random things I observe or learn. Today's post: I found out by coincidence that OpenJDK 9 changed a detail about how regular expressions are matched. This post is not about regular expressions in general, but about regular expressions as implemented by OpenJDK's java.util.regex.Pattern.

Exponential backtracking

The post's title already spoils it: the change is related to backtracking. OpenJDK's Pattern implements quantifiers in regular expressions by trying out the different possibilities and backtracking in case there was no match. So for example, to see if a matches the pattern a?a, first the pattern aa is tried, and because it doesn't match (not enough a's in the input), the matching backtracks to the beginning and tries the pattern a next, which then succeeds.

Now let's see what happens if the input 1111a is matched against the pattern (1*)* in OpenJDK 8, as in the following code: Pattern.compile("(1*)*").matcher("1111a").matches(). In words: in a group, match zero or more 1s, and repeat that group zero or more times.
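Both examples can be reproduced directly; the match results below are the same on OpenJDK 8 and 9, only the running time differs for long inputs:

```scala
import java.util.regex.Pattern

// "a" matches "a?a": "aa" is tried first, then the matcher backtracks to just "a".
assert(Pattern.compile("a?a").matcher("a").matches())

// "1111a" never matches "(1*)*": the trailing "a" cannot be consumed.
assert(!Pattern.compile("(1*)*").matcher("1111a").matches())
```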

In the beginning, the 1* inside the group matches greedily all characters 1 in the input until the a. Then the remaining input (the a) is used as input for the remaining part of the regex, which is the group repetition. The group is repeated again, but 1* still doesn't match the a and the matching fails because not all input is used. Note that the implementation is smart enough to not try to repeat the group forever with a zero-width match for 1*.

The implementation then remembers that in the beginning, it consumed all the 1s greedily. The question is: might the regex match if the first greedy matching doesn't consume all the 1s, but all except the last one? So the implementation backtracks to the point where it consumed all but the last 1 and then tries to match the remaining regex against the input 1a. This still doesn't result in a match. That means another backtrack, and 11a is tried next. The big problem is: the input 11a will again first be matched greedily by 1*, because the remaining regex is the group repetition. And as there is still no match due to the trailing a, this matching as part of the backtracking will backtrack as well.

In other words, for every character 1 in the input, there are two choices: match and consume the character as part of the sub-expression 1*, or don't match and continue with the remaining expression (the group repetition). That results in 2^n different combinations that need to be tested, with n being the number of leading 1s in the input. There will never be a match, but that will only be known after every combination has been tested. So: exponential runtime.

This is not exactly big news. The phenomenon of catastrophic backtracking is well understood, and there are various ways to either change the regular expression to avoid the backtracking (see for example The Explosive Quantifier Trap) or to implement regexes without backtracking (see for example Regular Expression Matching Can Be Simple And Fast). It is even known as an attack under the name ReDoS and has its own OWASP entry.
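For this particular example, the rewrite is easy: (1*)* accepts exactly the same strings as plain 1*, which matches without any backtracking blowup:

```scala
import java.util.regex.Pattern

// The nested quantifier adds nothing: "(1*)*" and "1*" accept the same strings.
val safe = Pattern.compile("1*")
assert(safe.matcher("1111").matches())
assert(!safe.matcher("1111a").matches())
```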

The change in OpenJDK 9

In OpenJDK 9, an optimization was added to avoid the exponential backtracking. Whenever a greedy repetition (*, +, curly braces, etc.) doesn't match some input, the input's cursor position is memoized. Then, when the repetition is executed again during backtracking, it's checked whether it already failed to match at this cursor position, and if so, the repetition isn't tried at all (as it is known not to match).
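Stripped of all the regex machinery, the idea can be sketched roughly as follows (a hypothetical illustration, not OpenJDK's actual code):

```scala
import scala.collection.mutable

// Remember cursor positions at which a repetition already failed to match,
// and skip re-trying the repetition there during backtracking.
val failedPositions = mutable.Set.empty[Int]

def tryRepetitionAt(pos: Int, attempt: Int => Boolean): Boolean =
  if (failedPositions.contains(pos)) false // known failure: skip the attempt
  else {
    val matched = attempt(pos)
    if (!matched) failedPositions += pos // memoize the failed position
    matched
  }

var attempts = 0
// The first call actually runs the (failing) match attempt ...
assert(!tryRepetitionAt(3, _ => { attempts += 1; false }))
// ... the second call at the same position is answered from the memo.
assert(!tryRepetitionAt(3, _ => { attempts += 1; false }))
assert(attempts == 1)
```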

How does that help against exponential backtracking? Let's have a look again at the previous regex (1*)* that should match the input 1111a. First the greedy match of the 1s, then the failed attempt to match the trailing a, and then the backtracking of the first greedy match. The first backtracking attempt is with the remaining input 1a. It doesn't match, and it's memoized that this input failed. Then 11a is tried next. It also fails to match, but it also backtracks itself due to the first greedy match on the leading 11. During that backtracking, the input 1a would have to be tested again, but it is skipped because it is already memoized to have failed. Hence the backtracking is now linear instead of exponential.

Note that this optimization only works if the pattern doesn't use any backreferences.

More questions? Then study the source of OpenJDK's Pattern implementation!

Tue 01 May 2018

Dropwizard + Tenacity + k8s + Breakerbox

Today's post sketches out how one can make Breakerbox discover Dropwizard service instances inside a k8s cluster. Breakerbox is a dashboard and dynamic configuration tool for Tenacity. If you have never heard of Dropwizard or Tenacity or if you don't use Kubernetes, this post is probably very boring to you.

The first step is to instruct Breakerbox to use k8s for instance discovery. This can be done by adding the following to your Breakerbox YAML configuration file:

instanceDiscoveryClass: com.yammer.breakerbox.turbine.KubernetesInstanceDiscovery

Breakerbox's k8s instance discovery only discovers pods that are annotated with breakerbox-port. This makes sense, considering that not all your services might be using Dropwizard or that they might expose multiple ports. Hence, in the pod specs of your Dropwizard services, add the annotation:

metadata:
  annotations:
    breakerbox-port: "8080"

Note the quotes around the port, as k8s only allows string values for annotations.

If you have role-based access control in place in your k8s cluster, you might also want to add a breakerbox cluster role that contains all the permissions required to do the instance discovery. Only one permission is required and that is listing the pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: breakerbox
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]

Don't forget to add a corresponding ClusterRoleBinding as well, with a service account as subject. Then use the same service account in the pod spec of your breakerbox service.

In case something doesn't work, don't be afraid to look into the source of the k8s instance discovery. At the time of writing, it consists of merely 77 lines of code.

Sun 18 February 2018

Finding bugs in systems through formalization


Welcome to a first post in English! Today’s post is about how creating a formal specification for (parts of) a system can help find real-world bugs. What initially triggered this post was that I watched Hillel Wayne's Strange Loop talk "Tackling Concurrency Bugs with TLAplus". Having dealt with a concurrency bug at work the week before, I thought it would be the perfect opportunity to learn some TLA+ and to verify the existence of the bug with TLA+. That turned out to work so well that it was worth writing a blog post about it, hence here we are!

Overview of the problem at hand

The system in question processes various kinds of jobs. Processing one job involves executing different processing steps, possibly in parallel. Each processing step publishes status updates. A supervisor, called the progress handler, supervises the processing. Once the processing of all mandatory steps has finished, some additional action is triggered, for example updating a metric or informing other parts of the system about the completed processing.

While not all mandatory steps are completed, the job is said to be in a pending state. Once all the mandatory steps are complete, the job is in the completed state.

The issue is now that sometimes, the switch from pending to completed cannot be observed from within the progress handler.

The progress handler

First, let's have a closer look at the progress handler. It consists of the following steps:

  • Load the job from the database
  • Append the new status to the set of statuses
  • Load the job again from the database
  • Check if the new set is considered “processing is complete”. If it is, but the previous set was not, emit a “processing is now complete” message.
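The steps above can be sketched in code roughly like this (hypothetical names, with an in-memory map standing in for the database):

```scala
import scala.collection.mutable

// In-memory stand-in for the database of per-job status sets.
val db = mutable.Map.empty[String, Set[String]].withDefaultValue(Set.empty)
val mandatorySteps = Set("extract", "transform") // made-up step names

def isComplete(statuses: Set[String]): Boolean = mandatorySteps.subsetOf(statuses)

// One status message handled by the progress handler:
def handleStatus(jobId: String, newStatus: String): Boolean = {
  val before = db(jobId)                    // 1. load the job
  db(jobId) = before + newStatus            // 2. append the new status
  val after = db(jobId)                     // 3. load the job again
  !isComplete(before) && isComplete(after)  // 4. emit iff the switch happened
}

assert(!handleStatus("job-1", "extract"))  // still pending
assert(handleStatus("job-1", "transform")) // the switch is observed
```

Run sequentially, this is obviously fine; the interesting question, which the specification will answer, is what happens when several handlers interleave these steps.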

How do we now find out if the design of the progress handler is sound? One possibility would be to think hard about it. Another is to create a formal specification and check the soundness with a model checker. This post will outline how to do exactly that with the help of TLA+. Reasoning about concurrent systems is hard and very easy to get wrong. With TLA+, we can make use of computers for what they are pretty good at: trying out a lot of different steps, fast, and without errors.

For some more motivation why using TLA+ is a good idea, see Murat Demirbas's blog post "Why you should use modeling".

A bit on TLA+

TLA+ is a formal specification language that can be used to design, model, document, and verify concurrent systems. The theoretical backgrounds of TLA+ are set theory and temporal logic.

This post hopefully gives enough background on TLA+ so that readers without prior knowledge are able to understand the specification outlined in the post. Following are some additional resources for inclined readers who want to learn a bit more about TLA+:

Additionally, the presented specification will not be written in TLA+ directly but in PlusCal. PlusCal is a pseudocode-like language that translates to TLA+. It is probably a bit easier to read and write for programmers without a strong background in formal logic.


The term TLA+ will be used rather loosely in this post. When it’s mentioned that “TLA+ does/checks/…” something, it might well be that it’s actually the model checker (TLC) doing it, or that it’s a property of PlusCal.

Modelling the different parts of the system

The queue

The queue we want to model is a FIFO. Like every queue, it supports two operations: put for adding an element and take to pop an element from the queue. To keep the specification as simple as possible, we simply begin the model checking with all status messages that we want to have processed already enqueued. That leaves us with only needing to model the take operation.

EXTENDS Sequences, TLC

\* Receive a message from the queue
macro recv(queue, receiver)
begin
  await queue # <<>>;
  receiver := Head(queue);
  queue := Tail(queue);
end macro

Hopefully, this doesn’t look too surprising. On the other hand, it’s the first PlusCal code in this post, hence it’s probably worth looking at it a bit closer in detail:

  • Line comments begin with \*
  • It defines the macro recv which takes two arguments. A macro will be expanded at the call site at compile time.
  • # means unequal (alternatively, /= can be used as well)
  • <<>> is an empty tuple
  • The macro consists of three different parts:
    • wait until the queue passed as argument is not empty
    • take the head
    • set the queue to the tail elements

Let’s see if this works! We define our queue queue with two messages, another variable msg where we will store the received messages, add a bit of required boilerplate and then receive the two messages:

EXTENDS Sequences, TLC

(* --algorithm queue
variable queue = <<"first message", "second message">>,
  msg = "";

\* Receive a message from the queue
macro recv(queue, receiver)
begin
  await queue # <<>>;
  receiver := Head(queue);
  queue := Tail(queue);
end macro

begin
  recv(queue, msg);
  print msg;
  recv(queue, msg);
  print msg;

end algorithm *)

print is of course a debug utility and nothing that would have a place in a real specification. If we now translate this PlusCal to TLA+ and execute it, the output will be first message and then second message. How you are actually able to execute this specification is out of scope for this post, but fortunately Leslie Lamport explains it in his video course, in the video "Resources and Tools". The specification can be found on Github, in case you want to toy around with it.

The real progress handler, the one we want to model, is of course more complex. First of all, it not only receives one message, it rather never stops to receive messages. There can also be more than one instance of it. Of course PlusCal also provides a way to model this, in the form of loops and processes.

process handler \in 1..2
variable msg = "";
begin loop:
  while TRUE do
    recv(queue, msg);
  end while;
end process

Note that loop: is not part of the begin but defines a label. Everything within one label happens atomically, and only between labels are the model and its invariants checked. Process switches also happen between labels. TLA+ will choose an arbitrary process and execute one step, again and again, indefinitely.

If we now run this model, TLA+ will eventually produce an error: Deadlock reached. That is because TLA+ also checks for deadlocks, and eventually all our processes will wait for a new message to appear in the queue.


Now that we have successfully modelled the queue, let’s move on to Cassandra. Cassandra is used to persist the set of completed processing steps. In Cassandra, it's possible to specify a replication factor that tells Cassandra on how many nodes data should be replicated. If one writes data to only one node, Cassandra will replicate the data in the background to the number of other nodes specified in the replication factor. This means, though, that it's possible to not always read the latest data, for example when data is written to one node and then immediately read from another node before the data has been replicated. Cassandra also offers a consistency level for every query, where one can specify on how many nodes data needs to be written before a write query completes as successful (or, in the case of a read query, from how many different nodes data needs to be read).

In the blog post's model, the background replication (in other words, the replication factor) is omitted and the consistency level is modelled by taking a set of nodes for the write operation.

procedure appendProgress(writeNodes, status)
variable nodes = writeNodes;
begin P0:
  while (nodes # {}) do
    with n \in nodes do
      progress[n] := progress[n] \union {status};
      nodes := nodes \ {n};
    end with
  end while;
  return;
end procedure

A procedure is similar to a macro, but it can have labels, so a process switch in the middle of the execution of the procedure is possible. The with n \in nodes statement is executed by choosing any element out of nodes and then executing the statement’s body. This will be done for every possible execution of the statement, so for every possible element. That means that ultimately this procedure makes TLA+ check every possible combination of the order in which the progress is written to the individual nodes.

Modelling the read could be done in a similar fashion. In this specification, it’s simplified to the following:

\* Reads the progress set from the given nodes
ReadProgress(nodes) == UNION {progress[n] : n \in nodes}

What can be seen here is one of the pitfalls of extracting a system’s behaviour into a specification. The modelling of how Cassandra behaves is of course based on the author’s understanding of how Cassandra behaves. If Cassandra behaves differently for whatever reason (maybe because the author’s understanding was plain wrong, or maybe because Cassandra might have a bug itself), then the specification will not reflect how the real system behaves. In this instance, it’s assumed that when reading a set and different nodes return different sets, Cassandra will merge the sets of all nodes into one resulting set.
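The merge behaviour assumed here is easy to mirror in ordinary code: the read is just the union of the per-node progress sets (node names made up for illustration):

```scala
// Per-node progress sets, standing in for the Cassandra replicas.
val progress = Map(
  "n1" -> Set("step1"),
  "n2" -> Set("step1", "step2"),
  "n3" -> Set.empty[String]
)

// Mirrors the TLA+ definition: UNION {progress[n] : n \in nodes}
def readProgress(nodes: Set[String]): Set[String] = nodes.flatMap(progress)

assert(readProgress(Set("n1", "n3")) == Set("step1"))
assert(readProgress(Set("n1", "n2")) == Set("step1", "step2"))
```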

The final progress handler

Having modelled the queue and Cassandra, there is one final missing part: the progress handler itself. As mentioned before, it executes the following steps:

  • Wait for a status queue message. That also increases the number of unacknowledged queue messages.
  • Load the job from the database
  • Append the new status and write it to the database
  • Load the job again and check if its overall status switched from pending to completed
  • Acknowledge the queue message (mark it as processed).

For consistency reasons, we instruct Cassandra to always read and write from a majority number of nodes before an operation is considered complete. We also consider the possibility that the read and write operations use a different set of nodes. To do so, another helper is introduced to give all subsets of nodes of a given size:

\* Returns the set of all subsets of nodes with the given cardinality
NNodes(n) == {x \in SUBSET Nodes : Cardinality(x) = n}
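What NNodes enumerates can be mirrored in Scala with subsets (a sketch with made-up node names):

```scala
val nodes = Set("n1", "n2", "n3")

// Mirrors NNodes(n) == {x \in SUBSET Nodes : Cardinality(x) = n}
def nNodes(n: Int): Set[Set[String]] = nodes.subsets(n).toSet

assert(nNodes(2) == Set(Set("n1", "n2"), Set("n1", "n3"), Set("n2", "n3")))
assert(nNodes(3) == Set(nodes))
```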

That helper can then be used to declare the variables of the process that describes the progress handler:

\* Handles a progress message from the queue
fair process progressHandler \in {"handler1", "handler2"}
variables
  writeQuorumNodes \in NNodes(Quorum),
  readQuorumNodes \in NNodes(Quorum),
  secondReadQuorumNodes \in NNodes(Quorum),
  completedBefore = FALSE,
  message = "";

Once again, TLA+ will check every possible combination of read and write nodes.

The remaining part of the progress handler is pretty straightforward:

begin P0:
  while TRUE do
    recv(queue, message);
    unacked := unacked + 1;
    completedBefore := ProcessingComplete(ReadProgress(readQuorumNodes));
    call appendProgress(writeQuorumNodes, message);
    if ~completedBefore /\ ProcessingComplete(ReadProgress(secondReadQuorumNodes)) then
      \* The real progress handler would trigger some action here
      switchHappened := switchHappened + 1;
    end if;
    unacked := unacked - 1;
  end while;
end process;

As a final step, an invariant called Correctness is added to the specification. TLA+ will check that the invariant holds after every step. One invariant that should hold at all times for the progress handler is that there are either still some messages to process (in other words, the queue is not empty), or that a handler is still in the act of processing a message (the number of unacknowledged messages is not zero), or that the progress switch was observed by a handler:

Correctness == \/ queue # <<>>
               \/ unacked > 0
               \/ switchHappened > 0

With the complete specification now in place, the model can be checked. And it completes without error! The complete specification can be found on Github in case you want to check yourself.


The Correctness invariant only checks that the specification doesn’t allow an erroneous step. It doesn’t give any liveness guarantee, that is, it doesn’t guarantee that the progress handler ever processes any messages at all. To also verify that, we can add a temporal operator to the specification, such as <>[]. The <>[] operator means that the predicate that follows it is expected to become true at some point and then stay true forever. Hence, to verify that our progress handler actually does what is expected, the following property can be added to the specification:

Liveness == <>[](switchHappened > 0)

Luckily, if the model is now executed, it still completes without any error.

The bug

The fact that the model execution completes without any error creates a dilemma: the switch from pending to completed is always observed, but the starting point of this post was that sometimes the switch isn’t observed. So either the specification doesn’t model one of the involved components such as Cassandra correctly or the implementation of progress handler doesn’t follow the specification. Which of the two possibilities is it?

By adding a bit of logging to the actual implementation and staring sharply at the logs, it can be observed that on the second read, the progress handler doesn’t read back a progress step it has already seen with the first read. That should not be possible if quorum reads and writes are used, hence a first guess would be that no quorums are used in the implementation. The specification can be used to demonstrate that the progress handler requires quorums. If any of the NNodes(Quorum) in the progressHandler process is changed to NNodes(1), executing the model will reveal errors.

The implementation uses Java with the Datastax Cassandra Driver and prepared statements. The statements are created as follows:

Statement insert = QueryBuilder
    .insertInto(keyspace, columnFamily)
    // Omitted: binding expressions for the values here …

return session.prepare(insert.toString());

The bug is rather subtle, but for creating the prepared statement, the string representation of the created Statement object is used. Unfortunately, the string representation doesn’t include the Statement’s consistency level property! Changing the code to:

Statement insert = QueryBuilder
    .insertInto(keyspace, columnFamily)
    // Omitted: binding expressions for the values here …

return session
    .prepare(insert.toString())
    // Re-apply the quorum consistency level, which the toString()
    // round-trip dropped
    .setConsistencyLevel(ConsistencyLevel.QUORUM);
fixes the bug.

Proving more properties

Having a formal model for the system also makes it possible to check some more properties of it. For example, one might be interested in how many documents are processed, say for accounting purposes. The obvious place to add that is in the progress handler, when the switch from pending to completed is observed. If the switch is observed, increase a counter, done. We verified that the switch is guaranteed to be observed (if the document is processed), hence it should work fine. There is a caveat though: so far, we only checked that the switch is observed at least once; we didn't verify that it is observed exactly once, and not twice or more. Let's check that with another invariant:

NoDupSwitch == switchHappened <= 1

Unfortunately, executing this specification will result in an error. It's not guaranteed that the switch is only observed once, hence using it for increasing a counter for accounting purposes might charge a customer more than once for a single document.

Closing words

I hope that I was able to demonstrate that TLA+ is a useful tool worth adding to your toolbox. One of its downsides, that it doesn't verify real code, is also one of its upsides: one can verify designs before even writing any code. Give it a try!

Thanks to Florian Mayer for reviewing a draft of this post. All mistakes are, of course, my own.