Sun 03 June 2018

The billion dollar mistake, Scala edition


Today's blog post originates from an internal presentation at my workplace and describes the paper Java and Scala’s Type Systems are Unsound by Amin and Tate. Hence the presented ideas are not really my own and all the praise goes to the paper's authors. All mistakes are, of course, mine alone.

By the end of the post, you will see how to provoke a ClassCastException at runtime without ever using a cast (or Scala's equivalent of a cast, asInstanceOf). This post shows the Scala version of the paper, but it's worth mentioning that the same can be done in Java's type system (as shown in the paper).

You can find a Jupyter notebook version of this blog post here. It requires the Jupyter Scala kernel to run.


If S is a subtype of T, any term of type S can be safely used in a context where a term of type T is expected. This subtyping relation is often written as S <: T. Subtyping is typically also reflexive (A <: A) and transitive (A <: B and B <: C implies A <: C), making it a preorder.

Example: Integer <: Number <: Object

Subtyping is also a form of type polymorphism (a single interface to entities of different types), namely subtype polymorphism. Example: ArrayList <: List.
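To make the subsumption concrete, here is a minimal Scala sketch (the method name describe is made up for illustration): a value of the subtype can be passed wherever the supertype is expected.

```scala
// A value of a subtype can be used wherever the supertype is expected.
def describe(n: Number): String = s"a number: $n"

val i: Integer = 42          // Integer <: Number <: Object
val asNumber: Number = i     // the upcast happens implicitly
describe(i)                  // also fine: an Integer is usable as a Number
```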

Path-dependent types

Scala has path-dependent types. They allow values to have individualized types associated with them. Note that the type is associated with the value, not with the value’s type!

For example, given the following trait:

trait Box {
    type Content
    def revealContent(): Content
}
defined trait Box

and the following two values:

val box1: Box = new Box {
    type Content = Int
    def revealContent(): Int = 42
}

val box2: Box = new Box {
    type Content = Int
    def revealContent(): Int = 21 + 21
}

// Note the different types and specifically that it's not Box.Content!
val content1: box1.Content = box1.revealContent()
val content2: box2.Content = box2.revealContent()
box1: Box = $sess.cmd1Wrapper$Helper$$anon$1@387d316
box2: Box = $sess.cmd1Wrapper$Helper$$anon$2@6cdc9e37
content1: box1.Content = 42
content2: box2.Content = 42

With these definitions in place, none of the following compiles:

val c: Box.Content = box1.revealContent()
// not found: value Box
// Compilation Failed

// Note the mix of box1 and box2!
val c2Prime: box1.Content = box2.revealContent()
// type mismatch;
//  found   : box2.Content
//  required: box1.Content
// Compilation Failed

Using (witness) values to reason about code

It’s possible to use values as proof that some other value has a certain property. For example, we can define a trait LowerBound[T] that records that type T has a supertype M.

// T is a subtype of M
trait LowerBound[T] {
    type M >: T
}
defined trait LowerBound

Now, with the help of that value, we can write an upcast function that casts T to M, without ever using a cast:

def upcast[T](lb: LowerBound[T], t: T): lb.M = t

// Proof that it works
val intLowerBound = new LowerBound[Integer] {
    type M = Number
}
val int42: Integer = 42
val intAsNumber: Number = upcast(intLowerBound, int42)
defined function upcast
intLowerBound: AnyRef with LowerBound[Integer]{type M = Number} = $sess.cmd3Wrapper$Helper$$anon$1@306ad96a
int42: Integer = 42
intAsNumber: Number = 42

Note that it works because we state the subtyping relation M >: T and Scala verifies that the relation holds. For example, trying to state that Integer is a lower bound of String doesn’t work:

val intWithStringAsLowerBound = new LowerBound[Integer] {
    type M = String
}
// overriding type M in trait LowerBound with bounds >: Integer;
//  type M has incompatible type
// Compilation Failed

Reasoning about nonsense

Now comes the fun part: reasoning about nonsense. First, we introduce a complementary trait UpperBound[U] that states that U is a subtype of M.

trait UpperBound[U] {
    type M <: U
}
defined trait UpperBound

In Scala, a value can mix in multiple traits, hence we can have a value of type LowerBound[T] with UpperBound[U], which states the subtyping relation T <: M <: U (that’s the reason why the path-dependent type is named M in both traits: so we can express this relation).
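Before looking at the nonsensical case, here is a sketch of what a *legitimate* combination looks like. The traits and upcast are re-declared so the snippet is self-contained, and Number is chosen as the upper bound purely for illustration; the witness compiles because Integer <: Number actually holds, with M = Number in the middle.

```scala
trait LowerBound[T] { type M >: T }
trait UpperBound[U] { type M <: U }

// A valid witness: Integer <: M <: Number holds with M = Number
val valid: LowerBound[Integer] with UpperBound[Number] =
  new LowerBound[Integer] with UpperBound[Number] {
    type M = Number
  }

def upcast[T](lb: LowerBound[T], t: T): lb.M = t

// valid.M is abstract here, but known to sit between Integer and Number,
// so the result can safely be used as a Number
val n: Number = upcast(valid, Integer.valueOf(1))
```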

Note that a type system always only helps so much. We made the type system argue for us about certain values, but the type system doesn’t hinder us from expressing complete nonsense. For example, the following compiles perfectly fine:

// We take a proof `bounded` that states that String <: M <: Integer and a value of
// the lower-bound type String, and we will raise it to the top and return an Integer
def raiseToTheTop(bounded: LowerBound[String] with UpperBound[Integer], value: String): Integer = {
    // Subtle, but: the LowerBound[String] allows the upcast (because String <: M).
    // On the other hand, the `UpperBound[Integer]` states that M <: Integer holds
    // as well, and because Scala allows subtypes as return values, we are totally fine
    // returning the (intermediate) M as Integer!
    return upcast(bounded, value)
}
defined function raiseToTheTop

Of course nothing good can come from such a function. On the other hand, one could argue that while it’s a bit sad that the type system allows expressing such a type, nothing bad can really happen. The function above only works because we have proof that the typing relation holds, via the bounded witness value. We can only call the function if we get hold of such a witness value. And we have seen above that it’s impossible to construct one, because Scala checks the typing relation expressed in the traits:

val proof = new LowerBound[String] with UpperBound[Integer] {
    type M = ??? // what should we put here?
}

The billion dollar mistake

Tony Hoare, the “inventor” of null, once called it his billion dollar mistake:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing > the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

And, as you might have already guessed from the title, it haunts us again. Scala has the concept of implicit nulls, meaning that a null value can take any type. Unfortunately for us, it also means that it can take the nonsense type LowerBound[String] with UpperBound[Integer]:

val sadness: LowerBound[String] with UpperBound[Integer] = null

// Et voilà, watch the impossible being possible
raiseToTheTop(sadness, "and that is why we can't have nice things")
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
A ClassCastException was thrown - and we didn’t even use a single cast in our code.

As a matter of fact, we can generalize our raiseToTheTop function to coerce an arbitrary type to any type we want:

def coerce[T, U](t: T): U = {
    val bounded: LowerBound[T] with UpperBound[U] = null
    return upcast(bounded, t)
}

// Same as before
coerce[String, Integer]("and that is why we can't have nice things")
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
Tue 15 May 2018

Regex improvements in OpenJDK 9: Less exponential backtracking

I'm trying to keep the current streak of blog posts going (at least compared to the last years) by blogging about random things I observe or learn. Today's post: I found out by coincidence that OpenJDK 9 changed a detail about how regular expressions are matched. The post is not about regular expressions in general, but about regular expressions as implemented by OpenJDK's java.util.regex.Pattern.

Exponential backtracking

The post's title already spoils it: the change is related to backtracking. OpenJDK's Pattern implements quantifiers in regular expressions by trying out the different possibilities and backtracking in case there was no match. For example, to check whether a matches the pattern a?a, first the pattern aa is tried, and because it doesn't match (not enough a's in the input), the matcher backtracks to the beginning and tries the pattern a next, which then succeeds.

Now let's see what happens if the input 1111a is matched against the pattern (1*)* in OpenJDK 8, as in the following code: Pattern.compile("(1*)*").matcher("1111a").matches(). In words: in a group, match zero or more 1s, and repeat that group zero or more times.

In the beginning, the 1* inside the group matches greedily all characters 1 in the input until the a. Then the remaining input (the a) is used as input for the remaining part of the regex, which is the group repetition. The group is repeated again, but 1* still doesn't match the a and the matching fails because not all input is used. Note that the implementation is smart enough to not try to repeat the group forever with a zero-width match for 1*.

The implementation then remembers that in the beginning, it consumed all the 1s greedily. The question is: might the regex match if the first greedy run doesn't consume all the 1s, but all except the last one? So the implementation backtracks to the point where it consumed all but the last 1 and then tries to match the remaining regex against the input 1a. This still doesn't result in a match. That means another backtrack, and 11a is tried next. The big problem is now: the input 11a will again first be matched greedily by 1*, because the remaining regex starts with the group repetition. And as there is still no match due to the trailing a, this matching attempt will backtrack as well.

In other words, for every character 1 in the input, there are two choices: match and consume the character as part of the sub-expression 1*, or don't match and continue with the remaining expression (the group repetition). That results in 2^n different combinations that need to be tested, with n being the number of leading 1s in the input. There will never be a match, but that will only be known after every combination has been tested. So, exponential runtime.
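The blow-up can be reproduced with a toy matcher for the pattern (1*)* (a sketch of the same backtracking strategy, not OpenJDK's actual implementation): every call counts as one matching attempt, and for n leading 1s the attempt count comes out as exactly 2^n.

```scala
// Toy backtracking matcher for the pattern (1*)*, anchored at both ends.
var attempts = 0

def naiveMatch(input: String, pos: Int): Boolean = {
  attempts += 1
  if (pos == input.length) return true      // all input consumed: match
  // the inner 1* greedily consumes as many 1s as possible ...
  var end = pos
  while (end < input.length && input(end) == '1') end += 1
  // ... and on failure gives back one 1 at a time (backtracking)
  var split = end
  while (split > pos) {
    if (naiveMatch(input, split)) return true
    split -= 1
  }
  false
}

val matched = naiveMatch("1111a", 0)   // false, after 2^4 = 16 attempts
```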

This is not exactly big news. The phenomenon of catastrophic backtracking is well understood, and there are various ways how either the regular expression can be changed to avoid the backtracking (see for example The Explosive Quantifier Trap) or how regexes can be implemented without backtracking (see for example Regular Expression Matching Can Be Simple And Fast). It is even known as an attack under the name ReDoS and has its own OWASP entry.

The change in OpenJDK 9

In OpenJDK 9, an optimization was added to avoid the exponential backtracking. Whenever a greedy repetition (*, +, curly braces, etc.) doesn't match some input, the input's cursor position is memoized. When the repetition is executed again during backtracking, it is first checked whether it already failed to match at this cursor position, and if so, the repetition isn't tried at all (as it is known not to match).

How does that help against exponential backtracking? Let's look again at the previous regex (1*)* matched against the input 1111a. First the greedy match of the 1s, then the failed attempt to match the a, and then the backtracking of the first greedy match. The first backtracking attempt is with the remaining input 1a. It doesn't match, and it's memoized that this input failed. Then 11a is tried next. It also fails to match, but while doing so it backtracks itself, due to the greedy match on the leading 11. During that backtracking, the remaining input 1a would have to be tested again, but it is skipped because it is memoized to have failed. Hence each remaining input is fully matched at most once, and the backtracking is now linear instead of exponential.
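The memoization can be bolted onto the toy matcher from before (again just a sketch, not OpenJDK's code): failed cursor positions are recorded and never retried, which brings the attempt count for n leading 1s down to n + 1.

```scala
import scala.collection.mutable

var attempts = 0
val failedAt = mutable.Set.empty[Int]   // memo: positions that already failed

def memoMatch(input: String, pos: Int): Boolean = {
  if (failedAt(pos)) return false       // seen before: skip the whole subtree
  attempts += 1
  if (pos == input.length) return true
  var end = pos
  while (end < input.length && input(end) == '1') end += 1
  var split = end
  while (split > pos) {
    if (memoMatch(input, split)) return true
    split -= 1
  }
  failedAt += pos                        // remember the failure
  false
}

val matched = memoMatch("1111a", 0)      // still false, but only 5 attempts
```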

Note that this optimization only works if the pattern doesn't use any backreferences.

More questions? Then study the source of OpenJDK's Pattern implementation!

Tue 01 May 2018

Dropwizard + Tenacity + k8s + Breakerbox

Today's post sketches out how one can make Breakerbox discover Dropwizard service instances inside a k8s cluster. Breakerbox is a dashboard and dynamic configuration tool for Tenacity. If you have never heard of Dropwizard or Tenacity or if you don't use Kubernetes, this post is probably very boring to you.

The first step is to instruct breakerbox to use k8s for instance discovery. This can be done by adding the following to your breakerbox YAML configuration file:

instanceDiscoveryClass: com.yammer.breakerbox.turbine.KubernetesInstanceDiscovery

Breakerbox's k8s instance discovery only discovers pods that are annotated with breakerbox-port. That makes sense, considering that not all of your services might be using Dropwizard, or that they might expose multiple ports. Hence, add the following annotation to the pod spec of your Dropwizard services:

metadata:
  annotations:
    breakerbox-port: "8080"

Note the quotes around the port, as k8s only allows string values for annotations.

If you have role-based access control in place in your k8s cluster, you might also want to add a breakerbox cluster role that contains all the permissions required for the instance discovery. Only one permission is required, and that is listing pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: breakerbox
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]

Don't forget to add a corresponding ClusterRoleBinding as well, with a service account as subject. Then use the same service account in the pod spec of your breakerbox service.

In case something doesn't work, don't be afraid to look into the source of the k8s instance discovery. At the time of writing, it consists of merely 77 lines of code.

Sun 18 February 2018

Finding bugs in systems through formalization


Welcome to a first post in English! Today’s post is about how creating a formal specification for (parts of) a system can help find real-world bugs. What initially triggered this post was that I watched Hillel Wayne's Strange Loop talk "Tackling Concurrency Bugs with TLAplus". Having dealt with a concurrency bug at work the week before, I thought it would be the perfect opportunity to learn some TLA+ and to verify the existence of the bug with TLA+. That turned out to work so well that it was worth writing a blog post about it, hence here we are!

Overview of the problem at hand

The system in question processes various kinds of jobs. Processing one job involves executing different steps, possibly in parallel. Each processing step publishes status updates. A supervisor, called the progress handler, supervises the processing. Once the processing of all mandatory steps has finished, some additional action is triggered, for example updating a metric or informing other parts of the system about the completed processing.

As long as not all mandatory steps are complete, the job is said to be in the pending state. Once all mandatory steps are complete, the job is in the completed state.

The issue is now that sometimes, the switch from pending to completed cannot be observed from within the progress handler.

The progress handler

First, let's have a closer look at the progress handler. It consists of the following steps:

  • Load the job from the database
  • Append the new status to the set of statuses and write it back
  • Load the job again from the database
  • Check if the new set of statuses is considered “processing is complete”. If it is, but the previous set was not, emit a “processing is now complete” message.

How do we now find out whether the design of the progress handler is sound? One possibility is to think hard about it. Another is to create a formal specification and then check the soundness with a model checker. This post will outline how to do exactly that with the help of TLA+. Reasoning about concurrent systems is hard and very easy to get wrong. With TLA+, we can make use of computers for what they are pretty good at: trying out a lot of different steps, fast, and without errors.

For some more motivation why using TLA+ is a good idea, see Murat Demirbas's blog post "Why you should use modeling".

A bit on TLA+

TLA+ is a formal specification language that can be used to design, model, document, and verify concurrent systems. The theoretical backgrounds of TLA+ are set theory and temporal logic.

This post hopefully gives enough background on TLA+ so that readers without prior knowledge are able to understand the specification outlined in the post.

Additionally, the presented specification will not be written in TLA+ directly but in PlusCal. PlusCal is a pseudocode-like language that translates to TLA+. It is probably a bit easier to read and write for programmers without a strong background in formal logic.


The term TLA+ will be used rather sloppily in this post. When it’s mentioned that “TLA+ does/checks/…”, it might well be that it’s actually the model checker (TLC) that does it, or that it’s a property of PlusCal.

Modelling the different parts of the system

The queue

The queue we want to model is a FIFO. Like every queue, it supports two operations: put to add an element and take to pop an element from the queue. To keep the specification as simple as possible, we begin the model checking with all status messages that we want to have processed already enqueued. That leaves us with only having to model the take operation.

EXTENDS Sequences, TLC

\* Receive a message from the queue
macro recv(queue, receiver)
begin
  await queue # <<>>;
  receiver := Head(queue);
  queue := Tail(queue);
end macro

Hopefully, this doesn’t look too surprising. On the other hand, it’s the first PlusCal code in this post, hence it’s worth looking at it a bit more closely:

  • Line comments begin with \*
  • It defines the macro recv which takes two arguments. A macro will be expanded at the call site at compile time.
  • # means unequal (alternatively, /= can be used as well)
  • <<>> is an empty tuple
  • The macro consists of three different parts:
    • wait until the queue passed as argument is not empty
    • take the head
    • set the queue to the tail elements

Let’s see if this works! We define our queue queue with two messages, another variable msg where we will store the received messages, add a bit of required boilerplate and then receive the two messages:

EXTENDS Sequences, TLC

(* --algorithm queue
variable queue = <<"first message", "second message">>,
  msg = "";

\* Receive a message from the queue
macro recv(queue, receiver)
begin
  await queue # <<>>;
  receiver := Head(queue);
  queue := Tail(queue);
end macro

begin
  recv(queue, msg);
  print msg;
  recv(queue, msg);
  print msg;

end algorithm *)

print is of course a debugging utility and nothing that would have a place in a real specification. If we now translate this PlusCal to TLA+ and execute it, the output will be first message and then second message. How to actually execute this specification is out of scope for this post, but fortunately Leslie Lamport explains it in his video course, in the video "Resources and Tools". The specification can be found on Github, in case you want to toy around with it.

The real progress handler, the one we want to model, is of course more complex. First of all, it doesn't receive just one message; rather, it never stops receiving messages. There can also be more than one instance of it. Of course, PlusCal also provides a way to model this, in the form of loops and processes.

process handler \in 1..2
variable msg = "";
begin
  loop:
    while TRUE do
      recv(queue, msg);
    end while;
end process

Note that loop: is not a statement but defines a label. Basically, everything under one label happens atomically, and only between labels are the model and its invariants checked. Process switches also happen only between labels. TLA+ will pick an arbitrary process and execute one step, again and again, indefinitely.

If we now run this model, TLA+ will eventually produce an error: Deadlock reached. That is because TLA+ also checks for deadlocks, and eventually all our processes will wait for a new message to appear in the queue.


Cassandra

Now that we have successfully modelled the queue, let’s move on to Cassandra. Cassandra is used to persist the set of completed processing steps. In Cassandra, it's possible to specify a replication factor that tells Cassandra on how many nodes data should be replicated. If one writes data to only one node, Cassandra will replicate the data in the background to the number of other nodes specified by the replication factor. This means, though, that it's possible to not always read the latest data, for example when data is written to one node and then immediately read from another node before it has been replicated. Cassandra also offers a consistency level for every query, where one can specify on how many nodes data needs to be written before a write query completes successfully (or, in the case of a read query, from how many different nodes data needs to be read).

In the blog post's model, the background replication (in other words, the replication factor) is omitted and the consistency level is modelled by taking a set of nodes for the write operation.

procedure appendProgress(writeNodes, status)
variable nodes = writeNodes;
begin P0:
  while (nodes # {}) do
    with n \in nodes do
      progress[n] := progress[n] \union {status};
      nodes := nodes \ {n};
    end with;
  end while;
  return;
end procedure

A procedure is similar to a macro, but it can have labels, so a process switch in the middle of the execution of the procedure is possible, and it returns to its caller via a return statement. The with n \in nodes statement is executed by choosing an element of nodes and then executing the statement’s body. The model checker does this for every possible choice, so for every element. That means that ultimately this procedure makes TLA+ check every possible order in which the progress is written to the individual nodes.

Modelling the read could be done in a similar fashion. In this specification, it’s simplified to the following:

\* Reads the progress set from the given nodes
ReadProgress(nodes) == UNION {progress[n] : n \in nodes}

What can be seen here is one of the pitfalls of extracting a system’s behaviour into a specification. The modelling of how Cassandra behaves is of course based on the author’s understanding of how Cassandra behaves. If Cassandra behaves differently for whatever reason (maybe because the author’s understanding was plain wrong, or maybe because Cassandra has a bug itself), then the specification will not reflect how the real system behaves. In this instance, it’s assumed that when different nodes return different sets for a read, Cassandra will merge the sets of all nodes into one resulting set.
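The merge assumption can be illustrated with a small Scala sketch (node names and step names are made up); it mirrors the spec's ReadProgress by taking the union of the sets of the queried nodes:

```scala
// Progress sets as stored on three (hypothetical) nodes; n3 lags behind.
val progress = Map(
  "n1" -> Set("stepA"),
  "n2" -> Set("stepA", "stepB"),
  "n3" -> Set.empty[String]
)

// Union of the individual sets of all queried nodes.
def readProgress(nodes: Set[String]): Set[String] =
  nodes.flatMap(progress)

readProgress(Set("n2", "n3"))   // both steps visible
readProgress(Set("n1", "n3"))   // stepB not yet visible on these nodes
```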

The final progress handler

Having modelled the queue and Cassandra, there is one final missing part: the progress handler itself. As mentioned before, it executes the following steps:

  • Wait for a status queue message. That also increases the number of unacknowledged queue messages.
  • Load the job from the database
  • Append the new status and write it to the database
  • Load the job again and check if its overall status switched from pending to completed
  • Acknowledge the queue message (mark it as processed).

For consistency reasons, we instruct Cassandra to always read from and write to a majority (quorum) of nodes before an operation is considered complete. We also consider the possibility that the read and write operations use different sets of nodes. To do so, another helper is introduced that yields all subsets of nodes of a given size:

\* Returns a set with all subsets of nodes with the given cardinality
NNodes(n) == {x \in SUBSET Nodes : Cardinality(x) = n}

That helper can then be used to declare the variables of the process that describes the progress handler:

\* Handles a progress message from the queue
fair process progressHandler \in {"handler1", "handler2"}
variables
  writeQuorumNodes \in NNodes(Quorum),
  readQuorumNodes \in NNodes(Quorum),
  secondReadQuorumNodes \in NNodes(Quorum),
  completedBefore = FALSE,
  message = "";

Once again, TLA+ will check every possible combination of read and write nodes.

The remaining part of the progress handler is pretty straightforward:

begin P0:
  while TRUE do
    recv(queue, message);
    unacked := unacked + 1;
    completedBefore := ProcessingComplete(ReadProgress(readQuorumNodes));
    call appendProgress(writeQuorumNodes, message);
  P1:
    if ~completedBefore /\ ProcessingComplete(ReadProgress(secondReadQuorumNodes)) then
      \* The real progress handler would trigger some action here
      switchHappened := switchHappened + 1;
    end if;
    unacked := unacked - 1;
  end while;
end process;

As a final step, an invariant called Correctness is added to the specification. TLA+ will check that the invariant holds after every step. One invariant that should hold at all times for the progress handler is that either there are still messages to process (in other words, the queue is not empty), or a handler is still in the act of processing a message (the number of unacknowledged messages is not zero), or the progress switch was observed by a handler:

Correctness == \/ queue # <<>>
               \/ unacked > 0
               \/ switchHappened > 0

With the complete specification now in place, the model can be checked. And it completes without error! The complete specification can be found on GitHub in case you want to check it yourself.


The Correctness invariant only checks that the specification doesn’t allow an erroneous step. It doesn’t give any liveness guarantee, that is, that the progress handler ever processes any messages at all. To verify that as well, we can add a temporal operator to the specification, such as <>[]. The <>[] operator means that the predicate that follows is expected to become true at some point and then stay true forever. Hence, to verify that our progress handler actually does what is expected, the following property can be added to the specification:

Liveness == <>[](switchHappened > 0)

Luckily, if the model is now executed, it still completes without any error.

The bug

The fact that the model execution completes without any error creates a dilemma: in the model, the switch from pending to completed is always observed, but the starting point of this post was that sometimes the switch isn’t observed. So either the specification doesn’t model one of the involved components, such as Cassandra, correctly, or the implementation of the progress handler doesn’t follow the specification. Which of the two possibilities is it?

By adding a bit of logging to the actual implementation and staring sharply at the logs, it can be observed that on the second read, the progress handler doesn’t read back a progress step it has already seen with the first read. That should not be possible if quorum reads and writes are used, hence a first guess would be that no quorums are used in the implementation. The specification can be used to demonstrate that the progress handler requires quorums. If any of the NNodes(Quorum) in the progressHandler process is changed to NNodes(1), executing the model will reveal errors.
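Why quorums matter can be spelled out in a few lines of Scala (node names made up): any two majorities of the same node set overlap in at least one node, so a quorum read always hits at least one node that took part in a preceding quorum write, while single-node reads and writes give no such guarantee.

```scala
val nodes = Set("n1", "n2", "n3")
val quorumSize = nodes.size / 2 + 1          // majority: 2 of 3

// all possible quorums of that size
val quorums = nodes.subsets(quorumSize).toSet

// every pair of quorums intersects ...
val quorumsOverlap =
  quorums.forall(q1 => quorums.forall(q2 => (q1 & q2).nonEmpty))

// ... while single-node "quorums" (as in the buggy implementation) do not
val singles = nodes.subsets(1).toSet
val singlesOverlap =
  singles.forall(s1 => singles.forall(s2 => (s1 & s2).nonEmpty))
```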

The implementation uses Java with the Datastax Cassandra driver and prepared statements. The statements are created as follows:

Statement insert = QueryBuilder
    .insertInto(keyspace, columnFamily)
    // Omitted: binding expressions for the values here …

return session.prepare(insert.toString());

The bug is rather subtle: to create the prepared statement, the string representation of the built Statement object is used. Unfortunately, the string representation doesn’t include the Statement’s consistency level property! Changing the code to:

RegularStatement insert = QueryBuilder
    .insertInto(keyspace, columnFamily)
    // Omitted: binding expressions for the values here …

return session.prepare(insert);

fixes the bug.

Proving more properties

Having a formal model for the system also makes it possible to check some more properties of it. For example, one might be interested in how many documents are processed, say for accounting purposes. The obvious place to count is in the progress handler, at the moment the switch from pending to completed is observed: if the switch is observed, increase a counter, done. We verified that the switch is guaranteed to be observed (if the document is processed), hence it should work fine. There is a caveat though: so far we only checked whether the switch is observed at all. What we didn't verify is that the switch is observed only once, not twice or more:

NoDupSwitch == switchHappened <= 1

Unfortunately, executing this specification results in an error. It's not guaranteed that the switch is observed only once, hence using it to increase a counter for accounting purposes might charge a customer more than once for a single document.
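The offending interleaving can be sketched in plain Scala (step names made up; the interleaving of the two handlers is simulated sequentially for determinism): both handlers do their first read before the other one's write, so both later observe the switch.

```scala
// Two mandatory steps; the job is complete once both are in the set.
var progress = Set.empty[String]
val mandatory = Set("a", "b")
def isComplete = mandatory.subsetOf(progress)
var switches = 0

// handler 1 receives status "a" and does its first read: not complete yet
val before1 = isComplete
progress += "a"                              // handler 1 writes its status
// handler 2 receives status "b" and runs to completion in between
val before2 = isComplete                     // still not complete
progress += "b"                              // handler 2 writes its status
if (!before2 && isComplete) switches += 1    // handler 2 observes the switch
// handler 1 resumes with its second read ...
if (!before1 && isComplete) switches += 1    // ... and observes it, too
```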

Closing words

I hope that I was able to demonstrate that TLA+ is a useful tool worth adding to your toolbox. One of its downsides, that it doesn't verify real code, is also one of its upsides: one can verify designs before even writing any code. Give it a try!

Thanks to Florian Mayer for reviewing a draft of this post. All mistakes are, of course, my own.

Tue 30 January 2018

Unexpected: def vs. val

After a year of blog abstinence, today a little refresher of Scala. Assume we have the following login function, which uses REST-assured to make the HTTP requests for a login. It returns a tuple with an access token and how long that token is valid.

def login(credentials: Credentials): (String, Int) = {
  def json = given()
    // Skipped: add the credentials somehow to the request

  (json.getString("access_token"), json.getInt("expires_in"))
}

Sometimes the method returns an access token that is not valid, even though, according to the validity period returned alongside it, it should still be valid for a long time. Why?

Very subtle, but the def json = … should of course be a val json = …. Otherwise, the function json is called twice when the return tuple is built: once when evaluating json.getString("access_token") and once when evaluating json.getInt("expires_in"). Both times, a fresh HTTP request is made. It could therefore happen that the returned access token was one that was just about to expire, while a new token had already been issued for the expires_in value. The token then looks like it is still valid for a long time, when in fact it has already expired.
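The difference is easy to demonstrate in isolation (a minimal sketch, with a counter standing in for the HTTP request; all names are made up):

```scala
var requests = 0

def freshToken() = { requests += 1; s"token-$requests" }

// `def`: the right-hand side is re-evaluated on every access
def tokenDef = freshToken()
val a = (tokenDef, tokenDef)   // two "requests": ("token-1", "token-2")

// `val`: the right-hand side is evaluated exactly once
val tokenVal = freshToken()
val b = (tokenVal, tokenVal)   // one more request: ("token-3", "token-3")
```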

Sun 10 January 2016

Type-safe linking with Java 8

Today's blog post is about web frameworks. More precisely: it's about how to generate links to controllers in a type-safe way. Type-safe in the sense that variables in the routing can only be replaced by values of the exact type the controller actually expects. If you pass the wrong number of values, or values of the wrong type, the code shouldn't compile in the first place; in other words, no exceptions thrown at runtime, but errors already at compile time. For the example, we restrict ourselves to path variables.

The code examples for the controllers are loosely based on JAX-RS. The following conventions are used:

  • Methods that handle requests take only the path variables as parameters. The order of the parameters is the order in which they are defined in the routing template.


This blog post only presents the method for creating links. A router that dispatches requests to controllers is not covered.

Furthermore, some familiarity with Java 8 (especially lambdas) is assumed, as well as a little with JAX-RS. For an introduction to lambdas, see for example the Lambda Quick Start.

Prior Art

Since generating links is a rather common problem, there is of course quite a bit of prior art, probably in just about every web framework, in many different flavors. I have picked out three representatives:

  • Pyramid's route_url

    Shows the general idea, albeit in Python rather than Java. Certain errors are caught (the wrong number of arguments, for example). However, everything happens at runtime, so you need a fairly well-developed test suite to catch linking errors.

  • Declarative Hyperlinking in Jersey

    Implemented with annotations and therefore stringly typed. Errors are only detected at runtime.

  • Spring HATEOAS

    Comes pretty close. In particular, Building links pointing to methods goes strongly in this direction:

    Method method = PersonController.class.getMethod("show", Long.class);
    Link link = linkTo(method, 2L);

    However, the method resolution happens at runtime and via the name as a string. The parameter types have to be given explicitly, and the type check likewise only happens at runtime.

Why not method references?

Since Java 8 finally introduced method references, the obvious idea is to somehow try it with those.

Imagine we had the following controller:

class HelloWorldController {

    public String hello() {
        return "Hello, world!";
    }
}
First we create a functional interface. The interface is meant to represent a reference to a method of class H that takes no parameters and returns an R as its result. Accordingly, the interface looks like this:

@FunctionalInterface
interface NoParam<H, R> {
    R apply(H h);
}
This interface can be created from a method reference:

NoParam<HelloWorldController, String> hello = HelloWorldController::hello;
// The method could now be invoked like this (assuming that
// someHelloWorldController is an instance of HelloWorldController):
String result = hello.apply(someHelloWorldController);
assert "Hello, world!".equals(result);

Consequently, we can use it to build the following method:

<H, R> URI linkTo(NoParam<H, R> handler) {
    // Here goes the code that reads the routing information from handler
    // and builds a URI from it
}

Now it is possible to build a link to our desired method HelloWorldController#hello() from within another controller:

URI helloLink = linkTo(HelloWorldController::hello);

If we passed one argument too many (for example because we think the hello resource takes a name in order to produce a personalized greeting), the code does not compile:

java: no suitable method found for linkTo(HelloWorldController::hello)

Goal achieved. To actually support path parameters, we now simply (and fairly mechanically) introduce further interfaces.

First, let us extend our controller with a personalized greeting:

public String greeting(String name) {
    return "Hello, " + name + "!";
}

The parameter name here represents the path variable name. Links to it can then be created as follows:

URI link = linkTo(HelloWorldController::greeting, "Joe");

For that we introduce another interface:

@FunctionalInterface
interface OneParam<H, P, R> {
    R apply(H h, P p);
}

Unsurprisingly, H stands for the type of the controller, P for the parameter, and R for the return value.

Furthermore, another overload of linkTo has to be introduced:

<H, P, R> URI linkTo(OneParam<H, P, R> handler, P param) {
    // Again, read the routing info from handler and then substitute param
}

Implementing this is admittedly a bit verbose (a dedicated interface plus a matching linkTo method for every number of path variables), but fortunately you only have to do that once, and in practice paths are not infinitely long anyway.

Much more serious, however: it does not work at all. You can build a lambda from a method reference, but the information about which method the lambda was created from is lost in the process. Yet we need to know exactly which method it is, because otherwise we cannot get at the route.
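That the originating method cannot be recovered can be demonstrated directly (a small standalone sketch, not from the original post): the object created for a method reference is an instance of a synthetic, JVM-generated class, and that class only declares the interface method apply, never the target method hello.

```java
import java.lang.reflect.Method;

public class LambdaIdentity {
    interface NoParam<H, R> {
        R apply(H h);
    }

    static class HelloWorldController {
        public String hello() {
            return "Hello, world!";
        }
    }

    // Returns true if the lambda's runtime class declares a method named
    // "hello" -- it does not: only the interface method 'apply' (plus some
    // synthetic plumbing) is declared, so the target method is unrecoverable.
    static boolean lambdaDeclaresHello() {
        NoParam<HelloWorldController, String> hello = HelloWorldController::hello;
        for (Method m : hello.getClass().getDeclaredMethods()) {
            if (m.getName().equals("hello")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        NoParam<HelloWorldController, String> hello = HelloWorldController::hello;
        // The runtime class is a class generated by the JVM for the lambda;
        // on HotSpot its name looks like LambdaIdentity$$Lambda$1/…
        System.out.println(hello.getClass().getName());
        System.out.println(lambdaDeclaresHello()); // false
    }
}
```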

Proxies to the rescue

Since the answer to most problems in Java is "(dynamic) code generation", let us give that a try as well; more precisely, dynamic proxy objects. The idea is as follows:

  • We create a proxy object of the same type as the handler class.
  • We invoke the method that was passed in (more precisely, the lambda) on it.
  • The proxy object does not actually call the real method; it simply records which method was invoked.
  • We fetch the recorded method from the proxy object.

Assuming we have a class MethodResolver<T> that creates the proxy objects, our linkTo method could look something like this:

<H, P, R> URI linkTo(Class<H> handlerClass, OneParam<H, P, R> handler, P param) {
    MethodResolver<H> methodResolver = MethodResolver.on(handlerClass);
    handler.apply(methodResolver, param);
    Method method = methodResolver.resolve();
    // With handlerClass and method we can now get at the routing
    // information
}
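For handler interfaces, such a recording resolver can be sketched with nothing but the JDK's built-in dynamic proxies (a hypothetical standalone sketch with its own minimal handler interface; to proxy actual classes rather than interfaces, a library like Proxetta is needed):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class MethodResolverDemo {
    // A recording InvocationHandler: it never calls the real method,
    // it just remembers which method was invoked.
    static class RecordingHandler implements InvocationHandler {
        Method lastInvoked;

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            lastInvoked = method;
            return null; // fine for reference return types
        }
    }

    interface GreetingHandler {
        String greeting(String name);
    }

    static String resolveInvokedMethodName() {
        RecordingHandler handler = new RecordingHandler();
        GreetingHandler proxy = (GreetingHandler) Proxy.newProxyInstance(
                GreetingHandler.class.getClassLoader(),
                new Class<?>[] {GreetingHandler.class},
                handler);

        // Invoke the method on the proxy; it only records the call.
        proxy.greeting("Joe");

        // Read back the recorded method.
        return handler.lastInvoked.getName();
    }

    public static void main(String[] args) {
        System.out.println(resolveInvokedMethodName()); // greeting
    }
}
```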

Most AOP frameworks offer method interceptors that make this quite easy to implement. For Proxetta, for example, a corresponding advice could look like this:

/**
 * MethodResolver advice applied on all methods. It puts the method in a class
 * variable that can be accessed later using reflection.
 */
class MethodResolverAdvice implements ProxyAdvice {

    public Method method;

    public Object execute() {
        final Class<?> targetClass = targetClass();
        final String methodName = targetMethodName();
        final Class<?>[] argumentTypes = createArgumentsClassArray();
        try {
            method = targetClass.getMethod(methodName, argumentTypes);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
        return returnValue(null);
    }
}

In the GitHub repo java8_linking_experiments I have implemented an example for the Ratpack micro web framework.

Thu 03 December 2015

Let's Encrypt!

As of today, Let's Encrypt is available as a public beta. With Let's Encrypt, anyone can get an X.509 certificate issued easily and free of charge. So there is no excuse left not to use HTTPS, which is why this blog also tries to lead by example.

Mon 10 August 2015

Using AmazonS3Client with Pithos

Pithos, an S3 clone that uses Cassandra for storage, currently only supports V2 signatures. S3, however, effectively only supports V4 these days, which is why the official AmazonS3Client causes a few problems when used with Pithos:

 Exception in thread "main" The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 1e04fbc1-bf91-4bb3-af1e-6829ce549524), S3 Extended Request ID: 1e04fbc1-bf91-4bb3-af1e-6829ce549524
      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(
      at com.amazonaws.http.AmazonHttpClient.executeOneRequest(
      at com.amazonaws.http.AmazonHttpClient.executeHelper(
      at com.amazonaws.http.AmazonHttpClient.execute(

With a small (trick|hack), however, you can still persuade the client to use V2 for signing:

private static final String PITHOS_ENDPOINT = "";
private static final String ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE";
private static final String SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";

// …

final AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
final ClientConfiguration clientConfig = new ClientConfiguration();
// The actual trick: force the old V2 S3 signer instead of V4
clientConfig.setSignerOverride("S3SignerType");
final AmazonS3Client s3Client = new AmazonS3Client(credentials, clientConfig);
s3Client.setEndpoint(PITHOS_ENDPOINT);

Tested with AWS Java SDK version 1.10.10. With a bit of searching, this can also be found in the issues: #277 and #372.

Sun 24 August 2014

Generic Heterogeneous Containers Using Super Type Tokens

As you can easily guess from the title: today it is about Java (for the first time on this blog?). More precisely: how to implement heterogeneous containers in Java.

First, the idea: we want to implement a class Favorites in which you can store your favorite objects and retrieve them again. So that you do not get back just any arbitrary object but one of a specific type, you pass the class of the object when storing and retrieving.

The API for it looks like this:

public class Favorites {
    public <T> void putFavorite(Class<T> type, T instance);
    public <T> T getFavorite(Class<T> type);
}


The inclined, Java-experienced reader will immediately recognize this example from Effective Java by Joshua Bloch. That is absolutely right, and parts of the code shown here also come from that book, which is well worth reading (even for non-Java programmers).

A first attempt at implementing this might look like:

public class Favorites {
    private final Map<Class<?>, Object> favorites = new HashMap<>();

    public <T> void putFavorite(Class<T> type, T instance) {
        favorites.put(checkNotNull(type), instance);
    }

    public <T> T getFavorite(Class<T> type) {
        return type.cast(favorites.get(type));
    }
}

So we simply use the class object of the value as the key in a map to store the value. When reading, the class object is passed again, which gets us back the value. If no value has been stored for the type, nothing is found.

At first glance, this seems to work quite well:

Favorites f = new Favorites();
f.putFavorite(String.class, "Some string");
f.putFavorite(Integer.class, 1234);
System.out.printf("%s %d%n", f.getFavorite(String.class), f.getFavorite(Integer.class));

The case where no value has been stored for the type also just works: favorites.get(type) then returns null, and null can be cast to anything.

And if Java did not have type erasure, this blog post would already be over. Since Java does have type erasure, however, you quite quickly run into a problem when trying to put a generic into the container: List<Integer>.class is a syntax error. That is because List<Integer> and List<String> have the same class object, namely List.class. But it also means that you can put a list of strings into Favorites and afterwards read it back out as a list of integers (or at least try to):

f.putFavorite(List.class, Arrays.asList("foo", "bar"));
List<Integer> values = f.getFavorite(List.class);

If you run this, it even seems to work; at least it runs through without complaint. But only until you actually try to use the value as the declared type:

Integer first = values.get(0);

Running it again now throws an exception:

Exception in thread "main" java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
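The root cause is easy to demonstrate in isolation (a minimal standalone sketch): because of erasure, both list instantiations share one and the same runtime class.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Due to type erasure, both lists have the exact same runtime class.
        System.out.println(strings.getClass() == ints.getClass()); // true
        System.out.println(strings.getClass()); // class java.util.ArrayList
    }
}
```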

On closer inspection, you can already see at compile time that something could potentially break at runtime:

warning: [unchecked] unchecked conversion
    List<Integer> values = f.getFavorite(List.class);
  required: List<Integer>
  found:    List

A good example of why you should not simply ignore compiler warnings.

This is of course rather unsatisfying. One way out are the so-called super type tokens, which can be used as follows:

TypeReference<List<String>> x = new TypeReference<List<String>>() {};

TypeReference can now look up at runtime which value was passed as the type variable, which in turn Favorites can use for its own purposes.

The implementation of TypeReference:

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public abstract class TypeReference<T> {
    private final Type type;

    protected TypeReference() {
        Type superClass = getClass().getGenericSuperclass();
        if (superClass instanceof Class<?>) {
            throw new RuntimeException("Missing type parameter");
        }
        type = ((ParameterizedType) superClass).getActualTypeArguments()[0];
    }

    public Type getType() {
        return type;
    }
}

And the adapted Favorites:

import java.lang.reflect.Type;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Favorites {
    private final Map<Type, Object> favorites = new HashMap<>();

    public <T> void putFavorite(TypeReference<T> typeReference, T instance) {
        favorites.put(typeReference.getType(), instance);
    }

    @SuppressWarnings("unchecked")
    public <T> T getFavorite(TypeReference<T> typeReference) {
        return (T) favorites.get(typeReference.getType());
    }

    public static void main(String[] args) {
        Favorites f = new Favorites();
        f.putFavorite(new TypeReference<String>() {}, "Some string");
        f.putFavorite(new TypeReference<Integer>() {}, 1234);
        f.putFavorite(new TypeReference<List<String>>() {}, Arrays.asList("foo", "bar"));
        System.out.printf("%s %d %s%n",
                          f.getFavorite(new TypeReference<String>() {}),
                          f.getFavorite(new TypeReference<Integer>() {}),
                          f.getFavorite(new TypeReference<List<Integer>>() {}));
    }
}

The output is, as expected, Some string 1234 null. So there it is, our type-safe heterogeneous container that also works with generics (if somewhat cumbersome).

Or almost. If only it were not for that @SuppressWarnings("unchecked"). After all, we saw just a moment ago that compiler warnings should not be ignored. And indeed, you can also construct a case for the new Favorites where an exception flies at runtime:

static <T> List<T> favoriteList(Favorites f) {
    TypeReference<List<T>> typeReference = new TypeReference<List<T>>(){};
    List<T> result = f.getFavorite(typeReference);
    if (result == null) {
        result = new ArrayList<T>();
        f.putFavorite(typeReference, result);
    }
    return result;
}

public static void main(String[] args) {
    Favorites f = new Favorites();
    List<String> listOfStrings = favoriteList(f);
    List<Integer> listOfIntegers = favoriteList(f);
    listOfIntegers.add(42); // both variables refer to the same list!
    String booooom = listOfStrings.get(0);
}

// java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String

To avoid this exception, you would have to walk the type arguments and check whether a TypeVariable still occurs anywhere. One example where this is done is GenericType<T> from JAX-RS (source code). Even that, however, would still throw an exception at runtime rather than produce an error at compile time.
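A check for leftover type variables could be sketched roughly like this (a hypothetical helper recursing through parameterized and array types; the real GenericType from JAX-RS is more thorough):

```java
import java.lang.reflect.GenericArrayType;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;

public class TypeVariableCheck {
    // Minimal super type token, as in the post.
    abstract static class TypeReference<T> {
        final Type type = ((ParameterizedType) getClass().getGenericSuperclass())
                .getActualTypeArguments()[0];
    }

    // Returns true if the given type still contains an unresolved type
    // variable anywhere, e.g. List<T> or Map<String, List<T>>.
    static boolean containsTypeVariable(Type type) {
        if (type instanceof TypeVariable) {
            return true;
        }
        if (type instanceof GenericArrayType) {
            return containsTypeVariable(((GenericArrayType) type).getGenericComponentType());
        }
        if (type instanceof ParameterizedType) {
            for (Type argument : ((ParameterizedType) type).getActualTypeArguments()) {
                if (containsTypeVariable(argument)) {
                    return true;
                }
            }
        }
        return false;
    }

    static <T> void demo() {
        // List<String> is fully resolved, List<T> still contains T.
        Type resolved = new TypeReference<java.util.List<String>>() {}.type;
        Type unresolved = new TypeReference<java.util.List<T>>() {}.type;
        System.out.println(containsTypeVariable(resolved));   // false
        System.out.println(containsTypeVariable(unresolved)); // true
    }

    public static void main(String[] args) {
        demo();
    }
}
```

Favorites could call such a check in putFavorite and reject unresolved tokens early; but as noted, that still fails at runtime, not at compile time.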