Shaved FAQ

What happened to your beard?

I shaved.

Have you lost a bet?


Who are you? How did you get past security?

The same guy as last week and the months before that. I shaved; sorry to shock you, it happens.

Why did you grow a beard in the first place?

Because I felt like it.

Why did you shave?

Same reason I decided to grow a beard: I felt like it.

But why this Sunday?

Stop it, I shaved, it isn’t special, most men do it a couple of times a week, get over it!

It looks much better this way!

Thanks for the compliment, but don’t get too attached; the beard will probably return.

You look so much younger!

I’m not.

It looks much worse without a beard!

Why do people comment so much on facial hair? I’m not judging your hair/clothing/weight, etc.

Anyway, to quote The Dude from The Big Lebowski: Yeah, well, that’s just, like, your opinion, man.
(I’m trying to follow the philosophy and lifestyle of Dudeism)

Kids amplify your life

Three years ago my life was pretty normal, I had a house, a wife, a car and a decent job. Everything was nice and easy, very mundane. And then, my daughter was born.

Impact of kids

Every parent will tell you two things:

  • After having kids, your life as you knew it is… gone.
  • But the life you have afterwards is much more satisfying!

When I didn’t have kids and heard people say this, I thought they were just making it sound better than it was. But now I know: it is true.

The thing they never tell you is *WHY* their life is better now, and I think I have the answer.

Ups and downs

In life you need ups and downs. If you have all the money in the world and no setbacks… life can still be boring and unsatisfying. Conversely, I’ve seen people lose their jobs and still rate their lives as pretty satisfying. It all depends on your frame of reference! If you only have ups, it is dull; you need those mood swings.

When I didn’t have kids, these things made me happy:

  • Heard a good joke in the coffee corner
  • Won a volleyball match

And the worst things that happened to me:

  • Missing a train…
  • Running out of coffee

That is pretty much it, again: very mundane and a bit dull.

Kids amplify your life

Once I had kids, all the things above started to seem like nuances. This is what my ups and downs look like now:


  • Was an hour late for work because my kid was screaming: “Today, no pants!”
  • The daily *catch me* kick in the nuts
  • Kid broke something valuable, again
  • Constant fear of the kids getting hurt, disappointed, etc.

And of course, the obvious *I have kids* ups!

  • Kid walks/talks/does something for the first time
  • Good night kisses
  • “Daddy! I’ve missed you!” after being gone for an hour
  • Many more moments…

Kids cause greater daily downs and, thankfully, also bigger up moments; they amplify your feelings and make you feel alive!

Stop the Rot, the Rules of Refactoring

You often hear experienced programmers talking about ‘smelly’ code. These ‘code smells’ are things that just look or feel wrong. Often programmers don’t immediately have a clear idea of how to fix it, but it ‘smells’! These smells often appear when the code ‘rots’.

Let’s do a quick check:

  • Are there pieces of code you’d rather not change?
  • Are (parts of) the application sometimes scrapped and rebuilt from scratch (over and over)?
  • Do pieces of code exist that turned out much more complex than you had initially imagined?
  • Do you have pieces of code that feel out of place?
  • Is it hard to break up some large classes and/or methods?
  • Do you have a hard time coming up with names for certain classes?

If you’ve answered yes to one or more questions, you are probably suffering from code smells and maybe even advanced code rot.

What is a code smell?

Most of the time a piece of code smells because the design underneath is wrong. Often this is not visible at first, but it slowly appears as the component grows. Classes get too large, methods are hard to break up, some classes or methods feel out of place. Even something as simple as struggling to name a new class is a sign that something is wrong. All of these are code smells, and experienced programmers have developed a nose for them. Usually they point to a larger problem that a bit of code can’t fix; there is probably something wrong in the design. Is this component doing the right thing? Does it have the right responsibilities? Take a step back and look at the complete picture. What problem are these classes/methods trying to solve?

What is code rot?

When code smells, more often than not, people will continue working on it, adding functionality. Maybe they don’t notice the smell, or they don’t take the much-needed step back to investigate the problem. This leads to faster code rot. So what exactly is code rot, you might ask? Every piece of code starts to ‘rot’ the moment it is written. Eventually the code becomes too difficult to maintain and needs to be replaced. Hopefully this happens in the far future, 20-30 years from now. Unfortunately this is often not the case; parts are scrapped and rebuilt while the application is still being developed.

Each time new code is written, added or changed, it starts out as rotten as the code it depends on. A tiny bit of rotten/smelly code can cause an entire application to be trashed! It is highly infectious.

And the answer is… refactoring!

Refactoring is the magic word here, but… it sounds easier than it actually is. At first people will deny there is a problem; sometimes they don’t see it and don’t share your disgust. The code will continue to rot and more people will notice the smell. Once this smell has become unbearable, drastic measures seem to be needed. This brings us to rule #1 about code smell:

1. Refactor early, refactor often.

The sooner you refactor and remove some rotten code, the less likely it is to spread and the easier it is to remove. Don’t wait for approval, do something about it. But remember, take a step back first. Sometimes you rewrite a piece of code and the result is just as bad as before. The reason is that a design flaw is lurking in the shadows. If the division of responsibilities between two components is wrong, you can scrap a piece of code and rebuild it, but the same problems will keep surfacing. So let’s make this rule #2:

2. Before refactoring, take a step back, eliminate possible design flaws.

You’ve taken a step back, looked at the complete picture, fixed the responsibilities, time to do the refactoring! No!! There is a third *very* important rule. Refactoring means changing a piece of code without changing its behavior. How do we do this? We write tests! Only when you have proper testing in place can you start thinking about refactoring. How else can you be certain that a piece of code still behaves the same as before?

3. Only refactor when you have proper tests.
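
Rule #3 in practice: a minimal, dependency-free sketch (the class, method and numbers are made up for illustration; a real project would use JUnit). The tests pin down the current behaviour, boundary included, before any refactoring starts:

```java
// A hypothetical safety net for rule #3: pin down current behaviour
// with tests *before* refactoring. Plain checks keep the sketch
// dependency-free; in a real project you'd use JUnit.
public class RefactorSafetyNet {

    // Legacy method we want to refactor (made-up example).
    static int applyDiscount(int amount) {
        if (amount > 100) {
            return amount - (amount / 10); // 10% off above 100
        }
        return amount;
    }

    public static void main(String[] args) {
        // These tests document the behaviour, boundary included.
        // Only when they pass may the refactoring start, and they
        // must still pass afterwards.
        check(applyDiscount(50) == 50, "below threshold: unchanged");
        check(applyDiscount(100) == 100, "exactly 100: no discount");
        check(applyDiscount(101) == 91, "above threshold: 10% off");
        System.out.println("behaviour pinned, safe to refactor");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }
}
```

If a refactoring accidentally changes the boundary or the rounding, one of these checks fails and the change is caught immediately.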

If you follow these three simple rules, nothing can go wrong. You’re detecting and fixing code smells early and often, stopping the rot before it spreads and keeping the application healthy. There is no design flaw hiding in the shadows; we’ve taken a step back and eliminated that. And finally: we have tests in place that ensure we don’t change any behavior that was painstakingly added to the rotting code.

If you follow these rules you’ll end up with code that is readable, easier to maintain, easy to change (agile code!).

In short: Healthy code!

Programming, testing, documentation

This week I started work on a new project. This project has strict rules regarding documentation and testing. First of all, everything needs to be modelled in EA (Enterprise Architect), from the use cases down to the REST API fields. Next we do the actual programming, and the testers write down their tests.


What is in the documentation?
- A description of who inputs what and what the results should be

What is in the code?
- A description of who inputs what and what the results should be

What is in the test?
- A description of who inputs what and what the results should be

Come to think of it… programming IS testing IS documenting; we are all doing exactly the same thing! This sounds a bit wasteful, doesn’t it?


Is this (triple) duplication a bad thing? Not exactly: it is a good thing that the behaviour is written down twice, in tests and in code, because this greatly reduces the number of bugs and errors that always creep in when writing something. Thinking about and writing these things twice is actually a good thing!


There is a major problem with having duplication: divergence.

What happens when a programmer decides to change the fields in the REST API? The moment he does, the documentation is no longer correct. In the best case all three (documentation, tests and code) get (manually) updated. But this just never happens in real life… Tests start lagging behind, documentation ends up being wrong, code ends up doing things we don’t expect. Mostly because we are human and don’t like doing things manually; we focus on one thing: the code, or the test, or EA.

Another problem is that all three documents are ‘alive’. During design sessions people change the documentation in EA, but this isn’t yet implemented in the code, and the test isn’t updated yet either. So at any given moment in time, none of the documents holds the entire truth. If you read the documentation, you can never be sure the code actually implements it.

Verify and automate!

If all three things (documentation, tests, code) do the same thing (namely ‘describing how the program should behave’) why don’t we automatically verify this?

If you have a REST API description with all fields and types in Enterprise Architect, why not verify it against the actual code?
A use case describes all possible paths and the expected outcomes; this is exactly what your (automated) tests should be!

We could (and should!) automate and verify this.
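
As a sketch of what such a verification could look like (all names here, the documented field list and the DTO, are made up): a tiny check that fails the build the moment the code’s REST fields diverge from the documented ones.

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the idea: automatically verify that the documented REST
// fields match the fields the code actually exposes. The documented
// list and the DTO below are hypothetical examples.
public class DocsVersusCode {

    // Fields as written down in the documentation (e.g. exported from EA).
    static final Set<String> DOCUMENTED =
            new TreeSet<>(Arrays.asList("id", "name", "email"));

    // A hypothetical DTO; the code's truth about the API.
    static class CustomerDto {
        long id;
        String name;
        String email;
    }

    public static void main(String[] args) {
        // Derive the actual field names from the code via reflection.
        Set<String> actual = new TreeSet<>();
        for (java.lang.reflect.Field f : CustomerDto.class.getDeclaredFields()) {
            actual.add(f.getName());
        }
        // The automated check: documentation and code may not diverge.
        if (!actual.equals(DOCUMENTED)) {
            throw new AssertionError("docs " + DOCUMENTED + " != code " + actual);
        }
        System.out.println("documentation and code agree: " + actual);
    }
}
```

Run as part of the build, a check like this turns “the documentation is probably outdated” into a broken build the moment someone renames a field.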

This morning the architect showed us three random examples from the documentation in EA (to teach us how to work with the tool). But NONE of the three examples were complete and/or correct. Not a single example matched what was implemented in the code. If this is the case, why even bother writing the documentation? It will keep diverging and become more and more useless.


In a previous project we didn’t have any real documentation. We ‘only’ had Fitnesse tests! But those tests were just as readable as documentation.
The big advantage was that we had a complete set of tests that:

  • are readable as documentation (like use cases)
  • are executable and verify our code does what we expect (and vice versa)
  • ensure the documentation, tests and code are the same

This fixes all the problems regarding divergence and testing. When we write down the specification, we instantly have our use cases. How do we know the code isn’t ready yet? When this documentation is executed, it fails! Writing this initial test text should be done together by programmers, testers and the product owner.

So is Fitnesse the best tool for the job? Probably not. It isn’t as elaborate as Enterprise Architect (thankfully?), but maybe it could use some more structure. The big advantage of EA is the re-use of (partial) use cases when creating a new part of the application: you just drag and link parts together. In my experience people don’t treat Fitnesse with the proper respect (it is ‘just’ test code). For some reason Fitnesse code always ends up as spaghetti code, isn’t reusable and becomes a mess, while it should be considered the complete and only truth!

Most tools, like Fitnesse and Cucumber, seem to be very developer-focused. What other tools can help us accomplish this goal of automating and verifying this trinity? Are there better alternatives?

Devoxx BE 2014: Aftermovie

A couple of weeks ago Devoxx, the largest annual European Java conference, took place in Antwerp again. As always I was there, with my camera, to capture the amazing atmosphere:

We (me and my colleagues) had a great time and learned some new things. But most of all, we met great people and got a lot of inspiration. This year I’ve done a short 5-minute Ignite talk on mutation testing. This was my first ‘Ignite’ session and it is hard! The format of an Ignite session is: 20 slides, 5 minutes, auto-forwarding every 15 seconds. This means timing is everything. A 50-minute talk is much easier: you decide when you’ve told enough and press next.

Also, my J-Fall presentation on mutation testing is now available online (50 minutes, in Dutch).

⋆Brag⋆ It was voted best session of the conference by 61 people! It was also voted second ‘most popular’ by all the visitors, surpassed by just one vote, but the other session had more people in the room!

The Java 9 'Kulla' REPL

Maybe it’ll be part of JDK 9, maybe it won’t… but people are working hard on creating a REPL tool/environment for the Java Development Kit (JDK). More information on the code and the project is available as part of OpenJDK: project Kulla.

Some of you might not be familiar with the term ‘REPL’; it is short for Read-Eval-Print Loop. It is hard to explain exactly what it does, but very easy to demonstrate:

|  Welcome to the Java REPL mock-up -- Version 0.23
|  Type /help for help

-> System.out.println("Hello World!");
Hello World!


The idea is that you have a place where you can enter Java code and run it, without a main method or class structure. You can build instances and alter them while you type. For example, you’ll be able to do Swing development as you type:

-> import javax.swing.*;
-> JFrame frame = new JFrame();
-> frame.setVisible(true);

Now we have a visible frame, you can drag it around, resize it etc.

-> JPanel panel = new JPanel();
-> frame.add(panel);
-> JButton button = new JButton();
-> panel.add(button);

Suddenly our frame has a panel, and the panel has an empty button! You can prototype, do live coding and you have instant feedback.

-> button.setText("Push me!");

Now the button has text, but pressing it still does nothing…

Push me!

-> button.addActionListener(e -> System.out.println("Hello World!"));
Hello World!
Hello World!
Hello World!

And there we go, a final simple lambda creates a working “Hello World!”-button.

It is also possible to load a REPL script from a file, allowing you to share, store and run scripts. This is done using the ‘/load’ and ‘/save’ commands. You can even ‘/dump’ all the created classes to a directory.

I’m very curious how people will be using the REPL in the future, some use cases:

If you want to try out Kulla, it took me literally 20 minutes to get up and running on my MacBook. Just follow the instructions on AdoptOpenJDK, but instead use as codebase. After building the JDK, go to ./langtools/repl and look at the README.

Kill all mutants

The post below is the content of my 2014 J-Fall and Devoxx Ignite presentations. You can check out the slides here:

We all do testing

In this day and age you aren’t considered a real Java developer if you don’t write proper unit tests.
We all know why this is important:

  • Instant verification that our code works.
  • Automatic regression tests for the future.

But how do we know we are writing proper tests? Most people use code coverage to measure this: if the percentage of coverage is high enough, you are doing a good job.

What is a test?

First let’s look at what a test actually is:

  1. Instantiate classes, setup mocks.
  2. Invoke something.
  3. Assert and verify the outcome.
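
A sketch of those three steps in the smallest possible test (the Calculator class is made up, and plain Java stands in for JUnit/Mockito to keep the sketch self-contained):

```java
// Hypothetical illustration of the three steps of a test.
public class TestAnatomy {

    static class Calculator {                 // made-up class under test
        int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();   // 1. instantiate / set up
        int result = calc.add(2, 3);          // 2. invoke something
        if (result != 5) {                    // 3. assert the outcome
            throw new AssertionError("2 + 3 should be 5");
        }
        System.out.println("assertion passed");
    }
}
```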

Which steps are measured by code coverage? Only steps 1 and 2. And what is the most important part of a test? The third and final step: the assertion, the place where you actually check that the code is working. This is completely ignored by code coverage!

I’ve seen companies where management looks at code coverage reports and demands 80+ or 90+% coverage from the programmers, because that would prove the quality is good enough. What else is common in these organisations? Tests without any real assertions. Tests written purely to boost coverage and please management.

So does code coverage say absolutely nothing about the quality of our tests? Well, it does tell you one thing: if you have 0% coverage, you have no tests at all. But with 100% coverage you might still have some very bad tests.

Mutation testing

Luckily, help is around the corner in the form of mutation testing. In mutation testing you create thousands of ‘mutants’ of your codebase. What is a mutant, you might ask? A mutation is a tiny singular change in your codebase.

For example:

// Before:

if (amount > THRESHOLD) {
    // .. do something ..
}

// After:

if (amount >= THRESHOLD) {
    // .. do something ..
}

For each mutant the unit tests are run, and there are a couple of possible outcomes:



If you are lucky, a test will fail. This means we have ‘killed’ our mutant. This is a positive thing: we’ve actually checked that the mutated line of code is correctly asserted by a test. Here we immediately see the advantage of mutation testing: it actually verifies the assertions in our tests.


Another possible outcome is that our mutant survives, meaning no test fails. This is scary: it means the logic we’ve changed isn’t verified by a test. If someone (accidentally) made this change in your codebase, the automated build wouldn’t break.
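
To kill the ‘>’ versus ‘>=’ mutant from the example above, a test must assert behaviour exactly at the threshold, because amount == THRESHOLD is the only input where the original and the mutant disagree. A minimal sketch (the method and threshold value are made up):

```java
public class ThresholdTest {
    static final int THRESHOLD = 10;

    // The code under test, matching the example above.
    static boolean doSomething(int amount) {
        return amount > THRESHOLD;
    }

    public static void main(String[] args) {
        // A test at the exact boundary separates the original
        // (amount > THRESHOLD) from the mutant (amount >= THRESHOLD):
        // only for amount == THRESHOLD do they give different answers.
        if (doSomething(THRESHOLD)) {
            throw new AssertionError("the threshold itself must not trigger");
        }
        // Tests that only probe far from the boundary (e.g. 5 or 15)
        // would let this mutant survive.
        System.out.println("boundary asserted, mutant would be killed");
    }
}
```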



In Java (and other languages as well) there are frameworks for doing mutation testing. One of the most complete and modern mutation testing frameworks for Java is called PIT. The generation of mutants and the process of running the tests is fully automated and as easy to use as code coverage. There are Maven, Ant, Gradle, Sonarqube, Eclipse and IntelliJ plugins available!
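
For reference, a PIT setup in Maven can be as small as the snippet below (the version number and package names are assumptions; check the PIT documentation for current values). Running `mvn org.pitest:pitest-maven:mutationCoverage` then produces the mutation report:

```xml
<!-- Hedged sketch of a PIT Maven setup; com.example.* stands in for
     your own packages, and the version is only indicative. -->
<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.1.4</version>
    <configuration>
        <targetClasses>
            <param>com.example.*</param>
        </targetClasses>
        <targetTests>
            <param>com.example.*</param>
        </targetTests>
    </configuration>
</plugin>
```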

What about performance?

Using mutation testing isn’t a silver bullet and it doesn’t come without drawbacks. The major disadvantage is performance. This is the reason it never took off in the 1980s: back then it would take an entire evening to run all your unit tests, so people could only dream of creating thousands of mutants and running the tests again and again. Luckily CPUs have become a lot faster, and PIT has more tricks to speed up the process.

One thing PIT does is use code coverage! Not as a measurement of test quality, but as a tool. Before creating the mutants, PIT calculates the code coverage of all unit tests. When PIT then creates a mutant of a particular line of code, it looks at the tests covering that line. If a line is covered by only three unit tests, it runs just those three tests. This greatly decreases the number of tests that need to run for each mutation.

There are other tricks as well; for example, PIT can track the changes in your codebase and doesn’t need to create mutants for code that hasn’t been edited.


Code coverage is a horrible way of measuring the quality of your tests. It says something about the invocations but nothing about the actual assertions. Mutation testing is much better: it gives an accurate report on the quality, and you can’t ‘game’ the statistics. The only way to fake mutation coverage is to write real tests with good assertions.

Check it out now:

Building Commander Keen on OS/X

Below is a build log of how to build Commander Keen: Keen Dreams (recently released on GitHub) on OS/X using DOSBox and a shareware version found online.

Step 1:

Download and install DOSBox

Step 2:

Download: Borland C++ 3.0
Download: Commander Keen - Keen Dreams source code

Step 3:

Create a new folder, this will be your DOSBox mount-point.

Step 4:

Install TASM:
Copy all the contents of the directories DISK1/DISK2/DISK3 from to \TEMP

Step 5:

Install Borland C++ 3:
Copy all the contents to \BORLANDC.

Step 6:

Copy all the source files from Commander Keen to \KEEN.

Step 7:

Fire up DOSBox, mount the mount folder to C.
Put the following paths to PATH:

Go into C:\TEMP and run the installer to install TASM.

Go into the directory C:\KEEN\STATIC and run ‘make.bat’.
Then, in the directory C:\KEEN, run ‘BC’ to start Borland C++.

To change the Borland directories to the correct path go to: Options > Directories and change the paths to C:\BORLANDC\*.

Step 8:

Compile and run! For me it creates the binary KDREAMS.EXE.

But sadly, when I run the executable it says “Can’t open KDREAMS.MAP!” :-(
It turns out you need to own the game’s actual content before you can run this source code.
Thankfully a shareware version can be downloaded here: This corresponds with the 1.01S version of the released source code, which is also released here.

Copy the missing files (SHL, MAP, AUD, etc) from the shareware version and play your own compiled Commander Keen!



Finding an image in a mosaic

Browsing the Ludum Dare website (see previous post) I found a post from my friend Will. He made the following mosaic:


Pretty interesting to see; I’ve worked with him before on improving the algorithm that generates these mosaics. But then he set me a challenge: find your own game thumbnail, it is in there somewhere!

This is the screenshot of my game, used as thumbnail:


So I went through the thumbnails once… and a second time… then I decided to solve it like a real programmer:



How does it work? Well it is pretty simple:

  1. Input #1: mosaic.jpg
  2. Input #2: the number of thumbnails in width and height
  3. Input #3: screenshot.png
  4. The program resizes my game screenshot to the thumbnail size.
  5. Next it loops over all sub-images of the mosaic.
  6. For every sub-thumbnail: calculate the error (+= Math.abs(mosaicPixelValue - screenshotPixelValue) for each colour channel, for each pixel)
  7. Store the location of the thumbnail with the smallest error!

That is it, solved in 10 minutes of coding!
(and another 10 minutes to make the visual confirmation and the animated gif).
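
The loop in steps 5-7 can be sketched as follows, here against a small synthetic mosaic instead of the real images (the tile size and colours are made up):

```java
import java.awt.image.BufferedImage;

// Sketch of the brute-force search described above, on synthetic
// images instead of the real mosaic.
public class MosaicFinder {

    // Sum of absolute per-channel differences between the mosaic tile
    // at tile position (tx, ty) and the resized screenshot.
    static long error(BufferedImage mosaic, BufferedImage shot, int tx, int ty) {
        int w = shot.getWidth(), h = shot.getHeight();
        long err = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int a = mosaic.getRGB(tx * w + x, ty * h + y);
                int b = shot.getRGB(x, y);
                err += Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF)); // red
                err += Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));   // green
                err += Math.abs((a & 0xFF) - (b & 0xFF));                 // blue
            }
        }
        return err;
    }

    public static void main(String[] args) {
        int tile = 8, cols = 4, rows = 4;
        // Build a fake mosaic: every tile a different shade of grey...
        BufferedImage mosaic =
                new BufferedImage(cols * tile, rows * tile, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < mosaic.getHeight(); y++)
            for (int x = 0; x < mosaic.getWidth(); x++) {
                int shade = 16 * ((y / tile) * cols + (x / tile));
                mosaic.setRGB(x, y, (shade << 16) | (shade << 8) | shade);
            }
        // ...and a 'screenshot' equal to the tile at position (2, 1).
        BufferedImage shot = mosaic.getSubimage(2 * tile, 1 * tile, tile, tile);

        // Loop over all tiles and keep the one with the smallest error.
        long best = Long.MAX_VALUE;
        int bestX = -1, bestY = -1;
        for (int ty = 0; ty < rows; ty++)
            for (int tx = 0; tx < cols; tx++) {
                long e = error(mosaic, shot, tx, ty);
                if (e < best) { best = e; bestX = tx; bestY = ty; }
            }
        System.out.println("found at tile (" + bestX + ", " + bestY + "), error " + best);
        // → found at tile (2, 1), error 0
    }
}
```

Against the real mosaic the match is not exact (JPEG artifacts, resizing), which is exactly why the smallest error wins rather than requiring an error of zero.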

Ludum Dare #30: A (dis-)connected world...

Last weekend the 30th Ludum Dare competition took place. For those of you unfamiliar with Ludum Dare: it is a very short international game programming contest. You are allowed to use any tool or language, but there are strict rules:

  1. The theme is revealed at the start (and the game must match this theme).
  2. You get 48 hours, nothing more or less.
  3. Every image, sprite, song and/or sound effect in the game must be made within these 48 hours.
  4. The result is open source (but you pick the license).
  5. You work alone.

(There is also a ‘Jam’ version where you can work in teams, keep the source closed, use existing images/sounds, and you get 72 hours.)

A (dis)connected world...

Connected Worlds

The theme this year was ‘Connected Worlds’. This is a pretty broad term, so I started to think. How about a world where the main character is on one planet and his love is on another? The planets are tantalisingly close (nearly touching) but out of reach. Our hero has to build a rocket to reach his love.

Game style

During the last Ludum Dare (LD) I worked on my Javascript skills and produced a little framework. This allows me to easily implement an old-school ‘point and click’-style adventure game. The resulting game should look and feel like the old Monkey Island and Day of the Tentacle games.

Drawing drawing drawing…

The decision to make a point-and-click game has a huge impact on how I spend my time during the contest. This type of game needs a lot of images, sprites and, of course, fun puzzles. In the end I think I spent 90% of my time drawing (on paper and in GIMP) and maybe 10% actually programming. After 48 hours my hands were cramped up from all the drawing instead of typing, heh (I need a digital drawing tablet).

The result

First off, I learned a lot about game design and Javascript programming, and I also learned to draw and edit images much faster. At the start of the contest, getting an image from paper to coloured digital sprite took a long time; at the end it was all automatic.

Anyway, I’m pretty pleased with the result, please play it here.
Also, voting is still open, if you have a Ludum Dare account (or sign up) you can cast a vote here.


I won't enter a teleportation device, ever.

In the future somebody will inevitably invent a teleport, no doubt about that.

But how will it work?

Digital teleportation

The most likely way to teleport would be to digitise yourself. Some as yet undiscovered very high resolution MRI/CT-like scanner will scan every atom in your body and send the data over to the receiver. There an atom printer will build up your entire body again.

However, during a work lunch discussion, I came up with some scary fundamental problems with teleportation.


What would happen if we get a failure during transmission? We don’t want to end up with a failed teleport… which would mean the person being teleported is dead.

That is why we need some kind of two-phase commit. First we digitise the person and send the data over the line, then we build the person up on the other side. Once this process has completed, we ‘delete’ the original copy, because we don’t want to end up with thousands of clones.

Wait… what? Delete?

What would it feel like to step into the teleporter? First a copy of you is made, and this copy eventually walks out of the receiving end. But what happens to you? You step into a machine, which makes a copy and then disposes of you! You’ll be exterminated, killed, pushing up the daisies, your metabolic processes will be history, you’ll have kicked the bucket, you’ll be an ex-parrot.


Let’s not dwell too long on the loss of the old you. Of course YOU are also the one walking out of the teleporter, for whom nothing has happened but a successful teleport.

But is that really… you?
What defines you?
Are you just a selection of atoms clumped together?

If we make an exact copy, is that still you?

Did you know that (according to some research) almost 98% of the atoms currently in your body are replaced every year? That would mean that a year from now, you will be just 2%… you!

Conclusion: I hope they won’t invent a teleporter while I’m alive.

Devoxx4Kids UK 2014: the video

The other thing I did while in London was volunteer and film at the first UK-based Devoxx4Kids.

Here is my video that sums it all up:

It was awesome being there, watching the kids play and learn at the same time. The volunteers were absolutely amazing (a lot of them!) and the atmosphere was very relaxed.

Devoxx UK 2014: the video

I’ve been filming again at Devoxx UK, here is the final cut:

Brings back so many great memories.

Including the night in the pub when ‘we’ won against Spain, 5-1

Review: Devoxx UK 2014

A year ago Devoxx crossed the Channel for the first time. After all the events in Belgium (Antwerp) and France (Paris), a new satellite event was launched in the UK (London). And here I am again, in a London hotel room, after two days of Devoxx UK.

Day 1: Getting started

With a silly one-hour jet lag (which shouldn’t be a thing, but is…) I was awake and at the venue very early. Slowly but surely people started flooding onto the exhibition floor. It quickly became clear there were more visitors than last year. A lot of smaller local startups were present on the exhibition floor, but surprisingly I didn’t get a single sales pitch; that is something other conferences can learn from. Only genuinely interesting people talking about content.

There was also a corner where Devoxx UK had invited cool upcoming hardware projects like the NFC ring and the tiny Crazyflie quadcopter. These projects quickly drew a crowd of people shouting: “Shut up and take my money!” The problem was, they didn’t have anything for sale… just a “you can order online”. This is a bit of a shame; I’m pretty sure they missed at least a dozen impulse purchases. They should do something about this next year.

Day 1: The keynote

The awesome thing about Devoxx is… the lack of sponsored talks and keynotes. At least, that is how it seems: I really haven’t seen any session that had even a hint of a sponsor.

This year the keynote speaker was Dan North, who talked about some of his personal experiences: moments where colleagues did or said things that really affected him and his career. Things you say and do at work might have a lot more influence than you imagine. The best part of his story (in my opinion) was how a colleague tricked him into pair programming. If you keep asking “hey buddy, can you help me with this?”, eventually your ‘buddy’ is just sitting next to you all the time.

During the keynote, two amazing artists from Smartup Visuals created a drawing of Dan; later they hand-customised conference t-shirts for the visitors.

Day 1: The sessions

As the ‘Devoxx videographer’ I can’t always just pick a session and settle down. Most of the time I walk around and only settle down when there is a good opportunity. The first session I attended was by Venkat Subramaniam, who talked about the new lambda expressions in Java. He is a great, clear speaker; straight to the point.

After filming some more I settled in room 2 with Dick Wall for the second session. His talk was named “What have the monads ever done for us?”… and as you can guess it was about lambdas as well. His talk was a bit more theoretical, naming all the different theoretical objects and patterns (like monoids, monads and functors). Good talk, great speaker, the best lambda-related talk I’ve seen yet.

The third session I had the chance to see was by James McGivern. He’s a programmer with a math background and has interests very similar to mine. His talk was about ECC (elliptic curve cryptography) versus RSA, explaining how these security algorithms and the math behind them work. I absolutely loved his talk; everything was explained so clearly it reminded me of Numberphile and Computerphile (two related YouTube channels I adore).

Instead of following more tracks I got distracted filming the Crazyflie quadcopter and shooting it out of the sky with an automatic NERF gun that could only be fired while wearing the NFC ring. I also had a nice discussion with Dick Wall about the future of affordable hardware and 3D printing. That evening we attended the IBM sensor hacking challenge.

In this hands-on session we had to form teams of 4-6 people; each team got an Arduino and a set of 30+ attachable sensors. The challenge was to build the coolest piece of hardware with this. There was only one rule: we had to use WebSphere to communicate with the Arduino. This made absolutely no sense at all to me… we ended up having to write JSP pages that send signals to the Arduino to read from and write to the sensors. Even worse was the deploy cycle (which should not have been needed, but was): stop WebSphere, kill hanging processes, restart Eclipse (!), start WebSphere again, deploy the new pages.

The winning team was the Crazyflie crew. They took their quadcopter, fired up the Arduino IDE (boo!) and wrote new firmware in the Arduino language. In the end they could fly the quadcopter with a joystick attached to the Arduino. Clearly this wasn’t according to the only rule we had, but it was just too cool and had to win.

Day 2: Sessions

The second day of Devoxx UK started with a session by two fellow Dutchies, Regina and Linda. They talked about a pattern they found for changing things in projects. They did this in a “Punch and Judy show” style, which didn’t quite work in my opinion; it came across as rusty and read out. And although the underlying message itself is simple and sound, they started calling everything a pattern. It is a good thing to organise brown bag lunch sessions, sure, but please don’t call this a “brown bag pattern”.

The second session of the day was “Is Your Code Parallel-Ready?” by Maurice Naftalin, also about the new lambdas in Java 8. It had a nice buildup and sketched a good problem; the only thing missing was the code of the solution. The solution was described and hinted at, but I’d have loved to see the actual code there for clarity. The session was also very slow and I didn’t learn anything new, which felt like a bit of a waste of time.

Next I went to a panel with Martijn Verburg, Ben Snipe, Stephen Colebourne and Ted Neward. They talked about the legal wars going on between Google and Oracle. The final conclusion by Stephen was probably the best: Oracle and Google are dumb, dumber and dumbest; neither can stop now and they’ll likely settle.

The title “Modern Web Architecture, 2014 Edition”, by Ted Neward, is a bit misleading, because 80% of the talk was a big history lesson on the origin of ‘the web’. The final 20% was mostly common sense: don’t get locked in to a vendor, and think about creating a platform with multiple possible entry points, not just a website. Not really what I expected to hear, but interesting nonetheless.

The final session was by Arun Gupta and Antonio Goncalves, who quickly went through 50 new features in Java EE 7. For some reason I’m not fond of the direction Java EE is going: all the logic is being put into annotations. I predict a new term, ‘annotation hell’, which is going to replace the ‘XML hell’ we had a couple of years ago. I’ve been warning about this since 2008, and it keeps getting worse.

Final keynote

In the final keynote Martijn Verburg summarized what he had learned at this Devoxx UK and the trends he noticed. There were a lot of lambda talks, maybe a few too many. A couple of years ago there would have been many alternative-language talks (JRuby/Groovy/Scala/etc.), but those have largely disappeared.

Then Dick Wall hit the stage and continued from what we’d been talking about on the first day: cheaper electronics. The Arduino is cool and fairly cheap, as is the Raspberry Pi… but for a real Internet of Things we need devices that are much smaller and much cheaper. Such a device doesn’t need to do graphics, for example, so it can probably be built at a fraction of the cost. Oh, and did you know Dick Wall’s dog has a Fitbit? (True story.)

Finally Arun Gupta and Audrey Neveu talked about Devoxx4Kids, which is still gaining a lot of popularity all over the world. But we can always use more volunteers and new events!

Raspberry Pi emulation on OS X

Building for a Raspberry Pi in an emulator is just as slow as on the actual Pi. There is a slightly faster method involving chroot, but if you really want speed you’ll have to set up a cross-compiler environment or try this other cross-compiler setup.

Also: Links in the article below seem to be broken and it might not work anymore.

Original (outdated) article:

Today a colleague and I wanted to install gnuradio on a Raspberry Pi. This can then be combined with the amazing RTL-SDR dongle: a DVB-T USB stick that can be turned into a full software-defined radio.

More information on that can be found here:

Compiling gnuradio

When trying to compile gnuradio on the RPi (Raspberry Pi) we followed this description, but we quickly ran into a problem: compiling would take 20+ hours!

After running ‘make’ and grabbing a cup of coffee we set ourselves a new goal: is it possible to emulate the RPi on our fast MacBook instead?


After following a couple of guides that didn’t work, we finally managed to get Qemu up and running. This is what we did:

  • Install and upgrade Xcode to 4.3 or above
  • Install the latest version of Homebrew

Now we need to modify the Homebrew formula (which downloads and installs qemu) to get the correct version:

osx$ vi /usr/local/Library/Formula/qemu.rb

I’m using the osx$ prefix for commands that are executed on your OS X machine, pi$ for commands on the virtual Raspberry Pi.

Use the following file to get the working version 1.7.1 (other versions had SCSI problems): qemu.rb

require 'formula'

class Qemu < Formula
  homepage ''
  url ''

  depends_on 'jpeg'
  depends_on 'gnutls'
  depends_on 'glib'

  fails_with :clang do
    build 318
  end

  def install
    system "./configure", "--prefix=#{prefix}"
    system "make install"
  end
end

After setting qemu.rb to the correct version you can install qemu:

osx$ brew install qemu --env=std --cc=gcc-4.2

Now check if qemu is installed correctly:

osx$ qemu-system-arm -version
QEMU emulator version 1.7.1, Copyright (c) 2003-2008 Fabrice Bellard

Now we need to download two things:

  1. Linux kernel
  2. Raspbian image

To download the linux kernel:

osx$ curl > kernel-qemu

Next, download the latest version of the Raspbian image.
In our case this was: 2014-01-07-wheezy-raspbian.img

First boot

Now it is time to start the image in the emulator:

osx$ qemu-system-arm -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw init=/bin/bash" -kernel kernel-qemu -hda 2014-01-07-wheezy-raspbian.img

This first boot is a bit special because we boot straight into /bin/bash. We do this because we need to make two changes to the system:

First we need to comment out a line in this file:

pi$ vi /etc/

Comment out this line by placing a # in front of it:


Now create the following file:

pi$ vi /etc/udev/rules.d/90-qemu.rules

And put in the following content: 90-qemu.rules

KERNEL=="sda", SYMLINK+="mmcblk0"
KERNEL=="sda?", SYMLINK+="mmcblk0p%n"
KERNEL=="sda2", SYMLINK+="root"

Now we can stop the emulator and make one final change. The image file is a bit small, so we need to increase its size before we continue:

osx$ qemu-img resize 2014-01-07-wheezy-raspbian.img +8G
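To confirm the resize took effect you can inspect the image with qemu-img info; this check is my own addition, not part of the original steps, and it is guarded so it degrades gracefully on a machine where qemu or the image file isn’t present:

```shell
# Check the image's new virtual size after the resize.
IMG=2014-01-07-wheezy-raspbian.img
if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMG" ]; then
  qemu-img info "$IMG"    # "virtual size" should now be 8G larger
else
  echo "qemu-img or $IMG not found; run this next to the image"
fi
```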

From now on we can do a normal boot (save this command) by removing the “init=/bin/bash” part:

osx$ qemu-system-arm -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" -kernel kernel-qemu -hda 2014-01-07-wheezy-raspbian.img
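Since you’ll be typing this boot command often, one way to “save this command” is to wrap it in a small launcher script. The filename start-rpi.sh is my own choice, not from the original setup:

```shell
# Write the normal-boot command to a reusable launcher script.
cat > start-rpi.sh <<'EOF'
#!/bin/sh
exec qemu-system-arm -cpu arm1176 -m 256 -M versatilepb -no-reboot \
  -serial stdio \
  -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
  -kernel kernel-qemu -hda 2014-01-07-wheezy-raspbian.img
EOF
chmod +x start-rpi.sh
```

From then on, booting the virtual Pi is just `./start-rpi.sh`.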

The last thing we need to do to get our virtual Raspberry Pi up and running is:

pi$ sudo ln -snf mmcblk0p2 /dev/root
pi$ sudo raspi-config

In this menu you can choose “Expand filesystem” to make use of the increased image size (a reboot is needed afterwards).
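After that reboot, a quick way to verify inside the emulated Pi that the root filesystem actually grew (my own sanity check, not from the original steps) is the standard df tool:

```shell
# Show size and usage of the root filesystem; on the virtual Pi the
# Size column should now reflect the extra 8G.
df -h /
```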

Now you are ready to explore the Raspberry Pi without actually owning one.

(Broken) sources:

Some problems we’ve encountered:

  • qemu raspberry pi boot getting stuck in ‘scsi’ loop (fixed by using version 1.7.1)
  • Disk size problems: resize didn’t work, expand filesystem didn’t work (fixed by expanding and using ln -snf)