If you want to create an object in Java there is only one way: use a constructor.
Constructors in Java are different from all other methods: they don’t have a return type and can only be invoked with the new keyword. A constructor call performs several tasks:
- Initialisation of class variables
- Calling the superclass constructor (the no-argument one, if no explicit call is present)
- Initialisation of instance variables
- Execution of constructor body
Here is a common example of a constructor in Java:
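A minimal example (class and fields are illustrative):

```java
class Person {
    private final String name;
    private int age;

    // The constructor has the same name as the class and no return type
    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    String getName() { return name; }
    int getAge() { return age; }
}
```

It is invoked with `new Person("John", 30)`.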
If you want to create an immutable object in Java you need a constructor to initialise the final fields. Let’s look at another example:
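A sketch of such a class; the field names are guesses, but the 200 and 95.0 are the kind of arguments in question:

```java
final class Person {
    private final String firstName;
    private final String lastName;
    private final int height;      // guessed meaning
    private final double weight;   // guessed meaning
    private final boolean member;  // hypothetical field
    private final String email;    // hypothetical field

    Person(String firstName, String lastName, int height,
           double weight, boolean member, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.height = height;
        this.weight = weight;
        this.member = member;
        this.email = email;
    }

    int getHeight() { return height; }
}
```

The call site then reads `new Person("John", "Doe", 200, 95.0, true, "j.doe@example.com")`.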
Immediately a problem becomes clear… if you read the code you have absolutely no idea what all the arguments mean. What is 200? What is 95.0? We have to look at the API or open the code of Person to see what arguments we can supply or have been supplied. Thankfully there is a design pattern that solves this problem, the Builder pattern.
The builder pattern is an ‘object creation software design pattern’. It can be used to create immutable objects in a fluent, readable way.
Reading this code is much clearer: we don’t have to guess what the arguments mean, it is crystal clear. So we’ve got a good solution, right? Well, not really; I’ve got a big problem with the Builder pattern. For a simple immutable object like the Person above, with just six fields, we need the following code:
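A sketch of what that amounts to, assuming the same six (guessed) fields:

```java
final class Person {
    private final String firstName;
    private final String lastName;
    private final int height;
    private final double weight;
    private final boolean member;
    private final String email;

    private Person(Builder b) {
        this.firstName = b.firstName;
        this.lastName = b.lastName;
        this.height = b.height;
        this.weight = b.weight;
        this.member = b.member;
        this.email = b.email;
    }

    String getFirstName() { return firstName; }
    int getHeight() { return height; }
    // ... getters for the other four fields ...

    static class Builder {
        private String firstName;
        private String lastName;
        private int height;
        private double weight;
        private boolean member;
        private String email;

        Builder firstName(String v) { this.firstName = v; return this; }
        Builder lastName(String v)  { this.lastName = v;  return this; }
        Builder height(int v)       { this.height = v;    return this; }
        Builder weight(double v)    { this.weight = v;    return this; }
        Builder member(boolean v)   { this.member = v;    return this; }
        Builder email(String v)     { this.email = v;     return this; }

        Person build() { return new Person(this); }
    }
}
```

Construction then reads fluently: `new Person.Builder().firstName("John").lastName("Doe").height(200).weight(95.0).member(true).email("j.doe@example.com").build()`.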
Woah… that is a stupendous amount of code just for a more readable way of constructing an immutable object. I don’t have a good solution for this problem; the Java language doesn’t have a good way to construct objects (especially immutable objects) in a readable manner. Constructors end up having a lot of confusing nameless arguments, or you’ll have to write a huge Builder class with a nice fluent interface.
The Java language is constantly evolving and there are now proposals to add ‘value types’ in Java. Reading through the proposal it seems the only way to construct the value type will be using the constructor, but I’m afraid this will quickly become a burden again. I’d love to have a better way to construct objects and (in the future) values, although I have no idea what it should look like. I’d love to have a fluent way of object creation without having to code a big Builder class, preferably in the language itself.
Would it be possible to change the language in a backwards-compatible way to allow this?
One possibility would be to ‘steal’ from other languages, for example Perl’s Moose, where objects are constructed with named arguments:
This has the readability advantage and has the flexibility of the Builder pattern (not needing dozens of overloaded constructors).
Would something like this be a good addition to the Java language?
As some readers might know, I sometimes play a programming game called Core War. This afternoon I was browsing through some old ‘Core Warrior’ newsletters which John Metcalf has collected here. When reading an article about an old successful warrior called ‘Thermite II’ I came across this tiny unsolved mystery:
When executing the code of the second warrior “Killing Hazy Shade Of Winter III” I noticed there is a false assumption in the email. The warrior doesn’t need to die in three cycles!
The first round it’ll execute:
LDP.B #0,#0 (and become LDP.B #0,#-1)
Then it executes:
JMP <-1 (to a location two higher than the warrior, which is usually empty code… but not always!)
The final line in Thermite’s code is a JMP instruction to the launch-instruction which loads Thermite’s bomber ‘Brand’. If John K W’s warrior was loaded at a position 100 or 101 from Thermite it would parasite into the bomber and become Brand before Thermite itself does. This is how the small seemingly suicidal warrior can kill Thermite.
After 18 years, Q.E.D. :-)
As input Will is using thumbnails from the Ludum Dare game programming contest.
To make such a program you basically need four things:
- A collection of thumbnails/photos/images to place
- A way to divide the target mosaic image into a grid of tiles
- A scoring/measuring method for each tile
- A placement algorithm
Will is using all the thumbnails from the contest and a given target image. He then divides the target image into a grid. Next comes the important part, how do you measure how well a thumbnail matches a target tile? He is using a clever per-pixel RGB matching technique. The closer the color matches, the higher the score.
The final (important) step is placement. For each tile he takes the highest scoring thumbnail and assigns it to the tile. If you keep doing this (greedy) you’ll get a good recognisable result:
When he explained his algorithm (in gtalk) it reminded me of the problem I encounter in almost all the Al Zimmermann programming competitions. Eventually all these algorithms boil down to search algorithms. You are looking for the combination of images with the highest overall matching score (correctness/likeness). Instead of using just the greedy algorithm I suggested he try randomly swapping a couple of images and checking the score for improvements. This turned out to make a difference of night and day, check it out:
Of course there are still problems with this method; for example, you’ll quickly run into a local optimum. Maybe you’d have a much better image if A -> B and B -> C and C -> A. This will never be reached by swapping two tiles if the individual steps don’t improve the score. This can be countered by swapping multiple pairs at once and hoping you’ll hit this correct combination.
There are a lot of other ‘smarter’ things you could do, for example always try to put a ‘best match’ on a particular tile and trying to fill the hole it created…. But for now adding this simple random swap is perfect!
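The whole greedy-plus-swapping idea fits in a few lines; here is a toy sketch (the score matrix and sizes are made up):

```java
import java.util.Random;

class MosaicSwapper {
    // score[tile][thumb] = how well thumbnail 'thumb' matches tile 'tile'
    static double totalScore(double[][] score, int[] assignment) {
        double sum = 0;
        for (int tile = 0; tile < assignment.length; tile++) {
            sum += score[tile][assignment[tile]];
        }
        return sum;
    }

    // Randomly pick two tiles and swap their thumbnails,
    // keeping the swap only if the combined score improves
    static int[] improve(double[][] score, int[] assignment, int iterations, long seed) {
        Random rnd = new Random(seed);
        int n = assignment.length;
        for (int i = 0; i < iterations; i++) {
            int a = rnd.nextInt(n), b = rnd.nextInt(n);
            double before = score[a][assignment[a]] + score[b][assignment[b]];
            double after  = score[a][assignment[b]] + score[b][assignment[a]];
            if (after > before) {
                int tmp = assignment[a];
                assignment[a] = assignment[b];
                assignment[b] = tmp;
            }
        }
        return assignment;
    }
}
```

Start from the greedy assignment, run `improve` for a while, and the overall score can only go up.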
Ever since the first time I heard about the Turing Test I’ve wanted to make my own chatbot. It started probably twenty plus years ago, when the only language I could program in was QBASIC. At that time I never got further than:
…. and now? ….
Since that first try (aged 10 or something) I’ve never tried to build another chatbot. But a couple of days ago I read a news article about the Loebner Prize, an annual event that tests the best chatbots in the world against some human judges, and it sparked my interest again.
I started researching the Loebner Prize winners and there seem to be three distinct groups/types of chatbots and algorithms:
- Template based bots
- Crowdsourced bots
- Markov Chain bots
Let me quickly describe how they work.
Template based bots
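Template bots such as the ALICE family use AIML (Artificial Intelligence Markup Language); a single hand-written category might look like this (a representative example, not taken from any particular bot):

```xml
<category>
  <pattern>WHAT IS YOUR NAME</pattern>
  <template>My name is Chatbot.</template>
</category>
```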
When somebody talks to the bot it quickly goes through all the templates and finds matches. This particular pattern will for example match:
What is your name?
My name is Chatbot.
These kinds of bots have enormous databases of AIML templates, mostly hand-crafted by skilled bot configurers. Although they work very well, they don’t seem very ‘smart’ or AI-like to me; so I’m not going to make another AIML-like bot.
Crowdsourced bots

One of the best (most human-like) chatbots is Cleverbot. But in my opinion it is one of the simplest bots around. It uses a nice trick and a huge database. When it encounters a question it doesn’t know or understand, it just ignores it, and stores it. In another chat session it’ll mimic and repeat that question to another human. The result is stored in the huge database for future conversations. As a result all the answers it gives are very human-like (because, well, they are human answers).
But there is obviously a huge drawback: at one moment the bot is pretending to be an 18-year-old male, the next moment it claims to be a 40-year-old female. Then it starts talking about how much it loves horses, the next moment it says it hates animals…
Markov Chain bots
To keep this (long) blog post within reasonable length I’m not going to elaborately explain how Markov Chains work. Markov Chain bots store words in Markov chains; let’s for example say we store chains of length three.
Now let’s imagine we are building a valid reply to a question and we already have “let’s store”… what can we do next? We go into the chain and walk the nodes until we find the two matching results (1) and (5). So the sentence can continue with “this” or “it”.
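A toy version of such a chain in Java, assuming the two stored sentences were something like “let’s store this sentence” and “let’s store it somewhere” (my guess, to match the “this”/“it” continuations):

```java
import java.util.*;

class MarkovDemo {
    // Map a two-word prefix to the words that can follow it (chains of length three)
    static Map<String, List<String>> buildChains(List<String> sentences) {
        Map<String, List<String>> chains = new HashMap<>();
        for (String sentence : sentences) {
            String[] w = sentence.toLowerCase().split("\\s+");
            for (int i = 0; i + 2 < w.length; i++) {
                String prefix = w[i] + " " + w[i + 1];
                chains.computeIfAbsent(prefix, k -> new ArrayList<>()).add(w[i + 2]);
            }
        }
        return chains;
    }

    public static void main(String[] args) {
        Map<String, List<String>> chains = buildChains(List.of(
                "let's store this sentence",
                "let's store it somewhere"));
        // The reply under construction ends in "let's store"; what can follow?
        System.out.println(chains.get("let's store")); // [this, it]
    }
}
```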
Famous bots like MegaHAL kind of work this way (see their explanation). Although this feels much more like real AI/knowledge to me, it also has drawbacks. For example you can’t say these bots are reasoning; they don’t understand the environment/context they are in.
A new attempt
I’ve made a list of what my ideal chatbot should be:
- Learn through conversation/reading text
- Not just repeat, but understand relations and concepts
- Have different scopes, global knowledge, conversational scope
Two days ago these ideas started to take shape in my head and I started writing the first code. The first goal is to make the bot able to read text and extract ‘knowledge’ from it.
The first thing I had to do was break up the input text into pieces. For this I found a great open source framework called Apache OpenNLP. It recognises words and sentences, and it detects verbs, nouns, pronouns, adjectives, adverbs etc.
The next thing I wanted to do was turn all the nouns and verbs into their ‘base’ form. When storing relations in the bot’s memory I want to avoid duplicate entries, so for example “fish are animals” and “fish is an animal” should be the same. For that purpose I’m using WordNet® in combination with the Java library JWNL.
Currently this is what the bot sees when given some input:
Understanding what kind of words are used and being able to transform them into the base-form will make it easier to store ‘knowledge’ and make sense of the world in the future.
Instead of learning how to use a real graph database (like Neo4J) I decided to build something myself. Normally this is a horrible choice, but I’m in it for the fun and to learn new skills. Although it is yet another distraction from the actual bot, after a couple of hours I’ve got the following unit test working:
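My homemade store is nothing fancy; the gist of what that unit test checks can be sketched like this (class and method names invented for illustration):

```java
import java.util.*;

class KnowledgeBase {
    // subject -> set of things it directly "is a"
    private final Map<String, Set<String>> parents = new HashMap<>();

    void learnIsA(String subject, String object) {
        parents.computeIfAbsent(subject, k -> new HashSet<>()).add(object);
    }

    // Follows is-a links transitively: fish -> animal -> organism
    boolean isA(String subject, String object) {
        Deque<String> todo = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        todo.push(subject);
        while (!todo.isEmpty()) {
            String current = todo.pop();
            if (!seen.add(current)) continue;
            for (String parent : parents.getOrDefault(current, Set.of())) {
                if (parent.equals(object)) return true;
                todo.push(parent);
            }
        }
        return false;
    }
}
```

After learning “fish is an animal” and “animal is an organism”, `isA("fish", "organism")` is true even though that fact was never stored directly.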
It is already possible to extract more facts from this ‘knowledge database’ than just the plain information I put in.
The next step in making my bot is going to be data extraction. I’m probably going to make a small template language that (might!) look like this:
The template should match sentences like “fish are animals”, “roses are red”, “Roy is a handsome dude” and “Roy is an obvious liar”.
The bot should be able to store all these ‘facts’ and put them in my graph/relation database. With these data extraction templates it should be possible to build a large knowledge base with facts for the bot, for example just by parsing a Wikipedia dump.
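As a first approximation such a template could compile down to a regular expression; a hypothetical sketch (the real template language may look completely different):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class FactExtractor {
    // Matches "<subject> is/are (a/an) (adjectives...) <object>",
    // e.g. "fish are animals" or "Roy is a handsome dude"
    private static final Pattern IS_A = Pattern.compile(
            "(\\w+)\\s+(?:is|are)\\s+(?:an?\\s+)?((?:\\w+\\s+)*)(\\w+)");

    // Returns {subject, object}, or null if the sentence doesn't match
    static String[] extract(String sentence) {
        Matcher m = IS_A.matcher(sentence);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(3) };
    }
}
```

`extract("fish are animals")` yields the pair (fish, animals), ready to be stored as an is-a relation.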
Now I’m going to dive back into the code and continue my chatbot adventure, keep an eye out for part 2!
This week I’ve started running/jogging, and I’m using RunKeeper on my iPhone to track my progress.
RunKeeper stores all the GPS data from your runs. This data is displayed per run, including a nice map of your route. The most important data, like pace (min/km or min/mile) and distance, can be viewed through the web interface. The most rewarding thing in running is breaking your own records, and RunKeeper keeps a couple of records:
- Longest run (distance)
- Longest run (time)
- Most calories burned in a run
- Fastest average pace
As you can see, all those statistics are about single runs. What I’m missing are the following records:
- Fastest 1K, 3K, 5K, 10K etc.
For example, when I’ve run a very fast 5.5 km run, I’d love to see this reflected as my 5K personal record, but right now it is lost because I’ve already done a slow 6 km run and a very fast 1 km sprint.
But luckily RunKeeper has a very useful option for us developers: Settings > Export data.
This results in a ZIP file with GPX files, raw GPS data with time and location!
The first thing I did was download the XSD and generate the JAXB Java code:
Now I can open the GPX files in Java and extract the GPS locations and time like this:
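With the generated JAXB classes this is straightforward; as a stand-in, here is a simplified sketch using plain DOM parsing (so it runs without the generated code) that pulls out the same lat/long/time data:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

class GpxDemo {
    // Parse a GPX string, print every trackpoint, return the number of points
    static int readTrackpoints(String gpxXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(gpxXml.getBytes(StandardCharsets.UTF_8)));
            NodeList points = doc.getElementsByTagName("trkpt");
            for (int i = 0; i < points.getLength(); i++) {
                Element pt = (Element) points.item(i);
                double lat = Double.parseDouble(pt.getAttribute("lat"));
                double lon = Double.parseDouble(pt.getAttribute("lon"));
                String time = pt.getElementsByTagName("time").item(0).getTextContent();
                System.out.println(lat + "," + lon + " @ " + time);
            }
            return points.getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```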
The next thing I did was translate the lat+long+time waypoints into ‘legs’ consisting of meters+seconds.
Those legs can be used to check for a record during longer runs, like this:
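Sketched in code (helper names are mine; the haversine formula approximates each leg’s distance, and a sliding window over the legs finds the fastest stretch that covers the target distance):

```java
class RecordFinder {
    // Great-circle distance in meters between two lat/lon points (haversine)
    static double meters(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371000.0; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // legs[i] = {meters, seconds}; find the fastest stretch covering targetMeters
    static double fastestSeconds(double[][] legs, double targetMeters) {
        double best = Double.POSITIVE_INFINITY;
        for (int start = 0; start < legs.length; start++) {
            double dist = 0, time = 0;
            for (int i = start; i < legs.length; i++) {
                dist += legs[i][0];
                time += legs[i][1];
                if (dist >= targetMeters) { // window now covers the target distance
                    best = Math.min(best, time);
                    break;
                }
            }
        }
        return best;
    }
}
```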
For example using a target of 5000 meters in a 5070 meter run, this analysis finds the following 5K times:
The information from RunKeeper website is:
- Distance: 5.07 km
- Time: 30:38
- Pace: 6:03
But when analysing the data more accurately, it could have said:
- New personal record 3K: 00:16:46
- New personal record 5K: 00:29:55
I couldn’t believe this feature wasn’t available in RunKeeper… but after a lot of Googling it turns out a lot of other people are looking for this ‘most requested’ feature! With a little bit of Java (100 lines of code) you can get a pretty good result:
A couple of months ago I was in the Eifel area in Germany. Surrounded by forests and hills, completely dark, I looked up to the sky and the view was amazing. The Milky Way was stunning, and there were so incredibly many stars…
That is when I decided: I want a telescope!
Sky-Watcher Heritage 130P
After a lot of searching/reading I decided to go with an entry-level Dobsonian telescope design. This type of telescope is usually cheaper to make and easier to scale up, so for not a lot of money you get the most light-gathering power, which is very important in telescopes. It doesn’t have the convenience of a GOTO mount, which automatically points your telescope to interesting stars and star clusters, so I’ll have to learn to navigate on my own.
What I bought was the Sky-Watcher Heritage 130P, a telescope with a truss-tube design, so it is easier to carry around and take with me to Germany.
While waiting three weeks for clear skies (..sigh..) I hacked together a webcam so I could take pictures. This is how I created the following image of the moon:
(Sky-Watcher Heritage 130P, modified Microsoft Lifecam HD-3000, stacked using RegiStax 6)
After playing around with this setup I decided I wanted a 2x Barlow (which doubles the magnification again). I settled on an Ostara 1.25” achromatic 2x Barlow lens, which also has a T-adapter with EOS T-ring for my DSLR camera. Basically this turns my telescope into the lens of the camera. I’m very pleased with the results; the Barlow itself is great, and I’ve ditched the webcam completely and now only use my DSLR for astrophotography.
Initially my results weren’t very good using the Barlow; the increased magnification pointed out another problem with my telescope: it needed collimation. Every once in a while the mirrors of a telescope need to be realigned. There are a lot of ways to do this, but the easiest and cheapest (free) method is a DIY collimation cap. It is a cap with a small hole in the middle, which I made from the Barlow dust cap. Now you can point the telescope at a bright wall and align the mirrors. This made another huge improvement to the image quality.
Initially when looking at Jupiter I could only see a disk with 4 moons in a row. But after collimation I can clearly see the bands on Jupiter’s surface. After attaching the DSLR and filming a bit at 640x480 (which results in a better picture after stacking than using 1080p, for example), I stacked everything using RegiStax; this is the result:
(Sky-Watcher Heritage 130P, 2x Barlow, Canon 550D 640x480 cropped, stacked using RegiStax 6)
I’ve had a lot of colleagues in my career, some good, some bad, some absolutely fantastic. In this post I’ll go into some of the common traits that make them fantastic:
- Be inquisitive
- Share answers
- Community awareness
- Programmer pride
- Embrace laziness
- Spatial visualization
1: Be inquisitive
Always be inquisitive. If you encounter a problem, the first thing all the great programmers do is Google it.
This will bring you to StackOverflow questions, newsgroups, mailing lists, online documentation and more. This sounds very obvious, but I’ve had a lot of colleagues that first asked a colleague instead of Googling. This is a bad habit in my opinion. Google has a lot more information than all your co-workers combined; there is no need to force a costly context switch upon your colleagues if you’ve got access to the greatest collective knowledge base.
Most of the time the first hit on Google doesn’t provide you with an answer. Don’t give up…! Keep on looking, try different search queries, dive into the source code. If none of the resources listed above yield an answer, ask the community: post a question on StackOverflow, ask on the mailing list or newsgroup.
2: Share answers
Once you’ve found the answer you’ve been looking for, done! Right?
WRONG! This is where you can distinguish yourself from the average programmer. You’ve got something important left to do.
The longer it took you to find the answer, the longer it’ll take other people too. Backtrack how you came to the solution, write it down if necessary. Did you encounter your problem/question on StackOverflow? Forum posts? Newsgroups? If so, go back there and tell them about your answer. If there wasn’t any question but you just used the wrong search queries to find something, write a blog post about your solution in your own words. Other people might find it easier thanks to you.
I’ve even had cases where I asked a StackOverflow question, found the answer myself, and answered my own question. This sounds silly, but it helps other programmers! [example]
3: Community awareness
It turns out there is a bug in Open Source framework N… what does a good programmer do?
Two possible answers:
- Complain, find another alternative framework
- Download the source, patch it
It turns out both are wrong. Open Source is only possible because there is a community willing to fix problems, so answer number one is an obvious mistake. You picked the framework with care (presumably); encountering a bug is rare and a great opportunity to help the community.
So what is wrong with the second answer? Instead of directly downloading the source and creating a patch, first get in contact with the people behind the framework. One famous story is that Linus Torvalds (of Linux) rejected a beautifully crafted piece of code (which would greatly improve the speed of the Linux kernel) just because the author had worked on it all alone for 6 months. First discuss your possible fix with the community and work together!
Also, an open source project isn’t dead because nobody has fixed your bug; it is dead because you didn’t fix the bug.
You are the community!
4: Programmer pride
Programmers should be proud. Once you’ve made something you can’t wait to show the result to the clients and test your take on their ideas. Is there a possible bug in a piece of code you wrote, touched or reviewed? Jump in there and help your colleagues!
But beware, there is good pride and bad pride. Bad pride is when you think a code review isn’t needed because you’ve made it. Bad pride is thinking you’ll solve it alone instead of pair programming. Good pride is being proud of what you’ve made and feeling the shared ownership and responsibility. Be proud of your team, your product. Be passionate, but don’t be proud of yourself. Embrace feedback, code reviews and pair programming!
5: Embrace laziness
In almost all professions the best employees are diligent and hard working. Except programmers, the best and most celebrated programmers are lazy. It is probably the only profession where it is considered a good trait (except maybe mattress and pillow testers).
Perhaps the best-known lazy contribution to computing was the invention of the compiler by Grace Hopper in 1952. She did this, she said, because she “was lazy” and hoped that “the programmer may return to being a mathematician”.
If a programmer is asked to do something on a regular basis he/she will instantly try to come up with a way to avoid it. Script once, run many times. This is also true for our clients: if a client talks about repetitive work it starts to itch, and we programmers instantly feel the urge to eliminate and automate those tasks.
Bill Gates once said: “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.”
Embrace the laziness, automate repetitive work, don’t repeat yourself.
Also: Lazy people will quickly learn keyboard shortcuts, like every good programmer should (!!)
6: Spatial visualization
To navigate an entire code base you’ll need a mental model in your head. To do this you need spatial visualization ability. Mental models in programming are usually called ‘codespace’.
Using spatial skill as a programmer works in two ways. The first is knowing the entire code base. When you have to make a change or addition to some code you’ll need to know where to look. Where do things happen? How does the flow between objects/instances go through the application? Which parts interact with each other?
Second, you have to recognise the code you are working with (on source/text level). Most of the time you switch between multiple files and the best programmers I know are always aware of their location inside the file they are working on. Just by looking at the text outline/structure they can quickly navigate inside source files. This also requires great spatial visualization ability.
There is also research being done on the link between programming, spatial skill and sex differences. It’s known that men and women differ in spatial visualization in the brain. Maybe the current method of teaching programming or navigating in IDEs is better suited for males than females?
More sources on spatial skills and programming:
- Spatial skills and navigation of source code
- Spatial ability and learning to program
- Using Sex Differences to Link Spatial Cognition and Program Comprehension (interesting but provocative idea)
Last week a manager went to an artist for a new painting in his living room:
M: “Hello Mr Artist. I’d like a painting from you”
A: “Hi Mr Manager, what kind of painting do you want?”
M: “It needs to be 100 cm by 50 cm, for in my living room”
A: “Yes, I see, what do you want on the painting? I do abstract art.”
M: “Something with a bit of green and blue”
… some more details …
M: “When are you done with the painting?”
A: “Probably in three weeks!”
M: “Probably? I want to know exactly when it is done!”
A: “Well, okay, for you I’ll have it done in three weeks”
M: “Why does it take you so long?”
A: “Huh? What kind of question is that?”
M: “Well, I can paint that canvas in under an hour! Why do you take three weeks?”
A: “Sure, I can fill the canvas with color in an hour as well, but that isn’t a painting is it?”
M: “Why not?”
M: “Can you do it in two weeks?”
A: “You’re drunk, go home.”
As promised, here is my write-up for Devoxx 2013, the best Java conference in the world (maybe on par with JavaOne).
Having work to do, I arrived Tuesday evening, day zero.
For the second year in a row there was no line at all picking up the awesome wristband with NFC chip. This already is much better than most conferences, which have lanyards with huge badges (yes, I’m looking at you, JFall!). The Devoxx NFC chip is used by sponsors to get your email address, and it is also used to check off lunch and the party at the Noxx. There is also a big sign in front of every conference room where you can give a “Thumbs up” or “Thumbs down” with the NFC wristband after you’ve been to a session. If there is no speaker presenting, the huge movie theater screens show a tweet-stream, upcoming talks and the ‘best’ voted talks of the conference. A new feature this year was the middle ‘star’ in front of each conference room: if the queue of people gets too large you can ‘star’ a talk, and when it becomes available online at Parleys it will automatically be suggested to you!
The speakers dinner was, like every year, very well organized. It gave me the opportunity to speak to some of the brightest minds in the Java community. It was really inspirational and eventually led me to change my online identity.
The first day of the conference. Up too early to film some stock footage before all the people arrived. One advantage: the empty tram from Antwerp city center to Kinepolis.
The opening keynote started with two guys (calling themselves Meta-eX) making amazing live music… with code! They used live-emacs to program Clojure code, which in turn used Overtone to play music in SuperCollider. Using not one but two Monome sequencers they made the sequencing visible, displaying Clojure code with a projector in the background.
After that Stephan Janssen gave us the latest Devoxx statistics, again maxed out with 3500 attendees (Devoxxians). He also talked about Devoxx4Kids with Daniel De Luca, and they got a little present from Aldebaran Robotics: a Nao robot! There was also news of a substantial donation from Oracle for future Devoxx4Kids events!
The final part of the keynote was a demonstration by Oracle: they made a cool chess-playing robot using multiple Raspberry Pi’s. After talking about the upcoming internet of things they looked at the history of the Java language and some bold design decisions under the hood, comparing the language to a wolf in sheep’s clothing. When Java was created the language itself was pretty standard, looked a lot like C, and was aimed at blue-collar programmers. But the technology under the hood was state of the art, full of concepts not yet proven. Today we take garbage collection and virtual machines for granted; when Java was created they were new and didn’t perform yet. The gamble paid off though: garbage collectors, JIT compilers and smart virtual machines caught up and now often outperform static C++ programs.
The Habits of Highly Effective Technical Teams
The next talk I could attend (I had to do some networking and filming) was by Martijn Verburg. Lately he hasn’t been able to do a lot of programming, but he’s working hard on his lean startup company and focuses on teamwork. In this talk he compared great agile teams with the Teenage Mutant Hero Turtles and other fun stuff, while actually making great points. For example: having a shared goal. Like the TMHT, who all fight for justice, your team has to have a shared goal as well. His talks are always full of humor but also have obvious and not so obvious rules; you’ll learn and laugh! This is exactly the reason our company (JPoint) flew him over for a day of hacking and a free talk for JUG members.
Teaching computer science with music
After being amazed by the Meta-eX performance during the keynote I decided to attend this talk by the programming member of the two man collective, Sam Aaron. He talked about his project Sonic Pi, a simple way to teach kids how to program. The main advantage of using music is the instant feedback during live coding. It was a nice inspirational talk about teaching and live coding music, and Sam Aaron is a great public speaker with huge enthusiasm.
Is it a car? Is it a computer? No, It’s a Raspberry Pi JavaFX Informatics System
Simon Ritter talked about using embedded Java on a Raspberry Pi to communicate with his Audi S3 and create an in-car information system, complete with touchscreen and accelerometer. After accessing the car’s CAN bus he repurposed the steering wheel buttons for hands-free control! A tantalizing talk about some real hardware hacking.
Hacking your own band with Clojure and Overtone
The final talk of the day was again with Sam Aaron, this time joined by his other band member Jonathan Graham. They explained how they created Overtone and the other tools needed for their live performance, like emacs-live. During the talk they often played some music, making it a fun interactive experience. People asked questions and it was the perfect end of day one. The only drawback was Sam’s endless enthusiasm: every time Jonathan started a sentence he was interrupted by Sam… poor Jonathan. :-)
For the second time our company sponsored free beer at the famous ‘Beer Central’. The whole evening everybody was free to order any of the 300+ beers in stock! It was great fun and I caught up with a lot of (ex-)colleagues. The only drawback was getting back to the hotel around 06:10 after a wild night.
For no obvious reason I decided to skip the keynote this morning.
Programmers are way cooler than musicians
Another talk about music, this time Geert Bevin talking about and playing his Eigenharp. It was a fun talk but I didn’t like it as much as most other people did; it turned out to be one of the highest-voted talks of the conference. Maybe it was the sound (which sometimes reminded me of the theremin, which I hate), or the fact that I’d already been to two talks about music.
Java EE 7’s Java API for WebSocket
Ah, a serious talk about Java EE by Arun Gupta. This one was a bit of a disappointment for me. The talk itself was good and gave a lot of insight into the new features you get with WebSockets. But I soon got distracted. It reminded me of the small fight I had in 2009 about the new Servlet specification. The new WebSocket API again uses a LOT of annotations, and yes, I understand there are some advantages to using them, but there are also disadvantages. All the examples Arun showed used the annotations (and not the interfaces which are also there, yay!). For example:
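The annotated style looks roughly like this (a representative endpoint, not Arun’s exact example):

```java
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/chat")
public class ChatEndpoint {

    @OnOpen
    public void opened(Session session) {
        // The method name is free; only the annotation ties it to the lifecycle
    }

    @OnMessage
    public String echo(String message) {
        return message; // the return value is sent back to the client
    }
}
```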
The reasons I don’t like those annotations, and would rather see interfaces and adapters:
- Interfaces are easier for code completion, ctrl-space knows what can be done
- Testing is easier with interfaces
- Less error-prone for beginning programmers (what happens with two @OnOpen’s? Which annotations can I use? What can the method signatures look like?)
- Annotations are for meta-data (like mapping!), not for describing code flow
During the session I tweeted about this, and at the end Arun opened the twitter feed to check for comments. The only thing he said about my comment was: “Roy, there is an Oracle document which describes when to use and when not to use annotations, it is very clear, I’ll send it!”.
And here are those rules:
- When defining an API that an application is going to implement and the container is going to call, use an interface, unless one of the following rules applies.
- If an application class is providing an API that’s exposed to other applications, and that class also needs to provide methods that the container will call for lifecycle functions (or other container support or management functions), use an annotation to mark those methods so that the application has flexibility in the choice of names for those methods.
- If an application is going to expose an interface that another application (or user) is going to use without Java, use annotations to mark the methods that correspond to this interface. This avoids the need to define a Java interface that’s never going to be used by anyone other than the one class implementing the interface.
- If there can be more than one occurrence of a method for a given purpose, use an annotation.
- If an interface is chosen and the interface has many methods, consider providing an “adapter” class that implements all the methods with nop implementations. Applications can then subclass this adapter class to avoid the need to implement all methods. It’s often better to decompose the interface into multiple smaller interfaces.
(source: Annotation Rules)
As far as I can judge, the first rule applies and none of the following rules break it.
Architecting Android Applications with Dagger
This talk surprised me! To be honest, the only reason I went to this talk was to have the best seat in the house for the JavaPosse… but I learned about a great framework. There is a problem with using dependency injection frameworks in Android: the introspection and reflection are very slow and often cause the startup time to be 30% higher than it should be. This is why some of the great minds behind Google Guice invented Dagger. The name comes from DAG-er, where DAG stands for Directed Acyclic Graph. It uses a separate compiler to analyze your code and generate new, optimized injection code. This has two main advantages: if there is a problem in the graph it’ll show up at compile time, not runtime. And since the processing is done at compile time, the execution is as fast as handwritten code!
Another year at Devoxx, another JavaPosse LIVE. This year only two of the posse were at Devoxx: Dick Wall and Chet Haase. But they teamed up with Emmanuel Bernard and Guillaume Laforge from the French programming podcast ‘Les Cast Codeurs’. It was the very first session I’ve seen at a conference where I left a little dumber, but I had a great time doing it.
Devoxx4Kids: Best Practices
Absolutely the best talk of the conference! Nah, just saying that because I was in it :-). During this talk Daniel De Luca, together with other Devoxx4Kids organizers (myself included), showed tools and best practices everybody can apply to get children into programming (your own kids, or a Devoxx4Kids session).
It was great fun, and if even one person in the audience decides to organize a Devoxx4Kids session, it will have been an absolute success.
BOF: Lessons learned from Devoxx4Kids
Instead of answering questions after the session we all went into a BoF room for an informal meeting. We had a lot of people talking about their experiences with children, from Arun Gupta teaching Minecraft programming to 200 kids to parents talking about their own experiences. Inspiring!
Devoxx movie: The Counselor
It wasn’t the best movie I’ve seen, it wasn’t the worst. It had the craziest sex scene I’ve ever seen in a movie (catfish?), but also absolutely no interesting story line. It was very ‘American’, pointing out the obvious instead of leaving anything to the imagination, which I didn’t like at all. Also, be prepared to leave the theater with a depression; there is no happy ending. With this depression I didn’t go to the Noxx afterparty (where Meta-eX did an amazing gig!); instead some colleagues and I decided to walk 4 km to the hotel, to reflect on life.
Sorry for the incredibly long blogpost, I’ll keep the end short! The final (half) day at the conference I spent talking to a lot of people instead of going to the talks. I wanted to go to ‘Taming Drones: How Java Controls the Uprising of the Drone Force’ but it was full…
Introduction to Google Glass
The only talk I did attend was by Alain Regnier, one of the few Glass Explorers in Europe. He gave us his insights and shared his experiences with Google Glass after owning it for six months. It will undoubtedly affect us in a couple of years. He skipped over most of the code and instead talked more about his experiences as a Google Glass wearer. It was interesting to hear, but not that informative.
For me this Devoxx had a lot of music, embedded devices (Raspberry Pi) and some Java EE. The organization was perfect as always (although I’d rather see a sandwich instead of the horrible salad). I still love the rock-and-roll bracelet with NFC chip. There wasn’t any big news, which was a bit of a shame. After announcing Devoxx FR two years ago, and Devoxx UK and Devoxx4Kids last year… it felt like 2013 was lacking something… I just don’t know what.
Also check out this amazing analysis of the conference tweets: http://blog.florian-hopf.de/2013/11/devoxx-in-tweets.html
This week I was at Devoxx 2013 in Belgium. Like the previous years I helped film the event. Together with a couple of other Devoxx4Kids organizers we also did a session and an informal BoF talk. I might do some more posts on the actual conference sessions later; there are a lot of cool things to talk about, from making live music with Clojure and Overtone to flying quadcopters with digital bracelets.
For me the biggest eye opener was during the pre-conference speakers dinner. Three times during this dinner/reception I was talking to someone and only after 30 minutes did they suddenly realize they already knew me… online, not in real life. Because I’ve always used a crazy blue avatar and the anonymous ‘redcode.nl’ URL, people can’t possibly connect the face to the online presence. That is when Tasha Carl suggested I use an actual picture instead. (I just noticed she also doesn’t have a recognizable picture, the irony!)
When I got to the hotel that evening and fired up my laptop, I decided to quickly bridge the gap between my digital appearance and real life. I’ve now changed my avatar on Twitter/Facebook/Vimeo/LinkedIn etc. to the image above (the iconic blue creature is still there!). Also, as you might have noticed, I’ve decided to switch my complete website to my other domain: royvanrijn.com. All the links online that point to redcode.nl will still work, but all the links on that page point to the new domain.
The following day I was walking around on the conference floor when a woman approached me. She asked “Are you Roy?”. It turned out I had replied to a tweet she wrote earlier that day, and she recognized my face from the image. Confirmation couldn’t have been more instant!
The Devoxx theme this year was “Reborn at Devoxx”, and this is absolutely true for me: my digital identity is actually reborn!
Since the introduction of EJB 3.1 we can use the @Asynchronous annotation. It provides a simple way to create a new asynchronous process in your application.
If a method with the @Asynchronous annotation is invoked, the container will run it on a separate thread and return immediately. The invocation can return either void or a Future.
To find out more about how to use @Asynchronous and Future, read this page from TomEE.
The problem I’ve found with this annotation is how it handles RuntimeExceptions. If you have void as return value, and inside the asynchronous method a RuntimeException occurs, it’ll be completely swallowed. Nothing will be sent to your logs. This is something I couldn’t find in any documentation.
If you create a new Thread yourself (see example below) it will print the RuntimeException:
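A minimal sketch of such a plain Thread (the exception message is invented):

```java
// A plain Thread: an uncaught RuntimeException is handed to the thread's
// default uncaught-exception handler, which prints the stack trace to System.err.
public class ThreadExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new Runnable() {
            public void run() {
                throw new RuntimeException("Something went wrong!");
            }
        });
        thread.start();
        thread.join(); // the stack trace shows up in the console
    }
}
```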
When executed you’ll end up with the exception nicely printed in the console. But this isn’t the case for @Asynchronous. For example if you do:
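A sketch of what I mean (the bean name and message are invented; this needs to run inside an EJB container, so it isn’t standalone-runnable):

```java
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class MailService {

    @Asynchronous
    public void sendMail() {
        // Swallowed by the container: nothing ends up in the logs
        throw new RuntimeException("Something went wrong!");
    }
}
```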
This exception is swallowed, never to be seen again. Because this wasn’t what I expected, it took me much longer than needed to find the actual bug!
To fix this problem you can instead return a Future and call get(). When calling get() on a Future, and the asynchronous method ends with an exception, it’ll immediately throw an ExecutionException which wraps around the asynchronous exception (call getCause() to get it).
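The same wrapping behaviour can be demonstrated with a plain ExecutorService from java.util.concurrent (a sketch, not container code; the message is invented):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(new Callable<String>() {
            public String call() {
                throw new RuntimeException("Something went wrong!");
            }
        });
        try {
            future.get(); // blocks, then rethrows the failure
        } catch (ExecutionException e) {
            // getCause() holds the original RuntimeException
            System.out.println("Caught: " + e.getCause().getMessage());
        } finally {
            executor.shutdown();
        }
    }
}
```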
But calling get() basically makes the call synchronous again instead of the fire-and-forget I wanted to have. So instead I’ve now ended up with a big try/catch block around the entire @Asynchronous method. It feels a bit wrong… did I miss something? Is there a better method to log the RuntimeException from @Asynchronous methods?
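What I ended up with looks roughly like this (a sketch; the logger and the method name are mine):

```java
@Asynchronous
public void sendMail() {
    try {
        // ... the actual fire-and-forget work ...
    } catch (RuntimeException e) {
        // At least the failure shows up in the logs now
        logger.log(Level.SEVERE, "@Asynchronous method failed", e);
    }
}
```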
In Java there is one golden rule, we don’t break backwards compatibility. Not in the JVM interpreter/compiler and not in the JDK classes.
With the introduction of lambdas in Java there was suddenly a big need to extend some of the Collections interfaces. And that is something that breaks backwards compatibility: if Java wants to add List.forEach, all custom implementations of the List interface won’t compile anymore. This breaks one of the most treasured rules in JDK development: backwards compatibility.
To counter this the ‘default’ keyword was introduced. Just like a concrete method in an abstract class, you can now add some default behaviour to interfaces. This means for example that the List interface can be extended with default logic which can be overridden (but doesn’t have to be). For example, see the following new ‘default’ method from the Iterable interface:
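The forEach added to java.lang.Iterable in JDK 8 is essentially a null check plus a for-each loop. Below is a self-contained paraphrase of it (MyIterable, forEachElement and Numbers are my own names, used so the example compiles on its own):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Objects;
import java.util.function.Consumer;

public class ForEachDemo {

    // Paraphrase of the 'default' forEach method JDK 8 adds to java.lang.Iterable
    interface MyIterable<T> extends Iterable<T> {
        default void forEachElement(Consumer<? super T> action) {
            Objects.requireNonNull(action);
            for (T t : this) {
                action.accept(t);
            }
        }
    }

    // Implementors only supply iterator(); forEachElement comes for free
    static class Numbers implements MyIterable<Integer> {
        public Iterator<Integer> iterator() {
            return Arrays.asList(1, 2, 3).iterator();
        }
    }

    public static void main(String[] args) {
        new Numbers().forEachElement(new Consumer<Integer>() {
            public void accept(Integer i) {
                System.out.println(i);
            }
        });
    }
}
```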
There are some important rules: the added code can only work through the other methods of the interface and static data, because interfaces can’t hold instance state, and it should never manipulate state!
Some more info, examples and other solutions can be found here.
Possible future problems…
There is a new problem that will become larger once there are more and more extensions as default methods: the possibility of colliding default methods. This will still break backwards compatibility. For example, the List interface and the Set interface both have:
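Paraphrased from the JDK 8 interfaces, both declare a default method with the exact same signature:

```java
// In java.util.List:
default Spliterator<E> spliterator() {
    return Spliterators.spliterator(this, Spliterator.ORDERED);
}

// In java.util.Set:
default Spliterator<E> spliterator() {
    return Spliterators.spliterator(this, Spliterator.DISTINCT);
}
```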
Now (crazy example) I’ve made something called MyListSet which implements all methods from List and also all methods from Set. This compiles fine in every earlier Java version but will suddenly fail against Java 1.8. Both List and Set have the same spliterator() at the same level, and will thus collide with the following exception:
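The class in question would look something like this (a sketch; the body is elided):

```java
import java.util.List;
import java.util.Set;

// Fine before Java 8; on 1.8 both List and Set supply a default
// spliterator(), so the two defaults collide in this class.
public abstract class MyListSet<E> implements List<E>, Set<E> {
    // ...
}
```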
The more these ‘default’ methods are going to be used the higher the chance collisions like this will break backwards compatibility in the future. Personally I’m not yet convinced about the default methods.
A couple of days ago at our JPoint hackathon we discussed building (Adopt) OpenJDK. After finding out a better way to build OpenJDK on Windows (read it here), I’ve made my first improvement to OpenJDK.
Where do we get started? It turns out it is fairly easy to make this ‘improvement’. We just need to find the correct source file and do a build as described here.
Unlike the HotSpot core (which is written in C++), the JDK libraries are just Java classes/files. If you want to look at these Java files, browse to [openjdk]/jdk/src/share/classes. There you’ll find familiar directories/packages like “java.*”, “javax.*” and even “sun.*”.
The file we need to change to improve Random is of course: [openjdk]/jdk/src/share/classes/java/util/Random.java
Now we browse until we find:
Now we fix the obvious error and turn it into:
And after the build I pointed Eclipse to my newly generated JDK: [openjdk]/build/windows-x86_64-normal-server-release/images/j2sdk-image.
Next I ran the following code:
This is absolutely not a valid patch but it really shows how easy it is to modify the JDK itself! There is a lot of low hanging fruit in the JDK, from missing unit tests, to unused imports to classes that don’t use generics yet. For more things to hack on, please read: https://java.net/projects/adoptopenjdk/pages/WhatToWorkOnForOpenJDK!
Tomorrow I’ll be enjoying an OpenJDK hack session with Martijn Verburg (aka The Diabolical Developer). To prepare for this session he told us to follow the AdoptOpenJDK build instructions.
Most cool developers today seem to be using OS X, but some of us are stuck on Windows laptops. I actually chose to stick with Windows 7 because every single client I’ve worked for has Windows workstations and only requires the application to run on Windows. But anyway, Martijn said: “Getting OpenJDK to build on Linux/Mac would be easy, Windows can be dicey”
With a bit of Googling and some small problems, I got it working just fine on my Windows 7 (64 bit) machine. Most of the information I got was from this write-up, but I encountered some problems and could skip some steps I didn’t need.
All the tools mentioned are free, but you will have to install some Microsoft Visual C++ packages to compile (which most Java programmers try to avoid).
Windows SDK for Windows 7.1
The blog post above mentions this step second, but I recommend doing it first. If you install Visual C++ 2010 Express first, this step might fail with a weird error. The solution? Uninstall Visual C++ 2010 Express and install Windows SDK for Windows 7.1 first.
So go here and install: http://www.microsoft.com/download/en/details.aspx?id=8279
Microsoft VisualC++ 2010 Express
Next install VisualC++ 2010 Express (beware, Microsoft tries to install another version, pick 2010 Express):
And next Windows Imaging Component (64-bit):
Cygwin (64 bit)
For some reason I ran into problems early on using the 32 bit Cygwin, so I decided to install the 64 bit version instead. This worked, so I recommend doing this.
During the installation you’ll need to add some development packages:
I might have forgotten one or two; this will probably pop up during the ‘configure’ step below. If you find a missing package, please tell me and I’ll update the post.
The version of ‘make’ that is packaged with Cygwin doesn’t work with OpenJDK. Instead we need to download the source from http://ftp.gnu.org/gnu/make/. I picked version 3.82 (this one is mentioned on the OpenJDK page). I downloaded and unzipped the source code here: C:\Projects\OpenJDK\make-3-82
To compile, fire up Cygwin and type:
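Something like the usual GNU build steps, run from the unzip location (the path is from my setup):

```sh
cd /cygdrive/c/Projects/OpenJDK/make-3-82
./configure
make
```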
Now you could take the generated ‘make.exe’ and place it into the Cygwin bin directory, but this isn’t needed.
Next we need Freetype; this step is done exactly as described here. Only do the ‘Freetype’ chapter, generating the lib and dll.
Once you’ve generated the lib+dll, make a new directory (I used C:\Projects\OpenJDK\freetype). Add the ‘include’ directory from the freetype source code used in the step before. Also create a ‘lib’ directory and place the generated lib and dll in it.
The other article mentions you’ll need to install TortoiseHg, Apache Ant and a current JDK 7 (as bootstrap). I didn’t have to do this because I already had all three installed. But please go ahead, they are probably needed:
Getting OpenJDK sources
The other article mentions a lot of PATH requirements that need to be set. I didn’t encounter this at all because I configured my build in another way. First we’ll need to get the OpenJDK source code.
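Something along these lines should fetch everything (following the AdoptOpenJDK instructions; paths are from my setup):

```sh
cd /cygdrive/c/Projects/OpenJDK
hg clone http://hg.openjdk.java.net/jdk8/tl jdk8_tl
cd jdk8_tl
bash ./get_source.sh
```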
This will create a new directory: /cygdrive/C/Projects/OpenJDK/jdk8_tl with all the sources!
Now we need to configure a build for OpenJDK. Instead of just calling ‘./configure’ we’ll need to add a few extra options: we’ll need to point it to the correct MAKE directory and we need to include our custom-built freetype.
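On my machine the invocation looked roughly like this (paths are mine; check ‘bash ./configure --help’ for the exact option names):

```sh
cd /cygdrive/c/Projects/OpenJDK/jdk8_tl
# Put the custom-built make 3.82 first on the PATH
export PATH=/cygdrive/c/Projects/OpenJDK/make-3-82:$PATH
bash ./configure --with-freetype=/cygdrive/c/Projects/OpenJDK/freetype
```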
This will throw a lot of warnings, but in the end it should print something like this:
After the configure step you should have a new build directory added.
Mine was located at: /cygdrive/C/Projects/OpenJDK/jdk8_tl/build/windows-x86_64-normal-server-release
The next thing I did was to build everything, this can take anywhere from 10 to 40 minutes, my build took 20 minutes.
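With the custom make on the PATH, building everything is a single command (a sketch of what I ran):

```sh
cd /cygdrive/c/Projects/OpenJDK/jdk8_tl
make images
```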
After a long wait and a lot of warnings and messages it says it created your very own JDK 8 build. Time to give it a try:
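Something like this, using the image directory from the configure output above:

```sh
cd build/windows-x86_64-normal-server-release/images/j2sdk-image/bin
./java -version
```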
Like I said before, I encountered some problems and errors along the way. Most involved the wrong Cygwin (32 bit), which caused bash to crash (STATUS_ACCESS_VIOLATION) during the configure phase. I also ran into a problem installing ‘Microsoft SDK for Windows 7.1’, which required me to first uninstall Microsoft Visual C++ 2010. Another problem was not having ‘diff’ installed in Cygwin (diffutils), so the build found some other diff.exe (from Git?) which reported differences during the actual build, causing it to stop.
I might have forgotten to write down some step, if you encounter any problem (and solve them) please tell me so I can update this post!
One thing I should add is ‘ccache’. This tool greatly improves the build speed because it caches all unchanged files. It was in Cygwin (32 bit) but it is missing in Cygwin (64 bit)… I’ll have to compile and install it myself. This is one thing I haven’t tried yet, but probably should!
Update: I’ve tried compiling and using ccache-3.1.9, but this broke the build. I might try different versions but for now I’ll just skip ‘clean’-ing altogether :-)
In my career I’ve seen a lot of misconceptions about Agile, and I’d like to take a step back and explain what being agile means to me.
We ‘do’ Agile
This is the biggest misconception of all, and the main reason for my blogpost. Again and again I hear people and companies say: “We do agile”. And this is just wrong… Agile isn’t some system you ‘do’. It isn’t a selection of things you can learn from a book. It is just a mindset, and all the other stuff are consequences; let me explain!
The meaning of ‘Agile’
If you look up the word Agile in the dictionary this is what it comes up with:
- ag·ile adjective \ˈa-jəl, -ˌjī(-ə)l\
- able to move quickly and easily
- quick, smart, and clever
Agile means you’ll move quickly and easily. That is it basically, everything you do when you are agile boils down to:
Start quickly, get something done quickly, expect changes in movement and go with it easily.
For example, in software, expect every bit of code to change (more than once). This will automatically make your code easy to read, extend and refactor. Not only will the code change, the requirements will change too. Nobody knows what a system will look like in two or three years (those who say they do are wrong). We might have a good idea what to make at this moment, but once you get started people will want other things… this always happens. Stay open to these changes in your movement, be agile!
Agile, Scrum, we need to do stand ups!
Ah, the stand ups. You’ve been reading a book and it told you that you need stand ups? This is a requirement to be ‘agile’? WRONG. Like I said before, it is all just a consequence. Why do we do these stand ups? Because you need to know what everybody around you is doing at this moment. In waterfall projects you already knew what people would be doing weeks (even months) in advance. But if you embrace the agile mindset and accept all these changes, you might not even know what you’re doing tomorrow. The more you are open to changes and the quicker you move between tasks, the more often you’ll need to update your colleagues. That is why some projects do stand ups: a consequence of being open to change.
We use whiteboards and post-its, we do agile!
If your design and the requirements don’t change, write everything down with a pen, or maybe put it in some Software Architect solution (Rational anybody?), or just inscribe it in a stone tablet. But if you are agile and move quickly, designs and requirements change. You don’t want to go into a meeting with a printed piece of paper full of tasks or designs; you’d rather have the design on a whiteboard. This allows you to change it during the meeting instead of re-printing. Whiteboards are perfect for erasing and redrawing shapes and lines! It is simply the best tool for somebody who is agile. Except for one problem: if you write down text and have to move it, rewriting takes a long time. To counter this we use post-its: write once, stick everywhere! They complement the whiteboard very well.
So do we need to use a whiteboard and post-its to ‘be agile’? NO… Whiteboard and post-its are a consequence of being agile, it is the best tool for the job as far as I know.
The retrospective
If you don’t try new things and you don’t want to change the way you are working (being not agile), you obviously don’t need to evaluate and come up with changes. But if you are agile, open to change, you need to have meetings on how you can change. As a consequence the retrospective arrives. This meeting is one of the most important meetings if you are agile; you talk about what can be improved in your process: what can make us more agile? What is holding us back?
No agile rules…
I don’t think agile needs a set of rules; you just need people to explain the basic concept of the mindset. The other things most organizations treat as ‘agile rules’ can be considered best practices, because they make sense if you are agile; but please don’t just blindly follow these rules.
Learn from others, embrace the agile mindset and come up with your own system… that works for you!