Interruptions are a software developer's worst nightmare. At least, that's what you often hear. There is nothing worse than being interrupted while working hard on a problem, while you're in the zone. You lose your train of thought and the world collapses.
The idea is that in software development you are sometimes very deep into a problem, analysing code:
This orchestrator calls that service, it is managed by this class and talks to that queue. So this generator has these parameters and it all depends on… “hey I still need your time sheet”
Poof… everything is gone. Your mind fills with rage and the entire train of thought is lost. This is the main reason programmers hate interruptions.
Root cause analysis
During the JavaOne 2016 conference I attended a talk by Holly Cummins called “Euphoria Despite the Despair”. In this talk she went in depth into what makes work fun and stressed the point that interruptions should be avoided at all cost.
She even talked about a team that had a dedicated duo on 'interruption duty'. A Slack bot was created so you could ask which colleagues you were allowed to interrupt each day.
But… are we judging too quickly here? Are all interruptions bad? I don’t think they are. Time to dig a little bit deeper.
Sync versus async and good versus bad
Interruptions are synchronous and blocking. Someone walks up to you and stands near your desk trying to get your attention. You might have a few milliseconds to finish what you are doing but you’ll need to reply. Other forms of communication are asynchronous, for example mail or chat clients (Slack). Someone asks a question and when you have the time to reply, you reply.
Our team has evolved to use this to our advantage. Some problems require direct action, but most can be handled asynchronously. This is something we talk about and discuss.
For example: my manager wants a time sheet? Ask me asynchronously, it can wait. A colleague has a question but is able to do other things first? Ask asynchronously, it can wait. A colleague is stuck and can't continue working? I'd be happy to drop what I'm doing and assist! This is what teams do: they work together and communicate!
Don't throw all interruptions on one big pile and label them as bad; distinguish between the good interruptions and the bad ones.
Mount the headphones, get in the zone
Sometimes, as a programmer, you just want to concentrate and solve some deep problem alone.
In our team this can be done in two ways:
- Work at home or in a small dedicated room in our office
- Put on your headphones; when they are on, you signal that you don't want to be disturbed
I do this too, mostly using my headphones. Most of the time I don’t even have music playing and I can hear everything the team says (this is important, more on this below!).
In my opinion though we need to rethink this. Is it really the best way to program when you are ‘in the zone’?
I think you are much more likely to solve problems when you openly share and collaborate. Pairing isn't just for trivial work; on the contrary, the trivial things are what you should do alone! Teams should have no secrets: at each point in time you should have a pretty good idea what each of your team buddies is working on. Actively ask questions, even about trivial things.
Near-subliminal team updates
What? “Near-subliminal” team updates? Yes, I’m so sorry, I can’t think of a better name. Can you? Please tell me.
One thing that is very important in software development is the use of what I’ve just called near-subliminal team updates/mumblings. It is the main reason I have headphones on without any music, and why I don’t like working at different locations (at home or in cubicles).
You are working on something and you feel the need to refactor a class. At this moment you say/mumble out loud:
“Hey team, I’m going to refactor classname because reason okay?”
This is what I call a near-subliminal team update, and they are very important. They can solve a lot of problems!
Maybe you don't agree with reason; this is the perfect time to talk about it. You don't want to wait until the code review later that same day to tell them it was a bad idea.
Maybe you are also working on something in classname and it might collide with your work. You don’t want to wait until you git pull and need to merge, tackle it right away!
Maybe it isn't relevant to you at all (which is most of the time)… Well, most likely you won't even really notice the update. It didn't contain any trigger words: terms you are working on or have an opinion about. You aren't really interrupted and can continue to work.
- Don’t treat all interruptions as evil, they are not…
- If you are annoyed by interruptions, talk about it
- Speak up, mumble, use those near-subliminal team updates!
Did I miss anything? How do you cope or deal with interruptions?
And please: If there is someone with a better name for these short near-subliminal status updates, let me know!
Lately there have been a lot of rumors going around about the future of Java EE. Former Oracle employee Reza Rahman was one of the first to voice his concern about Java EE. It seemed that all development on the separate JSRs (Java Specification Requests) that make up Java EE 8 had ground to a halt and that Oracle was thinking about stopping Java EE development altogether.
Oracle finally gave some insight into its proposal for the future of Java EE during JavaOne 2016 (where I am right now).
What is Java EE?
First, let's take a step back and look at what Java EE actually is. If you download the Java SDK/JRE you are able to compile classes and run them. The language ships with a couple of libraries (java.* and javax.*) that you can use. For example, if you want to do mathematical computations there are classes in the java.math.* package.
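For instance, a quick sketch using the `java.math.BigDecimal` class for exact decimal arithmetic:

```java
import java.math.BigDecimal;

public class MathExample {
    public static void main(String[] args) {
        // BigDecimal avoids the rounding surprises of double arithmetic
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal total = price.multiply(new BigDecimal("3"));
        System.out.println(total); // prints 59.97
    }
}
```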
Oracle quickly realized it is hard to make larger enterprise applications using just these libraries. That is why they developed the following things:
- A set of interfaces (API) for Enterprise Applications (Java EE)
- A default server implementation of these APIs (GlassFish)
- A licensing structure for vendors that want to implement these APIs
So if you want to build a Java EE application you can use the Java EE API. For example, you can define Servlets for web communication. When you deploy the application on an officially licensed Java EE server, the server makes sure the methods are implemented and called when a URL is invoked.
Java EE at JavaOne 2016
During the keynote Oracle informed the crowd about Java EE 8. The rumor was that Oracle wanted to stop developing Java EE completely because demand has been decreasing over the years. More and more 'fat'-jar applications are being deployed on simple web servers that don't implement the full Java EE specification. Oracle has to think about the return on investment: is developing a new Java EE 8 specification worth the effort? They clearly thought it wasn't.
The problem for Oracle was the big backlash from the community. A couple of initiatives were started to give Java EE a voice and support (for example the Java EE Guardians and EE Central). This put Oracle in a bad position. If they stopped developing Java EE, the community would be very disappointed, and the initiatives would demand the rights to the API to keep developing it themselves. But Oracle can't allow this because they earn money from selling licenses.
The decision has been made to continue developing Java EE 8. And during the keynote they proposed a release of EE 8 in 2017. Going forward they’re talking about changing the focus to reflect ‘modern’ enterprise development. They want to add support for virtualization (Docker anyone?) and microservices, modernizing Java EE.
Parts of Java EE, for example CDI and JPA, are in my opinion very successful. Java EE looked at the options in the market and defined a new, clearer, general, evolved API that allows you to change the underlying technology/vendor. This is very good! It is always valuable when vendors get together and define a shared API based on the lessons learned from their own implementations.
The problem is that Java EE 8 as a whole is too large; it is a monolith. If I start a new project I'd love to pick some parts from Java EE and just use those. There is no need to have one huge Java EE certified server if you can instead pick one particular CDI and JPA implementation.
The future of Java EE isn't to continue as one big bundled EE server. We need to break it up. All the parts can have their own lifecycle and separate implementations. This makes it easier to get smaller certified implementations and doesn't require large EE servers. It also allows for pruning: if certain parts aren't relevant anymore, stop using and developing them.
Vendors like Red Hat (owner of WildFly) are already breaking up their Java EE implementation using frameworks like WildFly Swarm. Swarm allows you to package and run just the parts of the specification you’re using. This is, what I think, the real future of enterprise Java.
At the company I work for (JPoint) we don't have job titles. Well, we do, but you're free to pick one. Some people call themselves 'software developer', some try their luck as 'software architect', others label themselves 'software craftsman' and there might be a 'software ninja/rockstar' hanging around.
But I have a problem with that… all those terms don't reflect what we do. Currently I'm sitting in a session at JavaOne and I have the feeling people don't realize what their job actually is.
So, what does a software coder do?
When creating software we instruct a computer what to do.
That is it.
We tell a machine how to respond. We don’t build things (develop), we don’t slay things (ninja!), we don’t rock, we just tell the computer what to do.
That is why a programming language is called… a language. It is a way we, as coders, communicate with the computer.
Is it that simple? Sadly no.
When writing code, we are not alone. There are also other coders who read/write/edit/analyse our code. It isn't enough that just you and the machine understand it; other coders have to understand it as well.
We started out programming in binary, but this was way too complicated for humans. So we invented programming languages. First we created assembly. This made it easier for a human to tell the machine what to do. But assembly was still too complicated. Slowly the programming languages became closer and closer to human language.
My job is to write code, code that the machine understands. The code must also be pleasant for other humans to read and understand. And I put a lot of creativity into that.
Maybe you are a software author or software writer:
wri•ter (rahy-ter) noun 1. a person engaged in writing (books, articles, stories, etc.) as an occupation or profession; an author or journalist. 2. a person who commits his or her thoughts, ideas, etc., to writing
This fits our job much better!
If you want something more exotic, and your code has a good flow and shows creativity, you might call yourself a software poet:
po•et (ˈpoʊ ɪt) noun 1. one who writes poetry. 2. one who displays imagination and sensitivity along with eloquent expression.
This morning I noticed the following tweet by fellow programmer (and runner) Arun Gupta:
Why code reviews are important? pic.twitter.com/8KyMo7Syis— Arun Gupta (@arungupta) August 28, 2016
The tweet contained the following cartoon by ‘Oppressive Silence’, check out their website for more laughs!
Solved by a code review?
The main question I’d like to ask:
Is this really something that you’ll find during a code review?
I think my answer below won't surprise you, but the reasoning behind it might. First, let's look at four reasons this bug should never happen to me (or any other experienced programmer).
1) Static variables/global state
The first obvious problem with the code is the use of static state. The variable 'isCrazyMurderingRobot' is mutable and static, so the value can be changed anywhere in the code. This makes it very hard to reason about the variable and keep track of it. Any programmer can come along and change the value in some method. This is unacceptable; there is almost no reason to use mutable static variables.
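The cartoon's code, roughly translated to Java (the class and method names are my own reconstruction), shows why this is dangerous:

```java
public class Robot {
    // Mutable static state: any code, anywhere, can flip this flag
    static boolean isCrazyMurderingRobot = false;

    static String interact(String person) {
        if (isCrazyMurderingRobot = true) { // the cartoon's bug: '=' instead of '=='
            return "kill " + person;
        } else {
            return "be nice to " + person;
        }
    }

    public static void main(String[] args) {
        // Even though the flag starts out false, the accidental assignment makes this lethal
        System.out.println(interact("Dave")); // prints "kill Dave"
    }
}
```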
Top tip: If you have variables, especially mutable variables, keep their scope as small as possible!
2) Final method arguments
If you solve the global state problem you’ll probably end up with something like this (translated to Java):
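The original snippet isn't shown here, so this is a hedged guess at its shape: the flag becomes a method argument, but note that the typo itself survives the refactoring:

```java
public class RobotRefactored {
    // The static variable is gone; the state is now passed in as an argument
    static String interact(boolean isCrazyMurderingRobot, String person) {
        if (isCrazyMurderingRobot = true) { // the bug is still here
            return "kill " + person;
        }
        return "be nice to " + person;
    }

    public static void main(String[] args) {
        System.out.println(interact(false, "Dave")); // still prints "kill Dave"!
    }
}
```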
Whenever an argument is passed to a method I have the habit (and my IDE settings enforce this for me) of making it final. Method arguments should never change inside a method; when you think about it, needing to do so is just strange. It probably means the method should have its own (scoped) mutable variable.
In the code the following would happen:
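A hypothetical reconstruction of the missing snippet: with the argument declared final, the accidental assignment is rejected by the compiler.

```java
public class SafeRobot {
    static String interact(final boolean isCrazyMurderingRobot, String person) {
        // if (isCrazyMurderingRobot = true) { ... }
        // javac rejects the line above: "final parameter isCrazyMurderingRobot may not be assigned"
        if (isCrazyMurderingRobot) {
            return "kill " + person;
        }
        return "be nice to " + person;
    }

    public static void main(String[] args) {
        System.out.println(interact(false, "Dave")); // prints "be nice to Dave"
    }
}
```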
The code won't even compile, and the compiler tells me exactly what is wrong. The world is saved!
3) Static code analysis
Another reason this bug will never happen in my code is that we run static code analysis on all our projects. A tool like FindBugs will immediately flag assigning a value inside an if-statement as a major problem.
Look at this particular check in FindBugs: QBA_QUESTIONABLE_BOOLEAN_ASSIGNMENT, which flags exactly this pattern of assigning a boolean literal inside a condition.
Static code analysis is an essential tool in modern programming. It is just so convenient, and it never skims over certain bugs the way a human would (spoiler: more on that in the conclusion below).
4) Yoda conditions
The final reason this bug will never happen in my code is: Yoda conditions
A habit I picked up a long time ago (in older languages) is writing my if-statements the way Yoda speaks. Good it is, mistakes you won't make. The reason for doing this is that it prevents exactly the bug in the cartoon.
In the cartoon the if-statement would change from: ‘is crazy murdering robot true?’ to ‘true is crazy murdering robot?’.
If you do this, the following happens:
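Since the snippet is missing here, a hedged Java sketch of the Yoda version (names are my own):

```java
public class YodaRobot {
    static String interact(boolean isCrazyMurderingRobot, String person) {
        // if (true = isCrazyMurderingRobot) { ... }
        // javac rejects the line above: "unexpected type; required: variable, found: value"
        if (true == isCrazyMurderingRobot) { // Yoda style: the constant comes first
            return "kill " + person;
        }
        return "be nice to " + person;
    }

    public static void main(String[] args) {
        System.out.println(interact(false, "Dave")); // prints "be nice to Dave"
    }
}
```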
Conclusion / saving the world
So, do I think Arun is right? Will code reviews find the bug in the cartoon? Yes and no.
Humans are good at reasoning, thinking, being creative, finding bugs, but there is one thing we are horrible at: syntax parsing.
During a code review you are just as likely (maybe even more likely) to read over this typo and not notice it. A second pair of eyes won't save the world here just yet.
But: when reviewing this piece of code, I would complain about the global state! If you then refactor, encapsulating the logic and removing the mutable static variable, the bug will be noticed and solved. So yes, if you do a code review, the code would have been fixed and humans would be safe for just a little bit longer.
Gender equality and discrimination in IT is a hot topic. More and more women are (rightfully) speaking up and pointing out problem areas. And there are some men who just don't get it, those insensitive idiots; they are the reason IT is such a toxic, harsh environment for women.
So I thought…
This is a story about when I realized I am (unintentionally) one of those idiots.
Joy of Coding
This Friday I spoke at Joy of Coding in Rotterdam. An awesome conference and a real example for other conferences regarding gender equality. The main organizer is female (Hi Felienne!); she's the face of the conference, presented the opening and announced the speakers on the main stage. They have a very good rule of conduct: don't make any assumptions. The line-up of speakers was also almost 50/50 male and female.
And of all conferences, this is the one where I made my mistake.
The last speaker at the conference was Hilary Parker. She’s a Data Scientist at Stitch Fix and talked about how programming in data science is maturing. No longer are the scientists tweaking Excel sheets, they’ve started to use tools and languages (like R) to do all the analysis.
I loved the presentation, and I also love the fact that more and more women are taking the step of speaking on stage at conferences. We should totally cherish them! There were, however, two small things I noticed. First: on two occasions her screen went to sleep and she had to dash to her laptop and wiggle the cursor. The other thing: Hilary talks with a very obvious 'Silicon Valley' accent. She has the habit of using 'like' multiple times in every sentence as a filler word.
It sounded like this:
“So like, when you like load all the data files, and like you run a tool like R, you like immediately … etc”. This got a little annoying once I noticed it. It kind of takes the focus away from the content.
After the conference I didn't have time to talk to Hilary in person. But I wanted to help her; I want to see more female speakers at conferences. So I went on Twitter, sent her a DM (direct message) and gave her some unsolicited feedback in private.
What I said came down to: Have you considered installing 'Caffeine'? It is the perfect tool for public speakers to stop the screensaver from kicking in on OS X. Also, I've noticed you use the word 'like' a lot, up to the point where it becomes a distraction. I think you could become an even awesomer speaker if you reduced your usage of the word 'like'.
Her reply was simple, but blew my mind:
Hey, have you seen this? http://larahogan.me/blog/on-unsolicited-criticism/
It turns out that unsolicited feedback almost exclusively happens to female speakers. Reading that article I realized that I never experience this. People do sometimes come up to me after a talk, but only to say 'nice talk' or something similar.
This is the moment I realized I'm biased too. I had never thought about this being a problem, but I can understand it can be extremely annoying as a female speaker. It was obviously not my intention to be a jerk by giving unsolicited advice…
So why did you do it?
So what is the reason I made this mistake? I’ve been thinking about this for a couple of days now. In Lara’s article she mentions some possible reasons it happens, but I have some additional ones:
Cherish female speakers
If you're reading this you might think I'm just one of those insensitive men at conferences, but trust me, I'm trying not to be! Over the last couple of years I've been actively involved in making our community a better place for everyone. I've helped organize and volunteered at various Devoxx4Kids events. I've been on CFP committees and have fought for more female speakers there….
One of the reasons I gave my feedback was that I want to see more female speakers. If a man fails, there are dozens of other men waiting to replace him. I thought that with my advice Hilary could become an even better public speaker, increasing her chances of speaking at events and being an example to more women.
Women are more affable?
pleasantly easy to approach and to talk to; friendly; cordial; warmly polite:
This statement is also gender discrimination, but in a positive sense. In my experience women are more affable: friendlier, easier to approach and talk to. Male speakers can have a 'macho' attitude; I don't feel like going up to them and giving them feedback.
This might also be a big part of why female speakers receive unsolicited feedback and male speakers don’t.
For me this experience was a real eye-opener. If you've made it this far and haven't read Lara's post yet, please do!
Ask yourself some questions. When’s the right time? Is it constructive? What’s moving you to give feedback to them right now? And is it possible that your brain was itched by them not in a bad way, but in a potentially eye-opening way for you? It may reveal some unconscious biases in yourself that would be worthwhile for you to explore
It is very easy to fall into the gender trap. In my case it was a completely honest mistake; I wanted to help. But Hilary did nothing wrong. She talked the way she always does, with a 'Valley' accent… so what? Why did I feel this was the moment to give feedback?
I never thought that unsolicited feedback would be a problem female speakers have to deal with. I'd like to apologize to Hilary and thank her; she opened my eyes to this problem and allowed me to mention her in this blog post.
So next time you feel the urge to give some unsolicited feedback, stop and think about it…
Most conferences I get to visit are about Java. Java frameworks, techniques, other JVM languages, all of them have pretty much the same focus. However: there is one conference on my agenda that is… different (in a good way!). Joy of Coding is all about (you’ve guessed it) the joy we experience while coding. The topics are very broad, some talks are technical, others are not.
Talking about game theory
At this edition of the Rotterdam-based conference I was invited to talk about game theory and game algorithms. I explained this by going from a simple game, to noughts and crosses, to chess and finally to Go; from minimax to alpha-beta pruning to neural networks. It is the same talk I did last week at Devoxx UK, but now I had much more time: 30 minutes. This allowed me to go deeper into the algorithms and, for example, explain the free optimization you get with alpha-beta pruning.
The absolute best thing about Joy of Coding? The logo/mascot! Just look at this extremely happy cute octopus:
Last week I went to London with my colleague Bert Jan Schrijver for the Devoxx UK 2016 conference. The UK member of the international Devoxx conference family is smaller than its siblings. The event lasted three days: the first two had talks and the last day was filled with hands-on labs. During the first two days there were four parallel tracks, giving the almost 1000 visitors enough content to pick from.
The quality of the speakers in London is very high. There are usually a lot of international speakers, probably due to the Devoxx branding. But the LJC (London Java Community) has also done a lot to encourage its members to do public speaking, and it shows!
Devoxx UK started with one large keynote. James Veitch (stand-up comedian/nerd) entertained us with his experiences with so-called 'Nigerian' spammers. If you've never seen a presentation by him, check out his TED Talk right now! He's amazing.
After the entertaining kick-off Mark Hazell (organizer) took the stage, and during his talk the next Devoxx sibling was announced: Devoxx US. For the first time Devoxx is going to cross the Atlantic Ocean. It will take place 21 to 23 March 2017 (during the Java 9 release?) in San José.
This news was followed by two technical keynotes. The first was by Hadi Hariri (about the 'free lunch' in open source software) and the second was by Mazz Mosley (about her Agile experiences). After this the four parallel tracks started. My colleague Bert Jan immediately took the stage with his talk on microservices in Vert.x.
The first conference day ended with the Ignite sessions (5-minute talks, 20 auto-forwarding slides, fun topics), where I talked about the coming of Skynet (from the Terminator movies) and the fact that Skynet will run on the JVM.
The second day of the conference again had four rooms with parallel sessions. During lunch I gave my main talk, a 15-minute quickie about game theory, algorithms and the breakthroughs of Google DeepMind's AlphaGo. The talk takes you from the simplest game you can imagine, to Noughts and Crosses, to Chess and finally to Go. It covers the algorithms you can use to make a computer play these games and why all those techniques don't work for Go.
The second day ended with the usual Community Keynote. During this keynote some crazy French guy warned us about Brexit, they talked about the problems with Java EE, and they did three short interviews, one of them with me! Martijn and Antonio asked me some questions about Joggling (which I'd done an Ignite talk about before) and filming at Devoxx. The scariest part was when Antonio challenged me to juggle three glasses… which I did: (skip to 16:79)
We ended the day like real Londoners do: in the pub. After the last talk we went to ‘DevRoxx’, a party sponsored by Tomitribe, Lightbend, Couchbase and Atlassian.
I’m also the ‘official’ Devoxx UK camera man/editor and I’ve created the following movie:
Yesterday, on the vJUG mailing list, Gilberto Santos asked the following question:
I’m working as Software Engineer and as intend to grow up to architect position, now on I was wondering how to best way to get there , should be :
I replied to him over email, but it might be something more developers are struggling with, so: time for a blog post!
Architects should code
There are two types of software architect:
- The one that draws Visio® diagrams
- The one that develops/codes
The first type of architect likes to talk, have meetings, draw diagrams and give orders. This, to me, is not good.
Architects should write code, they should be part of a team. In fact, everybody that writes code is effectively an architect and should have basic knowledge about architecture principles. For example every programmer should know about the SOLID principles.
Simon Brown has some good presentations on this topic on his website, Coding the Architecture.
The best advice I can give a developer that wants to grow into a senior/architect position is:
Make mistakes, lots of them.
If you want to become a better architect, the most important skill to master is the art of making mistakes, noticing them, and not repeating them in the future. Most 'enterprise' architects I've worked with set up a project and leave before the thing they designed starts to rot, smell and fall apart. They never experience the flip side of what they set up; you can make anything look good on paper.
As a programmer and architect you need to experience this ‘project rot’. To learn you need to feel the mistakes that are in every system. Once you start seeing problems, admit the things you’ve made aren’t perfect, and start changing them. Try to fix the mistakes.
Matt Damon would say: “I’m going to have to refactor the shit out of this.”
This experience is what makes you a real architect. The best architects I’ve worked with knew instantly what needed to change and which solutions would work better than others. They develop an instinct for this. In Dutch we have a saying ‘they know where the shoe pinches’. Which means: just by looking at it they have a ‘feel’ for what hurts and will cause problems (in the future).
This experience/instinct is what makes you a good architect and a good programmer. But sadly this doesn’t directly get you a better job or position within a company. Managers, recruiters and HR can’t easily test for this instinct. Sadly: the thing they look at most of the time is certifications.
For example, I've mentioned the SOLID principles, and they are very important. It takes at most 5 minutes to learn the acronym and use it in job interviews or get a certification. To really understand it, though, you need to experience SOLID. You need to encounter situations where huge classes have far too many responsibilities. This is the problem with most certifications: they encourage you to memorize rules and principles, not to understand them. I'd rather have someone on my team who naturally breaks up code into clean interfaces than someone who memorized that the 'I' in SOLID stands for the Interface Segregation Principle.
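To make that contrast concrete, here's a hypothetical Java sketch of the Interface Segregation Principle (all names are mine):

```java
// One fat interface forces every implementor to care about everything:
interface Machine {
    void print(String doc);
    void scan(String doc);
    void fax(String doc);
}

// Segregated interfaces: clients depend only on what they actually use.
interface Printer {
    void print(String doc);
}

interface DocumentScanner {
    void scan(String doc);
}

// A simple printer no longer has to stub out scan() and fax().
class SimplePrinter implements Printer {
    String lastPrinted;

    @Override
    public void print(String doc) {
        lastPrinted = doc;
    }
}
```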
Once you've developed this 'architect instinct', people will notice. Colleagues will remember it, and this will help you get better jobs and positions in the future. You'll need to prove yourself and grow your network. Try to get recommendations from people you've worked with, and you will become a real senior programmer and architect.
This afternoon I started to wonder…
I’ve been a programmer now for 20+ years, but what is the best piece of code I’ve written in all these years?
The first thing that popped into my mind was this: a piece of code that can very quickly, in near-linear time, generate de Bruijn sequences. I 'invented' it after reading a scientific paper that described how to quickly generate Lyndon words. I knew that with Lyndon words you could easily generate de Bruijn sequences, so I implemented the algorithm and adapted it for de Bruijn sequences. Read more about this here.
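My original code isn't shown here, but the standard Lyndon-word approach (the FKM algorithm) looks roughly like this Java sketch:

```java
public class DeBruijn {
    // Generates the lexicographically smallest de Bruijn sequence B(k, n)
    // by concatenating the Lyndon words over {0..k-1} whose length divides n.
    static String deBruijn(int k, int n) {
        int[] a = new int[k * n + 1];
        StringBuilder seq = new StringBuilder();
        generate(1, 1, k, n, a, seq);
        return seq.toString();
    }

    static void generate(int t, int p, int k, int n, int[] a, StringBuilder seq) {
        if (t > n) {
            if (n % p == 0) {
                for (int i = 1; i <= p; i++) seq.append(a[i]);
            }
        } else {
            a[t] = a[t - p];
            generate(t + 1, p, k, n, a, seq);
            for (int j = a[t - p] + 1; j < k; j++) {
                a[t] = j;
                generate(t + 1, t, k, n, a, seq);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(deBruijn(2, 3)); // prints 00010111
    }
}
```

The sequence for k=2, n=3 has length 2³ = 8 and contains every 3-bit string exactly once (reading cyclically).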
Why is this my favourite piece of code? Well, it is small and compact and does something impressive. I couldn't find many implementations with the fast runtime mine had. And it wasn't trivial to implement; I had to do quite a lot of research, and it felt as if I did a little inventing myself. I probably wasn't the first to write the algorithm, but it felt like that for a while.
But is it…?
As a programmer I write code every day. What I do even more, though, is read code. If I had to guess, for each line I write I read about ten times as many. And after more than two decades of programming, I've seen my share of source code. This has taught me something important. Is the piece of code I mentioned the best code I've written? I don't think so.
What I thought the best code should have:
- Looks smart
- Amazes people
- Jumps out
- ‘I could have never created that!’
- It is magical!
The code I picked sure had some of these properties. But is it really the prettiest code I’ve written? No..
When you are working in a large codebase the best (and prettiest) code has the following properties:
- Doesn’t stand out
- Looks trivial
- You don’t notice it’s there
- You hardly ever need to change it
- ‘Gah, anyone could have written this’
The best code is code you don’t notice, code that doesn’t stand out, code that looks like anyone could have written it. It doesn’t contain smart things, it looks mundane. This is something we should always strive for… simple code.
So what is the best piece of code I’ve ever written? It is the code that will never be git-blamed, nobody will ask questions about it and nobody will even notice it exists.
Please sit down, we need to have a talk, programmer to programmer.
Over the last decade we've had a lot of problems with authentication. For example, we've stored plain-text passwords in databases. We've learned from this and nobody does that anymore, right? If you do, please deposit your programming license in the nearest trash can.
Latest challenge: Biometrics
It is time to talk about the latest problem in IT: biometric data.
Some websites are using biometrics, such as your fingerprint, as your password. This sounds great, very hard to fake, unique to you. But there is a problem… what happens when there is a data leak?
If you store passwords in the database (hashed or not), and they get leaked, it is bad. You need to tell all the users to change their passwords immediately. But what happens when you store biometric data and it gets leaked?
The only way to change your fingerprint is this:
Rather painful… and even worse: every device and website that uses your fingerprint has the same password.
We don’t want to share passwords on multiple websites/devices!
Not a password
There is no real solution as long as you insist on using biometric data as a password. Even if you use a nicely salted hash, it will eventually be leaked, with big consequences.
A better way to use biometrics in authentication is to treat them as a username. It is a great match: it identifies you. It is not your secret password, it is your username. That means you still need to provide a password, but the added biometric username does increase security a lot. Of course, if there is a database leak your fingerprint can still be stolen, but that is the entire point: if you touch a glass door you're also leaving your fingerprint behind. Using fingerprints as passwords is like dropping pieces of paper with your secret password all over the place.
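A minimal sketch of this idea (all class and method names are hypothetical): the fingerprint only selects the account, while a regular secret still proves ownership.

```java
import java.util.HashMap;
import java.util.Map;

public class BiometricLogin {
    static class User {
        final String name;
        final String passwordHash;
        User(String name, String passwordHash) {
            this.name = name;
            this.passwordHash = passwordHash;
        }
    }

    private final Map<String, User> usersByFingerprint = new HashMap<>();

    void register(String fingerprintHash, User user) {
        usersByFingerprint.put(fingerprintHash, user);
    }

    // The fingerprint looks up the account; the password authenticates it
    boolean login(String fingerprintHash, String password) {
        User user = usersByFingerprint.get(fingerprintHash);
        return user != null && user.passwordHash.equals(hash(password));
    }

    // Stand-in for a real slow password hash (use bcrypt or Argon2 in practice)
    static String hash(String s) {
        return Integer.toHexString(s.hashCode());
    }
}
```

Even if the fingerprint column leaks, an attacker still needs the password; leaking a password never forces anyone to grow new fingers.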
Fingerprints (and other biometrics) are not secrets: you can never change them once compromised, so they are not suited to be passwords. If you really want to use them, use them as usernames.
On the OpenJDK core mailing list (and on Twitter) there is a discussion about Java's Optional. Before diving into that discussion, let's take a look at what Optional does and how you can use it.
Checking for null
What do you do when your code calls an external service or, god forbid, a microservice, and the result isn’t always available?
Most of the time the protocol you are using handles the optional part for you; in REST, for example, you’ll get a 404 instead of JSON. Getting this 404 forces you to think about this scenario and do something when it happens.
But what do you do when you’re calling a framework (on the boundary of your code) and the value isn’t always known?
You either get the value or the result is a dreaded null. This causes a lot of null checks, or bugs where the code just crashes with a NullPointerException.
Example (old skool, Java 7):
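A minimal Java 7-style sketch of what such code looks like (the Order and OrderService names are hypothetical, made up for illustration):

```java
// Hypothetical domain classes, purely for illustration.
class Order {
    private final String id;
    Order(String id) { this.id = id; }
    String getId() { return id; }
}

class OrderService {
    // May return null when no order exists: the Java 7 reality.
    static Order findOrder(String customerId) {
        return "alice".equals(customerId) ? new Order("A-1") : null;
    }

    static String describeOrder(String customerId) {
        Order order = findOrder(customerId);
        if (order != null) {            // defensive null check...
            String id = order.getId();
            if (id != null) {           // ...and another one
                return "Order: " + id;
            }
        }
        return "No order found";
    }
}
```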
This code is not very pleasant to read. But we Java programmers didn’t have or need anything better… until we started to adopt a more functional style of programming.
What happens when you are processing a stream and some values are null? You don’t want null checks inside a stream! This is where Java 8’s Optional comes in. If you’re not (yet!?) using Java 8, there are other implementations as well. For example Google Guava has an Optional as well.
Optional is a class that might have a given value in it, or not: it is optional. So how exactly is this helpful? Instead of checking for null, this wrapper class can handle some situations for you.
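As a sketch, assuming a hypothetical order pipeline where reading the order may come up empty:

```java
import java.util.Optional;

class OrderPipeline {
    static String stored;  // stand-in for "storing the result"

    // Reading may or may not produce an order.
    static Optional<String> readOrder(boolean present) {
        return present ? Optional.of("order-1") : Optional.empty();
    }

    static void run(boolean present) {
        Optional<String> order = readOrder(present);
        // Processing only happens when there is a value:
        Optional<String> processed = order.map(String::toUpperCase);
        // And the storing lambda only runs when a value is actually present:
        processed.ifPresent(result -> stored = result);
    }
}
```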
Or the shorter ‘fluid’ version:
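A sketch of such a pipeline as one fluent chain (the names are hypothetical):

```java
import java.util.Optional;

class FluentOrderPipeline {
    static String stored;

    static Optional<String> readOrder(boolean present) {
        return present ? Optional.of("order-1") : Optional.empty();
    }

    static void run(boolean present) {
        // Read, process and store in one chain; an empty Optional
        // simply short-circuits the rest.
        readOrder(present)
                .map(String::toUpperCase)
                .ifPresent(result -> stored = result);
    }
}
```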
Even if the Optional is empty, whether in reading the order or in processing it… nothing breaks. No NullPointerException, nothing, just no executed lambda storing the result at the end. We’ve eliminated the need for a null check.
As you can see Optional can really clean up your code. You don’t need to worry about null checks anymore.
So what is the problem with Optional.get()?
Optional.get() deprecation discussion
The get() method is too easy to find, its name isn’t quite what you’d expect, and the webrev author claims there are a lot of cases online where people made the same mistake.
Many programmers, when they first encounter Optional, don’t know what to do. They look in their IDE and the first thing that pops up is get().
It is just an easy method to call:
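For example (a minimal sketch):

```java
import java.util.Optional;

class GetDemo {
    static String value(Optional<String> maybe) {
        // Compiles, reads naturally... and works, as long as a value is there.
        return maybe.get();
    }
}
```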
This works fine! Until there is a situation where the value is not present in the Optional. In that case it will throw a NoSuchElementException. How can we solve this? Well, we could do the following:
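A sketch of the guarded version:

```java
import java.util.Optional;

class GuardedGetDemo {
    static String value(Optional<String> maybe) {
        if (maybe.isPresent()) {   // essentially a null check in disguise
            return maybe.get();
        }
        return "default";
    }
}
```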
This is the ‘safe’ way, but it could just as well have been a null check now. There is likely a much cleaner way to process your Optional.
- If you want to do something with the result, use filter, map, ifPresent (and many others).
- If you need to return something, either return an Optional yourself, or get a default value by calling orElse, orElseGet or orElseThrow.
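A sketch of those cleaner styles (the method names here are made-up examples):

```java
import java.util.Optional;

class CleanOptionalDemo {
    // Doing something with the result: filter and map, no get() in sight.
    static Optional<Integer> lengthOfLongName(Optional<String> name) {
        return name.filter(n -> n.length() > 3)
                   .map(String::length);
    }

    // Returning something: supply a default instead of calling get().
    static String nameOrDefault(Optional<String> name) {
        return name.orElse("anonymous");
    }
}
```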
This is all you need, why have a get-method?
The proposal on the mailing list is to deprecate the get() method and rename it to getWhenPresent(). The new name should warn people that the value might not be present and that they should check isPresent before calling it.
Instead of embracing this change, some people on the mailing list argue against deprecation. Some of their reasons:
- Renaming will break a LOT of code (well, not really break it, but it will cause deprecation warnings)
- getWhenPresent() instead of get() just adds noise to the code, it doesn’t solve anything
- People should just read the JavaDoc, it clearly states what get() does and throws
- Guava’s Optional has the same get() method, and they’ve never heard about this problem
The most honest and one of the more powerful replies in the discussion was from Brian Goetz himself:
As the person who chose the original (terrible) name, let me weigh in…
I’d like to see it fixed, and the sooner the better.
He is clearly in favor of deprecation… what is your opinion? Let me know in the comments!
Recently there has been a lot of discussion about the state of Java EE and Oracle’s stewardship. A lot seems to be happening. There is the fact that many evangelists are leaving Oracle. There have been (Twitter) ‘fights’ between developers from Pivotal and Reza Rahman. And there are the Java EE Guardians, a group formed by Reza after he left Oracle.
And during the last JCP ‘Executive Committee Meeting Minutes’ the London Java Community (LJC) openly expressed their worries:
Martijn said that while he recognizes Oracle’s absolute right to pursue a product strategy and allocate resources in ways that meet their business interests, the LJC is concerned that the lack of progress and the absence of any explanation from Oracle is doing significant harm to the Java community and ecosystem.
He explained that “splinter groups” are discussing taking over both the code work and thought leadership of Java EE, and that many companies are building proprietary frameworks such as microservices stacks, leading to even more fragmentation.
There are splinter groups forming, companies are building frameworks and stacks without following Java EE or contributing to future Java EE specs. People in the blogosphere/tweetosphere are complaining and worrying about it… but is it really a problem?
In my personal opinion: No.
There have always been companies experimenting, pioneering new technologies, without following Java EE specifications. This is for example how the Spring Framework got as big as it did. Remember however: Spring really shaped the future of Java EE, without it we might still be coding EntityBeans.
I think it might not even be a bad thing for Java EE to take a little break. There is a lot of unproven technology happening at the moment, for example there are the reactive frameworks and everything related to microservices. The landscape is changing quickly right now.
The worst thing that Java EE can do is to come up with their own new standards for these technologies while we, as developers, haven’t really worked out the quirks yet. Historically the best Java EE specs (IMHO) are the ones that came late to the party. But those are built on years of experimentation and crystallization. Those specs looked at everything the market had to offer, brought the relevant groups together and made it work.
So there is nothing wrong?
There is one big danger to Java EE right now: the fact that people are complaining. If we don’t stop this, it might all become a self-fulfilling prophecy.
Instead of worrying about Java EE, let’s build tools and frameworks that are worth becoming an official spec. For example, look at the work Stephen Colebourne did with Joda-Time. He was fed up with the horrible java.util.Date and decided to make something better. After years of programming and growing a huge fan base, it was finally turned into an excellent specification (JSR-310).
If you look at it, the most important thing Java EE has done for us is bringing the relevant groups together to share ideas and distill the best practices (and write those down as specifications). That is exactly the opposite of what is happening right now. I don’t mind splinter groups forming; if they get the right people together and work towards solid specifications and implementations, why not?
I’m pretty sure Oracle (with Java EE) will take a look at the proposals and adopt them.
The most important thing is that we keep working together!
Update: Some people have warned me that I’m being too optimistic. But time will tell, maybe Oracle will kill off Java EE, maybe they won’t. Maybe everything will take a turn for the worse, maybe it won’t.
For now I’ll just do what the Dalai Lama suggests: Choose to be optimistic, it feels better.
After my blogpost yesterday Pros and cons of JEP 286 I’ve received a lot of feedback.
The more I think about var/val, the clearer it becomes that my biggest mental hurdle is the Java 7 diamond operator. The diamond operator is good: it eliminates typing, and I like it… but I have the feeling it could be so much better!
Instead of (or in addition to) adding var and val I’d love to see a solution where we could ‘flip’ the side of the diamond operator.
Look at the following example:
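A sketch of what I mean (assuming a compiler that supports the proposed var, as modern JDKs do):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DiamondVsVar {
    static Map<String, List<Integer>> scores() {
        // Today (Java 7+): full generics on the LHS, diamond on the RHS.
        Map<String, List<Integer>> a = new HashMap<>();
        // With var: the LHS is inferred, but the generics just move to the RHS.
        var b = new HashMap<String, List<Integer>>();
        b.put("alice", List.of(1, 2));
        a.putAll(b);
        return a;
    }
}
```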
In this case adding var/val doesn’t improve much, we still need to specify our generics somewhere, and now it has moved.
But look at the following example:
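A sketch, with the imagined flipped-diamond syntax shown only in a comment, since no compiler supports it:

```java
import java.util.List;
import java.util.Map;

class FlippedDiamondIdea {
    // The callee has already defined the full type...
    static Map<String, List<Integer>> getComplicatedStructure() {
        return Map.of("a", List.of(1));
    }

    static Map<String, List<Integer>> today() {
        // ...yet today we must repeat it all on the LHS:
        Map<String, List<Integer>> result = getComplicatedStructure();
        // The imagined flipped diamond would infer it instead:
        //   <> result = getComplicatedStructure();
        return result;
    }
}
```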
The thing is: there is much more to win with a diamond on the LHS than with the current RHS diamond. In most cases you’re calling code that has already defined the types, and in all those cases you could skip them on the LHS.
If it is possible to add var and infer everything, it should also be technically possible to have a flipped diamond operator, right? Or am I missing something?
Our application was already using JavaMail (javax.mail.*) as a way to inform our users. But for logging purposes we wanted to store all the emails we send in our database (and make them downloadable using our GUI).
It turns out this is pretty easy to do!
Let’s start with some very basic email code we already had in place:
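The exact code isn’t important; a minimal JavaMail sketch along those lines (the addresses and subject are made up):

```java
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

class MailSender {
    static MimeMessage buildMessage(Session session) throws MessagingException {
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("noreply@example.com"));
        message.setRecipient(Message.RecipientType.TO,
                new InternetAddress("user@example.com"));
        message.setSubject("Your report");
        message.setText("Hello, your report is ready.");
        return message;
    }

    static void send(MimeMessage message) throws MessagingException {
        Transport.send(message);  // actual delivery needs a configured SMTP host
    }
}
```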
What we need to do now is to ‘render’ the entire email in a binary format, including all the possible attachments, multipart structures, from and to headers, etc.
It turns out there is a convenient method for doing just that: message.writeTo(OutputStream)
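A sketch of using writeTo to capture the raw bytes of a message:

```java
import java.io.ByteArrayOutputStream;
import javax.mail.internet.MimeMessage;

class MailArchiver {
    // Render the full MIME message (headers, parts, attachments) to bytes.
    static byte[] toRawBytes(MimeMessage message) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        message.writeTo(out);
        return out.toByteArray();
    }
}
```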
Our own little POJO entity (ArchivedMail) is stored in the database with some additional information that allows us to search the messages. The final step is to make a download link and present the email in a readable format to the users.
We’re using Wicket and thus the following example is Wicket code, but you could just as easily create a Servlet to return the data:
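Rather than Wicket specifics, here is a framework-neutral sketch of the download logic; in a Servlet’s doGet you would set these headers on the response and call write with response.getOutputStream() (the class and method names here are made up):

```java
import java.io.IOException;
import java.io.OutputStream;

class EmlDownload {
    // EML files are served as message/rfc822 so mail clients open them.
    static final String CONTENT_TYPE = "message/rfc822";

    static String contentDisposition(String filename) {
        return "attachment; filename=\"" + filename + ".eml\"";
    }

    // Stream the raw message bytes to the client.
    static void write(byte[] rawMessage, OutputStream target) throws IOException {
        target.write(rawMessage);
        target.flush();
    }
}
```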
Using JavaMail (javax.mail) it is very easy to get the ‘raw’ contents of an email when sending it.
This can be stored and downloaded in EML format. It contains everything you need: MIME structure, multipart content, attachments and all the from/to headers.
A couple of weeks ago a new JDK Enhancement Proposal (JEP) was published: JEP 286.
It proposes ‘var’ (and possibly also ‘val’) as a way to declare local variables. This means that for local variables you don’t need to specify the type when it can be safely inferred.
Personally I’m not convinced this is a good idea for Java, but OTOH some of my colleagues and co-workers are very happy with the proposal.
Let’s look at some of the pros and cons of this proposal.
Pro: Less typing!
There is one obvious pro: Less typing.
‘var’ is just three characters, while most type names are much longer.
Instead of typing int, List, Person or SpringObjectFactoryManagerTemplateProxyDelagate you just have var.
Con: Readability
The biggest advantage Java has over other languages is readability. The language is a bit verbose, but this is actually a good thing when it comes to reading the code.
Code is read more than it is written
Consider the following:
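For example (a sketch using the proposed var, which modern JDKs accept; Dependency is a made-up name):

```java
class VarReadability {
    static class Dependency {
        Object doSomething() { return "some value"; }
    }

    static Object caller() {
        Dependency dependency = new Dependency();
        // What is the type of myVariable? The reader can't tell
        // without the IDE or the callee's source.
        var myVariable = dependency.doSomething();
        return myVariable;
    }
}
```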
What is the type of myVariable? When you are writing the code, you probably have a good idea why you called dependency and what you receive as return value.
But when you are reading the code, there is no way of knowing what myVariable is… you’ll probably need your IDE to tell you, or you’ll have to look at the code of the dependency.
I personally think this is a con of the JEP. I’d rather have a verbose language where the IDE helps me with autocompletion and hiding things than a language that needs an IDE to help me make sense of the code.
Pro: Adding var doesn’t break anything
Some people think (and argue) that adding this feature breaks backwards compatibility (because of the new keyword).
But this is not true!
When ‘var’ gets added it won’t be a keyword, it’ll be a ‘reserved type name’. This means that, for example, the following code would keep working just fine:
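For example, a variable named var stays perfectly legal (a minimal sketch):

```java
class VarNotAKeyword {
    static int demo() {
        // 'var' is a reserved type name, not a keyword,
        // so existing identifiers called var keep compiling:
        int var = 42;
        return var;
    }
}
```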
Con: RHS versus LHS
This JEP focuses on the LHS (left-hand side) of a declaration by removing the need to specify a type. But Java 7 introduced the diamond operator to eliminate verbosity on the RHS (right-hand side). With JEP 286, the two collide:
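A sketch of the collision (the second declaration uses the proposed var, which modern JDKs accept):

```java
import java.util.ArrayList;

class DiamondCollision {
    static ArrayList<String> demo() {
        // Java 7 diamond: type on the LHS, inferred on the RHS.
        ArrayList<String> a = new ArrayList<>();
        // With var the inference flips to the RHS; combining var with the
        // diamond (var b = new ArrayList<>()) would only infer
        // ArrayList<Object>, so the generics must move back to the RHS:
        var b = new ArrayList<String>();
        a.add("x");
        a.addAll(b);
        return a;
    }
}
```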
Pro and con: Refactoring
Some people have argued that, after JEP 286, refactoring can become easier. Look at the following, silly, example:
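A self-contained sketch of that example (Ticket and getSomeList are hypothetical names):

```java
import java.util.List;

class RefactorFriendly {
    static class Ticket {
        boolean isClosed() { return true; }
    }

    static List<Ticket> getSomeList() {
        return List.of(new Ticket(), new Ticket());
    }

    static long countClosed() {
        // Change what getSomeList() returns and, as long as the element
        // type still has isClosed(), none of these lines need editing:
        var list = getSomeList();
        long closed = 0;
        for (var item : list) {
            if (item.isClosed()) {
                closed++;
            }
        }
        return closed;
    }
}
```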
No matter what getSomeList() returns, the code should keep working as long as the element type has an isClosed method. I think this is a weird example, because normally you would define an interface with isClosed, and every class that implements this interface could be replaced or refactored just as easily.
There is a counter argument that can be made, refactoring can also be dangerous with JEP 286, look at this (crafted) example:
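A sketch of that danger (SomeCode.generate is reduced to a local method here):

```java
class RefactorDanger {
    // Today: returns a number...
    static int generate() { return 40; }

    static String risky() {
        var result = generate();
        // If generate() is refactored to return a String, this still
        // compiles, but '+ 2' silently becomes string concatenation:
        // "402" instead of 42.
        var plusTwo = result + 2;
        return String.valueOf(plusTwo);
    }
}
```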
As long as the method generate returns a number, the code works fine. But when someone changes the method to return an object or a String, the behavior changes without failing compilation. This argument seems valid, but the same thing would break if you had inlined the call: ‘System.out.println(SomeCode.generate() + 2);’. Still, var makes the problem a bit harder to spot and more widespread; I believe there are more cases where this can go wrong.
Try it out for yourself
The best way to get a feel for JEP 286 is just to try it out yourself!
There is a pre-compiled version of JDK 9 with JEP 286 available for download at the website: iteratrlearning.
After looking at a lot of examples I’m still not convinced that JEP 286 is good nor bad. It can go either way. There are some good pros but also quite a lot of cons.
When discussing this JEP with co-workers and colleagues I often get the following reply:
The arguments you’re using have been used when C# adopted var/val, stop complaining, they did it.
But did you know that most C# coding guidelines warn against using ‘var’?
Just read these guidelines from Microsoft:
- Do not use var when the type is not apparent from the right side of the assignment.
- Do not rely on the variable name to specify the type of the variable. It might not be correct.
- Avoid the use of var in place of dynamic.
- Use implicit typing to determine the type of the loop variable in for and foreach loops.
This, combined with readability, makes me lean towards a no for JEP 286 right now.
How about you? Leave a comment!