Wow, just wow…
Thanks a lot for all your support, people; it’s great to see and read that I’m not the only one who thinks this situation isn’t normal.
Blogs and news sites:
And it was featured on some major blogs/news sites:
It is a hit on Twitter too, with hundreds of retweets and people taking aim at Shazam:
(which might be a bit unfair since Landmark Digital Services LLC is the complaining patent holder, not Shazam, but I’m not sure how they relate)
springify: RT @StefanRoock: RT @mittie RT @merbist: wow the Shazam guys are real jerks: http://sites.google.com/site/redcodenl/ #wtf
asgerhallas: RT @tomasekeli: Hey #shazam, you suck :) Software patents, seriously uncool. Being a dick, even more so. #shazamfail http://bit.ly/cm4Y5f (via @blakefate)
gerharbo: Patent issues concerning Shazam. A developer from NL has code in Java, Shazam not happy http://sites.google.com/site/redcodenl/
unimatrixZxero: RT @ralph: Now that I've found SoundHound, I can safely delete the app from the jerks that made Shazam: http://sites.google.com/site/redcodenl/
LuisBenavides: RT @stilkov: Software patents just suck RT @merbist: wow the Shazam guys are real jerks: http://sites.google.com/site/redcodenl/
...etc etc etc...
So thanks again for the support and the emails and tweets and blogs… I’m going to wait and see what the next reply from Landmark Digital Services LLC will be. In the meantime, with the article available, you should be able to create something similar yourself.
Or just take the scientific paper describing the algorithm and/or the patent itself; that should be sufficient (and it isn’t too mathematical, nor hard).
Some time ago I received the following email about this project.
Mr. van Rijn,
I am Darren Briggs, the Chief Technical Officer of Landmark Digital Services, LLC. Landmark Digital Services owns the patents that cover the algorithm used as the basis for your recently posted “Creating Shazam In Java”. While it is not Landmark’s intention to alienate those in the Open Source and Music Information Retrieval community, Landmark must request that you do not ship, deploy or post the code presented in your post. Landmark also requests that in the future you do not ship, deploy or post any portions or versions of this code in its current state or in any modified state.
We hope you understand our position and that we would be legally remiss not to make this request. We appreciate your immediate attention and response.
Darren P. Briggs
Vice President &
Chief Technical Officer
Landmark Digital Services, LLC
This scared me a bit. Why are they emailing me? I’ve written some code (100% my own) and implemented my own methods for matching music. There are some key differences with the algorithm Shazam uses.
The code isn’t published yet, but I was planning on releasing it under Apache License to the open source community soon.
It was never my intention to release this commercially, I’m just a programmer who likes to work on technical, mathematical algorithms in his spare time. And if enough people ask for the source code, I’d be happy to give it to them. Who would have thought that creating something at home in a weekend could result in a possible patent infringement!?
Just to be sure, I asked them to confirm that the email was indeed sent from their company. And second, I’d like to know which patents are in play, because I just couldn’t believe that something this simple (a music fingerprint is a hash, and we do a lookup) can be patented… Maybe in the States, but in Europe?
I got the following reply from Landmark Digital Services LLC:
Mr. van Rijn,
I can confirm that the email you received came from me on behalf of Landmark Digital Services, LLC. If you require a more formal notification, the Landmark legal department can provide you with a legal notification. Please let me know if that would be preferable.
The Landmark patents have been granted around the world, including the US, the EU, individual EU-member countries.
Examples of some of the Landmark patents include:
- System and Methods for Recognizing Sound and Music Signals in High Noise and Distortion
- Robust and Invariant Audio Pattern Matching
We appreciate your compliance with our request.
Darren P. Briggs
Vice President &
Chief Technical Officer
Landmark Digital Services, LLC
They are really serious about this. But there is a problem: I still don’t have the numbers of any European patents, so there is no way for me to check the validity of their claims. Once again I asked them for specific patent numbers, and got the following reply:
Mr. van Rijn,
The US patent numbers for the two examples I provided you are 6,990,453 and 7,627,477. Note that there are additional issued patents and pending patent applications in the US and Eu that cover these concepts as well.
Vice President &
Chief Technical Officer
Landmark Digital Services, LLC
Sigh, again two U.S. patent numbers. But well, let’s take a step back. Why does Landmark Digital Services think they hold a patent covering the concepts used in my code? Even though my code works quite differently from the Shazam code (from which the patents came).
What they describe in the patent is a system which:
- Makes a series of fingerprints of a media file and/or media sample
(such as audio, but it could also be text, video, multimedia, etc.)
- Has a database/hashtable of fingerprints as a lookup
- Compares the set of hashtable hits using the moment in time they happened
This is very vague; basically the only innovative idea is matching the found fingerprints linearly in time, because the first two steps merely describe how creating a hash and looking it up in a hashtable work. These concepts are neither new nor innovative.
But, with a bit of imagination one could (possibly) argue that my code (again, written completely by myself in a weekend with some spare time) does the same thing as the patent describes.
Just to be sure I asked around for advice, including help from the FSF (Free Software Foundation) and the EFF (Electronic Frontier Foundation). They forwarded my questions to Bits of Freedom, a Dutch organisation for digital rights.
After a good conversation with Ot van Daalen (from Bits Of Freedom) he suggested I contact Arnoud Engelfriet, a Dutch ICT lawyer and patent attorney with a lot of knowledge about software patents.
In the last couple of days I’ve had quite a few conversations with Arnoud, and he helped me with a lot of my questions.
Here are some of the conclusions:
- Software companies can make your life very miserable if you don’t comply;
- Using the Java-Music Match code commercially will likely result in a lawsuit for patent infringement;
- Releasing the code under an Open Source license on a non-profit website (no ads) is a grey area;
- Writing/using this code privately can’t be patent infringement in the Netherlands;
Arnoud even mailed me:
Als je de software laat staan, loop je de kans dat Landmark Digital Services je een proces aandoet. En zoals gezegd kan dat een fiks bedrag worden.
If you leave the software on your website, you run the risk that Landmark Digital Services files a patent infringement lawsuit. And like I told you, this could result in a substantial amount of money.
Since I don’t want to end up like Dmitry Sklyarov, with the possibility of a lawsuit, I’m not going to publish the code anymore… Grey areas with lawsuits roaming around are best avoided, especially if you consider that the average cost of a patent lawsuit is 1 to 3 million dollars.
In the latest email I received from Landmark Digital Services they are even asking for more:
Mr. Van Rijn,
The two example patent numbers that I sent you are U.S. patents, but each of these patents has also been filed as patent applications in the Netherlands. Also, as I’m sure you are aware, your blogpost may be viewed internationally. As a result, you may contribute to someone infringing our patents in any part of the world.
While we trust your good intentions, yes, we would like you to refrain from releasing the code at all and to remove the blogpost explaining the algorithm.
Thank you for your understanding.
Vice President &
Chief Technical Officer
Landmark Digital Services, LLC
They are still unable to point me to the correct Dutch patent numbers. But more shocking: they are now telling me that my blogpost may contribute to patent infringement internationally. But… doesn’t the patent itself describe their algorithm in much more detail? Isn’t the whole idea of patents that the world gets to know how a technology works, in exchange for the exclusive right to exploit it commercially? And next to asking me not to release the code, they are now also asking me to remove my previous blogpost!
This seems like a very unjust threat to me, and for now I’m going to ignore that request. If they decide to file a formal legal complaint I might reconsider taking down the blogpost. The only action I’ll take right now is not releasing the source code.
I’ve also had contact with other people who have implemented this kind of algorithm. Most notable is Dan Ellis. His implementation can be found here: http://labrosa.ee.columbia.edu/~dpwe/resources/matlab/fingerprint/
He hasn’t been contacted (yet?), but he isn’t planning on taking his MatLab implementation down anyway, and has agreed to let me place the link here. This raises another interesting question: why are they targeting me, somebody who hasn’t even published the code yet, and not Dan’s already published implementation?!
And if they think it’s illegal to explain the algorithm, why aren’t they going after this guy? http://laplacian.wordpress.com/2009/01/10/how-shazam-works/
This is where I got the idea to implement the algorithm and it is mentioned in my own first post about the Java Shazam.
So, has anybody else had these kind of experiences? What would you do in this situation?
The patent infringement story continues here…
ps. I’m sorry John Metcalf: You can stop printing “Free Roy van Rijn” t-shirts…
After a comment on my previous blog entry (about creating a shazam clone) I started tinkering again.
Somebody asked: Could this be used to detect duplicate songs in my mp3 collection!?
That is exactly what I just tried!
Here are some examples:
Duplicate found: 01 - everything in its right place.mp3.song matches with D:\data\v2\01-radiohead-everything_in_it’s_wrong_place-h8me.mp3.song and score: 134
(note: This is a remix of the original, with some rap mixed in)
Duplicate found: 01 - Joy Division - Exercise One.mp3.song matches with D:\data\v2\114 - Joy Division - Exercise One (From Still).mp3.song and score: 255
(note: Yes, duplicate!)
Duplicate found: 01 The District Sleeps Alone Tonight.mp3.song matches with D:\data\v2\The Postal Service - The District Sleeps Alone Tonight.mp3.song and score: 636
(note: Yes, duplicate!)
Duplicate found: 01-AudioTrack 01.mp3.song matches with D:\data\v2\06.Richard cheese -Id like a virgin- Butterfly.mp3.song and score: 144
(note: Yes, duplicate, the second was a radio snippet with jingle in the song)
Duplicate found: 01-editors-papillon_edit.mp3.song matches with D:\data\v2\02-editors-papillon_instrumental_edit.mp3.song and score: 382
(note: Almost a duplicate, the second is an instrumental version of the first)
Duplicate found: 01-the_strokes-you_only_live_once-ser.mp3.song matches with D:\data\v2\01-the_strokes-you_only_live_once.mp3.song and score: 450
(note: Yes, duplicate!)
Duplicate found: 01-the_wombats-backfire_at_the_disco.mp3.song matches with D:\data\v2\Backfire @ The Disco [Promo Version].mp3.song and score: 493
(note: Yes, almost duplicate, the second is a radio-promo announcement with jingle in it)
With a bit of tinkering this algorithm could be used to make a tool that detects duplicate songs, even if the mp3s aren’t exactly the same. Even live versions and instrumental versions are detected if you lower the threshold.
A couple of days ago I encountered this article: How Shazam Works
This got me interested in how a program like Shazam works… And more importantly, how hard is it to program something similar in Java?
Shazam is an application which you can use to analyse/match music. When you install it on your phone, and hold the microphone to some music for about 20 to 30 seconds, it will tell you which song it is.
When I first used it, it gave me a magical feeling: “How did it do that!?” And even today, after using it a lot, it still has a bit of a magical feel to it.
Wouldn’t it be great if we can program something of our own that gives that same feeling? That was my goal for the past weekend.
First things first: to get a music sample to analyse, we first need to listen to the microphone in our Java application! This was something I hadn’t done in Java before, so I had no idea how hard it was going to be.
It turned out to be very easy: you open a TargetDataLine with the AudioFormat you want and read the data from it just like a normal InputStream. Using this method it is easy to open the microphone and record all the sounds!
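The original listing isn’t preserved in this copy of the post, but a minimal sketch with javax.sound.sampled looks like this (the exact AudioFormat parameters are my assumption: mono, 44.1 kHz, 8-bit signed):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
import java.io.ByteArrayOutputStream;

class MicrophoneCapture {

    // Assumed recording format: 44.1 kHz, 8-bit, mono, signed, big-endian.
    static AudioFormat getFormat() {
        return new AudioFormat(44100f, 8, 1, true, true);
    }

    // Opens the microphone and records roughly 'seconds' of audio into a buffer.
    static byte[] record(int seconds) throws LineUnavailableException {
        AudioFormat format = getFormat();
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(format);
        line.start();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        long wanted = (long) (seconds * format.getSampleRate()) * format.getFrameSize();
        long read = 0;
        while (read < wanted) {
            // TargetDataLine.read behaves just like InputStream.read
            int n = line.read(buffer, 0, buffer.length);
            if (n <= 0) break;
            out.write(buffer, 0, n);
            read += n;
        }
        line.stop();
        line.close();
        return out.toByteArray();
    }
}
```

The ByteArrayOutputStream at the end is the recorded data mentioned in the next step.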
So, now we have the recorded data in a ByteArrayOutputStream, great! Step 1 complete.
The next challenge is analyzing the data. When I printed the data I had received in my byte array, I got a long list of seemingly random numbers.
Erhm… yes? This is sound?
To see if the data could be visualized I took the output and placed it in Open Office to generate a line graph:
Ah yes! This kind of looks like ‘sound’. It looks like what you see when using for example Windows Sound Recorder.
This data is actually known as the time domain. But these numbers are currently basically useless to us… If you read the article above on how Shazam works, you’ll see that they use a spectrum analysis instead of direct time-domain data.
So the next big question is: How do we transform the current data into a spectrum analysis?
Discrete Fourier transform
To turn our data into usable data we need to apply the so-called discrete Fourier transform, which turns the data from the time domain into the frequency domain.
There is just one problem: when you transform the data into the frequency domain, you lose all information about time. You’ll know the magnitudes of all the frequencies, but you won’t know when they appear.
To solve this we need a sliding window: we take chunks of data (in my case 4096 bytes) and transform just that bit of information. Then we know the magnitudes of all frequencies that occur during just those 4096 bytes.
Instead of worrying about the Fourier transform itself I googled a bit and found code for the so-called FFT (Fast Fourier Transform). I’m calling this code with the chunks:
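The FFT code I found isn’t reproduced in this copy of the post; to give the idea, here is a minimal recursive radix-2 sketch plus the chunking loop (the chunk length must be a power of two, like the 4096 bytes above):

```java
class FFT {

    // Minimal complex number, just enough for the transform.
    static final class Complex {
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }
        Complex plus(Complex o)  { return new Complex(re + o.re, im + o.im); }
        Complex minus(Complex o) { return new Complex(re - o.re, im - o.im); }
        Complex times(Complex o) { return new Complex(re * o.re - im * o.im, re * o.im + im * o.re); }
        double abs() { return Math.hypot(re, im); }
    }

    // Recursive Cooley-Tukey FFT; x.length must be a power of two.
    static Complex[] fft(Complex[] x) {
        int n = x.length;
        if (n == 1) return new Complex[] { x[0] };
        Complex[] even = new Complex[n / 2];
        Complex[] odd = new Complex[n / 2];
        for (int k = 0; k < n / 2; k++) {
            even[k] = x[2 * k];
            odd[k] = x[2 * k + 1];
        }
        Complex[] e = fft(even);
        Complex[] o = fft(odd);
        Complex[] result = new Complex[n];
        for (int k = 0; k < n / 2; k++) {
            double angle = -2 * Math.PI * k / n;
            Complex w = new Complex(Math.cos(angle), Math.sin(angle));
            result[k] = e[k].plus(w.times(o[k]));
            result[k + n / 2] = e[k].minus(w.times(o[k]));
        }
        return result;
    }

    // Transform the recording chunk by chunk, as described above.
    static Complex[][] transform(byte[] audio, int chunkSize) {
        int chunks = audio.length / chunkSize;
        Complex[][] results = new Complex[chunks][];
        for (int c = 0; c < chunks; c++) {
            Complex[] chunk = new Complex[chunkSize];
            for (int i = 0; i < chunkSize; i++) {
                chunk[i] = new Complex(audio[c * chunkSize + i], 0);
            }
            results[c] = fft(chunk);
        }
        return results;
    }
}
```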
Now we have a two-dimensional array containing all chunks as Complex values. This array holds data about all the frequencies. To visualize this data I decided to implement a full spectrum analyzer (just to make sure I got the math right).
To show the data I hacked this together:
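That hack isn’t preserved in this copy of the post; the essence is mapping each chunk’s magnitudes to pixel brightness, roughly like this sketch (the log scaling and orientation are my own choices):

```java
import java.awt.image.BufferedImage;

class SpectrogramImage {

    // Renders chunk magnitudes (first index = time, second = frequency bin)
    // as a grayscale image: the louder a frequency, the brighter the pixel.
    static BufferedImage render(double[][] magnitudes) {
        int width = magnitudes.length;     // one column per chunk
        int height = magnitudes[0].length; // one row per frequency bin
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        double max = 1e-9;
        for (double[] col : magnitudes)
            for (double m : col) max = Math.max(max, m);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                // log scaling makes quieter partials visible
                int v = (int) (255 * Math.log1p(magnitudes[x][y]) / Math.log1p(max));
                img.setRGB(x, height - 1 - y, (v << 16) | (v << 8) | v); // low freqs at the bottom
            }
        }
        return img;
    }
}
```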
Introducing, Aphex Twin
This may seem a bit off-topic, but I’d like to tell you about an electronic musician called Aphex Twin (Richard David James). He makes crazy electronic music… but some of his songs have an interesting feature. His biggest hit, Windowlicker, for example, has a spectrogram image hidden in it.
If you look at the song as a spectral image it shows a nice spiral. Another song, called ‘Mathematical Equation’, shows Aphex Twin’s own face! More information can be found here: Bastwood - Aphex Twin’s face.
When running this song against my spectral analyzer I get the following result:
Not perfect, but it seems to be Twin’s face!
Determining the key music points
The next step in Shazam’s algorithm is to determine some key points in the song, save those points as hashes, and then try to match them against their database of over 8 million songs. This is done for speed: a hash lookup is an O(1) operation. That explains a lot of Shazam’s awesome performance!
Because I wanted to have everything working in one weekend (this is my maximum attention span sadly enough, then I need a new project to work on) I kept my algorithm as simple as possible. And to my surprise it worked.
For each line in the spectrum analysis I take the points with the highest magnitude from certain frequency ranges. In my case: 40–80, 80–120, 120–180 and 180–300.
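In code, picking the strongest bin per band might look like this sketch (the band boundaries come from the list above; the names are mine):

```java
class KeyPoints {

    // Frequency bands (FFT bin ranges) from the post: 40-80, 80-120, 120-180, 180-300.
    static final int[][] BANDS = { { 40, 80 }, { 80, 120 }, { 120, 180 }, { 180, 300 } };

    // For one line of the spectrum, keep the bin with the highest magnitude in each band.
    static int[] keyPoints(double[] magnitudes) {
        int[] points = new int[BANDS.length];
        for (int b = 0; b < BANDS.length; b++) {
            double best = -1;
            for (int freq = BANDS[b][0]; freq < BANDS[b][1]; freq++) {
                if (magnitudes[freq] > best) {
                    best = magnitudes[freq];
                    points[b] = freq;
                }
            }
        }
        return points;
    }
}
```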
When we record a song now, we get a list of numbers for every line of the spectrum, such as “33, 47, 94, 137”.
If I record a song and look at it visually it looks like this:
Indexing my own music
With this algorithm in place I decided to index all my 3000 songs. Instead of using the microphone you can just open mp3 files, convert them to the correct format, and read them the same way we did with the microphone, using an AudioInputStream. Converting stereo music into mono-channel audio was a bit trickier than I hoped; examples can be found online (it requires a bit too much code to paste here), you mainly have to change the sampling a bit.
The most important part of the program is the matching process. Reading Shazam’s paper: they use hashing to get matches and then decide which song is the best match.
Instead of using difficult point-groupings in time, I decided to use one line of our data (for example “33, 47, 94, 137”) as one hash: 1370944733.
(In my tests using 3 or 4 points works best, but tweaking is difficult: I need to re-index my mp3s every time!)
Example hash-code using 4 points per line:
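A hash function that produces exactly the example value above (1370944733 from the points 33, 47, 94, 137) could look like this; in practice one might also round the points a little so small recording noise still hits the same hash (that refinement is my assumption, not shown in the post):

```java
class PointHash {

    // Pack the four strongest bins of one spectrum line into a single long.
    // hash(33, 47, 94, 137) == 1370944733, matching the example above.
    static long hash(long p1, long p2, long p3, long p4) {
        return p4 * 10_000_000L + p3 * 10_000L + p2 * 100L + p1;
    }
}
```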
Now I create two data sets:
- A list of songs, List<String> (the list index doubles as the song ID)
- A database of hashes: Map<Long, List<DataPoint>>
The long in the database of hashes represents the hash itself, and it points to a bucket of DataPoints.
A DataPoint looks like:
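The original listing is missing from this copy; reconstructed together with the two data sets just mentioned (the field names are assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SongDatabase {

    // One occurrence of a hash: which song it came from,
    // and at which spectrum line (moment in time) it occurred.
    static final class DataPoint {
        final int songId;
        final int time;
        DataPoint(int songId, int time) { this.songId = songId; this.time = time; }
    }

    final List<String> songNames = new ArrayList<>();          // index doubles as song ID
    final Map<Long, List<DataPoint>> hashDb = new HashMap<>(); // hash -> bucket of DataPoints

    void add(long hash, int songId, int time) {
        hashDb.computeIfAbsent(hash, h -> new ArrayList<>()).add(new DataPoint(songId, time));
    }
}
```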
Now we already have everything in place to do a lookup. First I read all the songs and generate hashes for each point of data. This is put into the hash-database.
The second step is reading the data of the song we need to match. These hashes are retrieved and we look at the matching datapoints.
There is just one problem: for each hash there are multiple hits, but how do we determine which song is the correct one? Just look at the number of matches? No, that doesn’t work…
The most important thing is timing: the timings must overlap! But how can we do this if we don’t know where we are in the song? After all, we could just as easily have recorded the final chords of the song.
By looking at the data I discovered something interesting, because we have the following data:
- A hash of the recording
- A matching hash of the possible match
- A song ID of the possible match
- The current time in our own recording
- The time of the hash in the possible match
Now we can subtract the current time in our recording (for example, line 34) from the time of the hash match (for example, line 1352). This difference is stored together with the song ID, because this offset tells us where we could be in the song.
When we have gone through all the hashes of our recording, we are left with a lot of song IDs and offsets. The cool thing is: if a song has a lot of hashes with matching offsets, you’ve found your song.
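A sketch of that counting step (names are mine; the database here simply maps each hash to {songId, time} pairs):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Matcher {

    // For every hash of the recording, look up the possible matches and count
    // how often each (songId, offset) combination occurs, where
    // offset = time in the stored song minus time in our recording.
    static int bestSong(Map<Long, List<int[]>> hashDb, long[] recordingHashes) {
        Map<Integer, Map<Integer, Integer>> offsetsPerSong = new HashMap<>();
        for (int t = 0; t < recordingHashes.length; t++) {
            List<int[]> bucket = hashDb.get(recordingHashes[t]);
            if (bucket == null) continue;
            for (int[] dataPoint : bucket) {
                int songId = dataPoint[0];
                int offset = dataPoint[1] - t; // where we could be in that song
                offsetsPerSong.computeIfAbsent(songId, s -> new HashMap<>())
                              .merge(offset, 1, Integer::sum);
            }
        }
        // The song with the most hits at one single offset wins.
        int best = -1, bestCount = 0;
        for (Map.Entry<Integer, Map<Integer, Integer>> song : offsetsPerSong.entrySet()) {
            for (int count : song.getValue().values()) {
                if (count > bestCount) {
                    bestCount = count;
                    best = song.getKey();
                }
            }
        }
        return best;
    }
}
```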
For example, my program recognised The Kooks - Match Box after listening for just 20 seconds.
Listening for 20 seconds it can match almost all the songs I have. And even this live recording of the Editors could be matched to the correct song after listening 40 seconds!
Again it feels like magic! :-)
Currently, the code isn’t in a releasable state and it doesn’t work perfectly. It has been a pure weekend-hack, more like a proof-of-concept / algorithm exploration.
Maybe, if enough people ask about it, I’ll clean it up and release it somewhere.
The Shazam patent holder’s lawyers are sending me emails to stop me from releasing the code and to remove this blogpost; read the story here.
Yesterday, while browsing DZone, I encountered a couple of HTML 5 blogs. So I decided to try and code some HTML5 - Canvas myself.
This is the result of one hour of trial and error:
Maybe I’ll do a write-up soon, but for now, just check the sources. (Yes they are wrong/hack-ish, I know…).
Here are (in my opinion) five of the most important ‘rules’ in software development.
Changes happen: this is the reason why waterfall doesn’t work and agile works a lot better. People will always change their minds, technology changes over time and situations change. So during software development you have to be flexible with the requirements.
Create software that the client wants, not what he says he wants.
What the client tells you he wants might not (entirely) be what he or she actually wants, or what the organisation needs. It is vital to ask questions and release early. When you have something visible, let the client judge it.
Some people say there is one major drawback to showing designs and the application early on: the client probably gets new ideas and wants to change the design. But that is actually a good thing: you still have plenty of time left to plan these changes! If you follow the waterfall method and release on the delivery date, the client will still want those changes!
Always write down decisions.
During a project, especially during the startup, there are a lot of choices to be made. Is it going to be a web application, or standalone? What frameworks are we going to use?
All these decisions are made with arguments, pros and cons: write those down! A project wiki would be a perfect place. If some members leave the team and new members join, this information is vital. A new member’s first reaction will be: “Why are they doing it this way and not …?” Reading the argumentation will help them understand the situation and get up to speed. Also, during projects there are always moments when you reflect on choices made in the past: “Why did we do …? Wouldn’t we have done better if we had used …?” You can then always review the past arguments to see whether the situation has changed so much that a reconsideration is in order.
The team owns the code.
If you write a piece of code, it is not your code; it belongs to the collective, the team. If somebody changes your code, embrace it: it was probably needed. And if you see an ugly piece of code you didn’t write, you are responsible for it too: refactor it.
Commit driven development.
Test driven development is a great idea: first write a unit test, check that it fails, then write your code until the test succeeds. This way you know two important things: the final code is easy to test (it was written with testing in mind) and the code does what the test expects. But there is one major drawback: most people (not all) simply can’t work this way. For some people it works great, but some developers want to write their tests afterwards. And that is OK; there is just one important rule: never check your code in (on the trunk) if it isn’t tested and doesn’t have proper documentation (e.g. JavaDoc). So: it is no problem if you write the code and then the tests, or the tests and then the code, as long as you write them before committing to your SCM repository.
There are situations where you need to write a lot of new code and still want to be able to check it in, even though you aren’t done yet. For these situations you create a feature branch. On this branch you develop all the new functionality, and once it is done, documented and tested, you merge it with the trunk.
The last couple of weeks I have been playing around with compression/decompression algorithms. This is a field that has always intrigued me. It gives me a magical feeling: like a magician you wave some algorithm around and suddenly files shrink and bytes disappear. With the same motion you can undo all your actions and regenerate the original files from thin air!
Arithmetic coding is a different method to encode bytes. By itself it doesn’t compress anything, but it is the backbone of a whole family of compression algorithms. To explain how it works we need some example text we want to compress: “ABACB”.
First we assign a probability to each symbol:
- A: 45%
- B: 40%
- C: 15%
Let’s turn these probabilities into ranges from 0.0 to 1.0:
- A: 0.00 to 0.45
- B: 0.45 to 0.85
- C: 0.85 to 1.00
Now we start reading our input (ABACB). The first character we find is ‘A’. First we look up the range of ‘A’ in the table: 0.00 to 0.45. Let’s remember these values as ‘low’ and ‘high’.
Time to encode the second character ‘B’. The first thing we need to do is rescale our ranges to fit between low and high:
- A: 0.0000 to 0.2025
- B: 0.2025 to 0.3825
- C: 0.3825 to 0.4500
We want to encode ‘B’, so we end up with low=0.2025 and high=0.3825. Time to update the table again:
- A: 0.2025 to 0.2835
- B: 0.2835 to 0.3555
- C: 0.3555 to 0.3825
Encode ‘A’: we get low=0.2025 and high=0.2835. And adjust the table again:
- A: 0.202500 to 0.238950
- B: 0.238950 to 0.271350
- C: 0.271350 to 0.283500
Encode ‘C’: we get low=0.271350 and high=0.283500. Adjust the table one last time:
- A: 0.27135000 to 0.27681750
- B: 0.27681750 to 0.28167750
- C: 0.28167750 to 0.28350000
So, let’s encode the final symbol ‘B’ and stop right here. We end up with low=0.2768175 and high=0.2816775, and we save/output a value between them that takes the least amount of bytes to write down: for example 0.28!
Now if we want to decode our message “0.28”, we start by building the ranges again; these are the same as the first table above. We look at which range 0.28 falls in: the result is ‘A’ (between 0.00 and 0.45), so we output that symbol. Next we have to update the table. Because we output ‘A’, we know the encoder encoded ‘A’, so we can follow the same steps as the encoder and update the ranges in the same way, ending up with the second table (with values between 0.00 and 0.45). Now 0.28 falls into the range of ‘B’, so we output ‘B’ and adjust again.
As you can see, this one number “0.28” describes our whole message “ABACB”! So we encoded a whole message into one small number. There are a lot of improvements possible to implement this algorithm efficiently; for example, look at range encoding. Another page that helped me a lot is this basic arithmetic coding article by Arturo Campos.
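The steps above can be sketched in code. This little encoder/decoder uses the fixed model from the example; for simplicity it returns the low/high interval itself instead of a clever bit-level representation:

```java
class ArithmeticCoder {

    static final char[] SYMBOLS = { 'A', 'B', 'C' };
    static final double[] PROBS = { 0.45, 0.40, 0.15 }; // fixed model from the example

    // Returns {low, high}: any value in this interval encodes the whole message.
    static double[] encode(String message) {
        double low = 0.0, high = 1.0;
        for (char c : message.toCharArray()) {
            double range = high - low;
            double cum = 0.0;
            for (int i = 0; i < SYMBOLS.length; i++) {
                if (SYMBOLS[i] == c) {
                    high = low + range * (cum + PROBS[i]); // uses the old 'low'
                    low = low + range * cum;
                    break;
                }
                cum += PROBS[i];
            }
        }
        return new double[] { low, high };
    }

    // Mirrors the encoder: find the sub-range the value falls in, output that
    // symbol, then narrow the interval exactly as the encoder did.
    static String decode(double value, int length) {
        StringBuilder sb = new StringBuilder();
        double low = 0.0, high = 1.0;
        for (int n = 0; n < length; n++) {
            double range = high - low;
            double cum = 0.0;
            for (int i = 0; i < SYMBOLS.length; i++) {
                double symLow = low + range * cum;
                double symHigh = low + range * (cum + PROBS[i]);
                if (value >= symLow && value < symHigh) {
                    sb.append(SYMBOLS[i]);
                    low = symLow;
                    high = symHigh;
                    break;
                }
                cum += PROBS[i];
            }
        }
        return sb.toString();
    }
}
```

Encoding “ABACB” pins down an interval that contains 0.28, and decoding 0.28 for five symbols yields “ABACB” again.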
The previous example works great, but there is a problem… if we assign equal probabilities to each possible symbol (each byte) we won’t compress anything! The example above works because I assigned bigger probabilities to A and B than to C… So for it to work we need to guess the correct probabilities, and the better we do this, the smaller our file gets! Luckily there are a couple of methods to assign/guess these probabilities.
Read and save table
One thing you can do is first read the whole file and count how often each character occurs. Then you can scale these counts into probabilities between 0.0 and 1.0 and save those probabilities to the output file. Next we can use these probabilities to encode (and decode) our message! This does add quite a bit of overhead, because you have to store the table…
Adaptive probabilities
Another thing we can do is just start with equal probabilities for each symbol and adjust them during the encoding. As long as we use the same method during encoding and decoding, the probability tables will remain the same.
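A sketch of such an adaptive model: both sides start from the same counts and perform the same update after every symbol, so encoder and decoder never disagree (the names are mine):

```java
import java.util.Arrays;

class AdaptiveModel {

    final int[] counts;

    AdaptiveModel(int symbols) {
        counts = new int[symbols];
        Arrays.fill(counts, 1); // start with equal probabilities
    }

    // Current probability range [low, high) of a symbol, scaled to 0..1.
    double[] range(int symbol) {
        int total = 0, before = 0;
        for (int i = 0; i < counts.length; i++) {
            total += counts[i];
            if (i < symbol) before += counts[i];
        }
        return new double[] { (double) before / total, (double) (before + counts[symbol]) / total };
    }

    // Called after each symbol by BOTH encoder and decoder, keeping tables in sync.
    void update(int symbol) {
        counts[symbol]++;
    }
}
```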
Prediction by Partial Matching (PPM)
So, we can adjust the probabilities at runtime, for example by counting all the symbols we have seen. But why stop there? In a piece of English text the letter ‘E’ will probably have the highest count/probability. But if we encounter the symbols “COMPRESSI”, we know the next character will most likely be an “O”, not an “E”! So what if we enlarge our context? This is what PPM does: looking at the bigger picture. Not only count single symbols, but also combinations. The longer the combinations, the more specific our predictions get (but the more memory we need).
Context Mixing (CM)
With PPM you have one model which predicts the probability of the next symbol by looking at its past statistics. But why stop there? With Context Mixing (CM) you have multiple models predicting the next symbol, working together. When you do this right you can predict the next symbol even better, and thus get better compression ratios! This is how the best current compression algorithms work.
This is just one family of compression algorithms; there are a lot more which I won’t describe here (yet?). Instead of predicting the next symbol you could also replace long series of symbols with shorter unique codes. This is known as dictionary coding, an entirely different family of compressors (including LZ, zip and gz).
It’s a problem I encounter in most JEE projects I’ve worked on so far: handling null values. But why is this a problem? And what strategies can we follow to reduce it? That is what I’m trying to find out in this post.
Let’s start with a piece of business logic, in a world where we don’t have null values:
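The original listing isn’t preserved in this copy of the post; it was along these lines (the Person/Account domain and names like getTotalBalance are illustrative):

```java
import java.util.List;

class Account {
    private final double balance;
    Account(double balance) { this.balance = balance; }
    double getBalance() { return balance; }
}

class Person {
    private final String name;
    private final List<Account> accounts;
    Person(String name, List<Account> accounts) { this.name = name; this.accounts = accounts; }
    String getName() { return name; }
    List<Account> getAccounts() { return accounts; }
}

class BalanceService {

    // In a null-free world the logic reads straight through.
    static double getTotalBalance(Person person) {
        double total = 0;
        for (Account account : person.getAccounts()) {
            total += account.getBalance();
        }
        return total;
    }
}
```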
This looks good and understandable! But this isn’t what I see in most projects. Usually I see code like this:
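Again the listing is missing here; reconstructed, the null-riddled version of the same method looks something like:

```java
import java.util.List;

class Account {
    private final Double balance;
    Account(Double balance) { this.balance = balance; }
    Double getBalance() { return balance; }
}

class Person {
    private final List<Account> accounts;
    Person(List<Account> accounts) { this.accounts = accounts; }
    List<Account> getAccounts() { return accounts; }
}

class DefensiveBalanceService {

    // Every value might be null, so the real logic drowns in checks.
    static double getTotalBalance(Person person) {
        double total = 0;
        if (person != null) {
            List<Account> accounts = person.getAccounts();
            if (accounts != null) {
                for (Account account : accounts) {
                    if (account != null && account.getBalance() != null) {
                        total += account.getBalance();
                    }
                }
            }
        }
        return total;
    }
}
```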
Wow, that is not a pretty sight, not at all! What can we do about it and how did it happen?
Inversion of logic
The first strategy we can follow is based on inversion of logic, which I’ve blogged about before. The idea is that we exit early; this improves the readability of our method. Let’s see what happens to our method if we follow this pattern:
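Reconstructed in the same illustrative domain, the early-exit version might read:

```java
import java.util.List;

class Account {
    private final Double balance;
    Account(Double balance) { this.balance = balance; }
    Double getBalance() { return balance; }
}

class Person {
    private final List<Account> accounts;
    Person(List<Account> accounts) { this.accounts = accounts; }
    List<Account> getAccounts() { return accounts; }
}

class EarlyExitBalanceService {

    // Exit early on null instead of nesting; the method reads top to bottom.
    static double getTotalBalance(Person person) {
        if (person == null) return 0;
        List<Account> accounts = person.getAccounts();
        if (accounts == null) return 0;
        double total = 0;
        for (Account account : accounts) {
            if (account == null || account.getBalance() == null) continue;
            total += account.getBalance();
        }
        return total;
    }
}
```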
This is somewhat better, a bit shorter, but we still have all the ‘!= null’ and ‘== null’ checks we don’t want.
Code by contract
The best way to get rid of null checks is to get rid of nulls in your applications. Just don’t return null from any of your methods…! This sounds very easy and straightforward, but it is harder than it sounds, because returning null has become such a habit.
The good thing is that current IDEs support the @NotNull and @Nullable annotations. With these annotations you can tell other programmers, your IDE and static code analysis tools what your intent was when creating a method. They also help you clearly state your assumed preconditions, and they help the IDE point out possible coding errors.
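A sketch of what that looks like. The real annotations come from a library such as org.jetbrains.annotations; the stand-ins below exist only to keep the example self-contained:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for the real annotations, e.g. org.jetbrains.annotations.
@Retention(RetentionPolicy.CLASS) @interface NotNull {}
@Retention(RetentionPolicy.CLASS) @interface Nullable {}

class PersonRepository {

    // Contract: never returns null, so callers may skip the null check.
    @NotNull static String defaultName() {
        return "unknown";
    }

    // Contract: may return null; the IDE warns when callers dereference it blindly.
    @Nullable static String findName(int id) {
        return id == 1 ? "Roy" : null;
    }

    // Precondition: the IDE flags callers that pass a possibly-null argument.
    static int nameLength(@NotNull String name) {
        return name.length();
    }
}
```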
Using this method you have some more certainties. But it isn’t a silver-bullet on its own. We have to stop and think, where do the null-values come from?
Actually, when you stop returning null (which is entirely up to you and your team) there are still situations which are ‘unchecked’: external APIs, frameworks, the ORM mapper you are using. So everywhere you call methods that you haven’t written, you still have to do manual checks. But make sure you do this right away; further on in the code you don’t have to check anything, thanks to your @NotNull contracts.
If you do this to the Person object above, and all its fields, you end up with the beautiful clean code of the first example. No checks; everything is always filled, by contract. The only thing you want to add is the same information about the parameters.
In my experience this works very well; I’ve done this a couple of times, even before @NotNull and @Nullable existed. Back then we would just put the information in our Javadoc, but with the IDE checking for it, this has become a lot easier to use.
Null object pattern
A whole different approach than coding by contract is the use of a null object. The idea behind this pattern is that you don’t return null, but a real object instead. In our example we would do the following:
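Reconstructed along the lines of the post (the lookup method and account type are illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class Person {
    private final String name;
    private final List<String> accounts;
    Person(String name, List<String> accounts) { this.name = name; this.accounts = accounts; }
    String getName() { return name; }
    List<String> getAccounts() { return accounts; }
}

// The null object: a real Person with safe, empty values; typically a singleton.
class NullPerson extends Person {
    static final NullPerson INSTANCE = new NullPerson();
    private NullPerson() { super("", Collections.<String>emptyList()); }
}

class PersonLookup {

    // Never returns null: callers can always dereference the result.
    static Person findPerson(String name) {
        if ("Roy".equals(name)) {
            return new Person("Roy", Arrays.asList("NL01", "NL02"));
        }
        return NullPerson.INSTANCE;
    }
}
```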
The huge advantage is that you can always safely call “person.getName()” and “person.getAccounts()”, because even if you have a NullPerson, the object still exists. This null object is usually a singleton.
A more elaborate but general version of this pattern is the special case pattern, named by Martin Fowler. Instead of having just a Null special case, you can introduce a special case class for each exceptional situation.
Now you can return much more information than just a meaningless “null”!
There is just one problem with this pattern: it doesn’t solve all your problems on its own either. Why would we want to calculate the total balance of a NullPerson? The moment you retrieve a person and it is an instance of NullPerson, catch it and handle the situation appropriately; don’t just continue.
Safe null operator
For a time there was speculation that the Safe-null-operator would make its way into Java 7 through Project Coin. If you’ve programmed in Groovy you might have seen it before:
The idea is that you use “?.” instead of “.”. The question mark means: “If the value we’re calling on is null, the result is null; otherwise, call the next method”.
So in this case:
- If person is null: return null, else:
- If getName() is null: return null, else:
- Return result of substring(0, 1)
If you would now write this code it would be:
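A sketch of the plain-Java equivalent (the helper name is mine):

```java
public class SafeNullExample {

    // Groovy: person?.getName()?.substring(0, 1)
    // The same logic, spelled out in plain Java:
    public static String firstLetter(Person person) {
        if (person == null) {
            return null;
        }
        String name = person.getName();
        if (name == null) {
            return null;
        }
        return name.substring(0, 1);
    }
}

// minimal Person so the snippet compiles on its own
class Person {
    private final String name;
    Person(String name) { this.name = name; }
    String getName() { return name; }
}
```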
You could also give a default value:
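In Groovy that would look something like:

```groovy
String name = person?.getName() ?: "Unknown name"
```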
If person or getName() produces null, the “?:” operator will return the default String.
Sounds pretty good eh? The only problem is that this new operator didn’t make the final list of changes for Java 7.
Fun fact: do you know why the “?:” operator is called the “Elvis” operator? Viewed from the side, as a smiley, it looks like Elvis, including the big curl in his hair.
For the Queue ICPC programming game Capture I ran into a geometrical problem.
While programming my little robot I wanted to have an algorithm calculate this for me:
- I have a field with 122 points
- I have a circle of fixed size
How do I calculate where to put the circle so that it covers the most points?
This is what I came up with, three algorithms:
Algorithm #1: Centerpoint
The first algorithm was created as a test. It doesn’t find the perfect solution, but gives a decent solution.
First I loop over all the points, and for each one I check how many other points are located within CIRCLE_RADIUS of it. This gives a score: the best point has a lot of other points nearby.
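The code is not shown; a sketch of how this scoring loop might look (the CIRCLE_RADIUS value and the method names are mine):

```java
import java.util.List;

public class CenterPointFinder {
    static final double CIRCLE_RADIUS = 60.0; // value is an assumption

    // Score every point by how many points lie within CIRCLE_RADIUS of it;
    // the best-scoring point becomes the circle's center.
    static double[] bestCenter(List<double[]> points) {
        double[] best = null;
        int bestScore = -1;
        for (double[] candidate : points) {
            int score = 0;
            for (double[] other : points) {
                if (dist(candidate, other) <= CIRCLE_RADIUS) {
                    score++;
                }
            }
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```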
This algorithm is very quick, but it has a huge disadvantage: the circle will always have one of our points as its center.
It will never yield the perfect solution… In the following picture this algorithm produces the green circle:
Algorithm #2: Heatmap
The next idea I got was based on a heatmap. It is computationally very heavy, but it will generate the best solution.
It works like this (pseudocode):
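The pseudocode itself is missing; based on the description it would be something like this sketch:

```
scores = 2D array, one cell per pixel of the field, all 0
for each point p:
    for each pixel (x, y) within CIRCLE_RADIUS of p:
        scores[x][y] += 1      // a circle centered at this pixel would cover p
bestPixel = the pixel with the highest score
place the circle's center at bestPixel
```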
The pixel with the best score lies inside the most circles; thus, this should be the center of our circle!
In the image above you can see the resulting heatmap, and the cyan circle is placed on the best possible pixel.
(In most cases there is more than one ‘best pixel’; which one you choose doesn’t matter.)
Algorithm #3: Bron-Kerbosch-myself
The heatmap algorithm mentioned above takes a LOT of processing time and is far from efficient. For the Queue ICPC contest there is a time limit for each calculation, so I need to speed it up!
So I started to wonder:
- What do all the points in the perfect circle have in common?
Well, they all have a maximum distance to each other of CIRCLE_RADIUS * 2.
So I started to play with graph algorithms.
The lines you see in the image above connect all the points that are at most CIRCLE_RADIUS * 2 apart. Every line shows two points that could possibly be in the same circle together.
Then I googled a bit and found the term ‘clique’. A clique is a group of points that are all connected to each other: just what we want with the graph shown in the picture!
When I searched a little further I came across the Bron-Kerbosch algorithm. It finds all maximal cliques in a given undirected graph! Again, just what we want!
Here is the pseudocode (from Wikipedia):
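This is Wikipedia’s basic form, without pivoting (R is the clique being grown, P the candidate vertices, X the vertices already processed):

```
BronKerbosch1(R, P, X):
    if P and X are both empty:
        report R as a maximal clique
    for each vertex v in P:
        BronKerbosch1(R ∪ {v}, P ∩ N(v), X ∩ N(v))
        P := P \ {v}
        X := X ∪ {v}
```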
But I ran into a little problem; I’d made a false assumption… If you have a triangle of three points with all sides of length CIRCLE_RADIUS * 2, and you draw the circle from its center, none of the points fall inside the circle: the distance from the center to each corner is (CIRCLE_RADIUS * 2) / √3, which is larger than CIRCLE_RADIUS.
I tried two ways to solve this. The first is to use a smaller distance in the graph: if you only connect points that are at most (cos(30°) * CIRCLE_RADIUS * 2) apart, every clique will fit into our circle. But this could obviously eliminate the perfect circle, if two points in the perfect solution are further apart than (cos(30°) * CIRCLE_RADIUS * 2).
Then I dissected the Bron-Kerbosch algorithm and decided to change it a bit. This is what I ended up with:
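The modified code is not reproduced here, so this is only a sketch of one way such a change could look: before a point joins the growing clique, check that the grown set still fits in a circle of radius CIRCLE_RADIUS, using the fact that a minimum enclosing circle is determined by two or three of the points. All names and the radius value are mine:

```java
import java.util.ArrayList;
import java.util.List;

public class CircleClique {
    static final double R = 60.0; // CIRCLE_RADIUS; the value is an assumption

    // Bron-Kerbosch with one extra geometric guard: a point only enters the
    // growing clique if the grown set still fits inside a circle of radius R.
    static void search(List<double[]> r, List<double[]> p, List<double[]> x,
                       List<List<double[]>> results) {
        if (p.isEmpty() && x.isEmpty()) {
            results.add(new ArrayList<double[]>(r));
            return;
        }
        for (double[] v : new ArrayList<double[]>(p)) {
            List<double[]> grown = new ArrayList<double[]>(r);
            grown.add(v);
            if (fitsInCircle(grown)) {
                search(grown, near(p, v), near(x, v), results);
            }
            p.remove(v);
            x.add(v);
        }
    }

    // neighbours of v: points that could possibly share a circle with it
    static List<double[]> near(List<double[]> set, double[] v) {
        List<double[]> result = new ArrayList<double[]>();
        for (double[] w : set) {
            if (w != v && dist(w, v) <= 2 * R) result.add(w);
        }
        return result;
    }

    // The minimum enclosing circle is determined by 2 or 3 of the points,
    // so try every pair (as a diameter) and every triple (circumcircle).
    static boolean fitsInCircle(List<double[]> pts) {
        int n = pts.size();
        if (n <= 1) return true;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double[] a = pts.get(i), b = pts.get(j);
                double[] c = { (a[0] + b[0]) / 2, (a[1] + b[1]) / 2 };
                if (covers(c, pts)) return true;
            }
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                for (int k = j + 1; k < n; k++) {
                    double[] c = circumcenter(pts.get(i), pts.get(j), pts.get(k));
                    if (c != null && covers(c, pts)) return true;
                }
        return false;
    }

    static boolean covers(double[] center, List<double[]> pts) {
        for (double[] p : pts) if (dist(center, p) > R + 1e-9) return false;
        return true;
    }

    static double[] circumcenter(double[] a, double[] b, double[] c) {
        double d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]));
        if (Math.abs(d) < 1e-9) return null; // collinear, no circumcircle
        double a2 = a[0] * a[0] + a[1] * a[1];
        double b2 = b[0] * b[0] + b[1] * b[1];
        double c2 = c[0] * c[0] + c[1] * c[1];
        double ux = (a2 * (b[1] - c[1]) + b2 * (c[1] - a[1]) + c2 * (a[1] - b[1])) / d;
        double uy = (a2 * (c[0] - b[0]) + b2 * (a[0] - c[0]) + c2 * (b[0] - a[0])) / d;
        return new double[] { ux, uy };
    }

    static double dist(double[] p, double[] q) {
        return Math.hypot(p[0] - q[0], p[1] - q[1]);
    }
}
```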
This returns all the maximal cliques that fit into our circle, and we are always sure they fit..! I think this will always generate the perfect solution, just like the heatmap. In the picture above, this algorithm produces the red circle.
It is already light years faster than the heatmap algorithm, but I still need a bit more speed. I’m still struggling to improve my implementation, for example by trying to use a pivot like BronKerbosch2 (see Wikipedia again) does.
Algorithm #4: ???
For some reason I still fail to believe I’m the first person in the world to tackle this problem, but I haven’t been able to find an existing algorithm for it. Some people recommended using a sweeping algorithm, but I don’t understand how that is used to find the circle’s location. Others pointed me to Voronoi diagrams…
So if you have any suggestions, or know the algorithm I’m looking for, please add a comment :-)
- Why, when one color has run out of ink, can’t you print anything!?
- Why does a pencil cost $0.50 while an ink cartridge costs $30?
- Why does a pencil still work after 5 years, while an ink cartridge has completely dried up?
- Why is a new printer (including ink cartridge) sometimes cheaper than a separate ink cartridge (crazy!)?
- Why do printers eat paper for lunch?
- Why is printer software so invasive? What happened to a simple ‘driver’?
- Why does most hardware run perfectly under Linux, except printers?
Screw this, I’ll just mail my report instead of printing it…
Printer-bashing comic: Why I believe printers were sent from Hell
Yesterday I found and programmed a nice little algorithm. I’m not going to tell you (yet) what it does or what it’s called; I’ll just show the code:
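The snippet itself is missing from this post, so here is a reconstruction of what such code could look like (the names are mine):

```java
public class MysteryAlgorithm {

    public static long calculate(long left, long right) {
        long result = 0;
        while (left > 0) {
            if (left % 2 == 1) {   // left is odd: remember the right value
                result += right;
            }
            left = left / 2;       // halve the left column
            right = right * 2;     // double the right column
        }
        return result;
    }
}
```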
So tell me, what does this do, and what is the algorithm called?
And indeed (it was a simple one): the correct solution is multiplication, and to be specific, Ethiopian or Russian multiplication.
I’ll probably do more, harder ones, in the future..!
Recently I placed a comment on an interesting blog post by Uncle Bob Martin (Robert C. Martin). It contains a brief description of how I teach people to use the Spring Framework.
Now, by popular demand (one person requested it over Twitter), I’ll guide you through the method and examples I use in this blogpost. It explains why people should use frameworks like Spring and/or Google Guice, and how.
Let’s take a look at the application we have. We have a service:
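The interface could look something like this (the method name and parameters are my guesses):

```java
// A hypothetical service interface; only the "MilkRequestService" name
// is taken from the post, the rest is an assumption.
public interface MilkRequestService {
    void requestMilk(String customerName, int litres);
}
```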
And we have a DAO (data access object) which will take care of persisting our order:
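Something like this (the saveOrder signature is an assumption):

```java
// A hypothetical DAO interface; the method shape is my guess.
public interface MilkDao {
    void saveOrder(String customerName, int litres);
}
```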
The implementation of this DAO isn’t really interesting; let’s just pretend it goes to the database:
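A stub implementation, with the DAO interface repeated so the snippet stands on its own:

```java
// the DAO interface, repeated so this snippet compiles on its own
interface MilkDao {
    void saveOrder(String customerName, int litres);
}

public class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) {
        // pretend a connection is opened and the order is INSERTed here
        System.out.println("Saving order for " + customerName + " (" + litres + " litres)");
    }
}
```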
And finally we have our service implementation:
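A sketch, with the supporting types repeated so it compiles on its own. Note how the service creates its own DAO; that is exactly what we will refactor away:

```java
// supporting types, repeated so the snippet compiles on its own
interface MilkRequestService { void requestMilk(String customerName, int litres); }
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) { /* database call */ }
}

public class MilkRequestServiceImpl implements MilkRequestService {
    private final MilkDao milkDao;

    public MilkRequestServiceImpl() {
        this.milkDao = new MilkDaoImpl(); // the service creates its own dependency
    }

    public void requestMilk(String customerName, int litres) {
        milkDao.saveOrder(customerName, litres);
    }
}
```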
Let’s pretend that we are good, obedient programmers: although we didn’t write our unit test up front, we are going to do it right away.
So… we want to test the service. How are we going to do this? Well, let’s just create the service, call it, and then check if it does what we want it to do!
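The attempt might look like this (supporting types repeated so the snippet compiles):

```java
// supporting types from the snippets above, repeated for compilation
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) { /* straight to the database! */ }
}
class MilkRequestServiceImpl {
    private final MilkDao milkDao = new MilkDaoImpl();
    public void requestMilk(String customerName, int litres) { milkDao.saveOrder(customerName, litres); }
}

public class MilkRequestServiceTest {
    public static void main(String[] args) {
        MilkRequestServiceImpl service = new MilkRequestServiceImpl();
        service.requestMilk("Bob", 2);
        // ...and now what? The order went (or didn't go) to a real database.
        // There is nothing in here we can assert on.
    }
}
```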
We run into a problem rather quickly. This code makes a connection to the database and might or might not save our order. But how do we check this? We could go to the database and check…
NO! Never go to the database in a unit test. We only want to test a single unit of work, and the database ain’t one of them.
We need to refactor our code…!
One possible solution to this problem is using a factory. A factory creates objects so you don’t have to. It’s best shown using an example:
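A sketch that matches the description that follows (get and replaceMapping), with the supporting types repeated so it compiles on its own:

```java
import java.util.HashMap;
import java.util.Map;

// supporting types, repeated so the snippet compiles on its own
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) { /* database call */ }
}

public class Factory {
    private static final Map<Class<?>, Object> MAPPING = new HashMap<Class<?>, Object>();

    static {
        // the default wiring, set up during static initialization (yuk!)
        MAPPING.put(MilkDao.class, new MilkDaoImpl());
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(Class<T> type) {
        return (T) MAPPING.get(type);
    }

    public static void replaceMapping(Class<?> type, Object implementation) {
        MAPPING.put(type, implementation);
    }
}
```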
So, lets dissect this.
The method get(class) returns an implementation for a given Class. It uses a Map to look up the mapping. The map is created during static initialization (yuk!) and contains the default mapping, but we can also provide our own mapping using replaceMapping(class, object).
Now how do we use it? Let’s see our new MilkRequestServiceImpl:
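A sketch (the Factory and supporting types are repeated so the snippet compiles on its own):

```java
import java.util.HashMap;
import java.util.Map;

// supporting types, repeated so the snippet compiles on its own
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) { /* database call */ }
}
class Factory {
    private static final Map<Class<?>, Object> MAPPING = new HashMap<Class<?>, Object>();
    static { MAPPING.put(MilkDao.class, new MilkDaoImpl()); }
    @SuppressWarnings("unchecked")
    static <T> T get(Class<T> type) { return (T) MAPPING.get(type); }
    static void replaceMapping(Class<?> type, Object implementation) { MAPPING.put(type, implementation); }
}

public class MilkRequestServiceImpl {
    public void requestMilk(String customerName, int litres) {
        // no constructor needed anymore: the DAO is fetched when we need it
        Factory.get(MilkDao.class).saveOrder(customerName, litres);
    }
}
```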
As you can see we no longer need the constructor, we get the required DAO from the factory when we need it.
Now we can test it using the replaceMapping method:
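Something like this, with the supporting types included so the snippet compiles on its own:

```java
import java.util.HashMap;
import java.util.Map;

// supporting types, repeated so the snippet compiles on its own
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkDaoImpl implements MilkDao {
    public void saveOrder(String customerName, int litres) { /* database call */ }
}
class Factory {
    private static final Map<Class<?>, Object> MAPPING = new HashMap<Class<?>, Object>();
    static { MAPPING.put(MilkDao.class, new MilkDaoImpl()); }
    @SuppressWarnings("unchecked")
    static <T> T get(Class<T> type) { return (T) MAPPING.get(type); }
    static void replaceMapping(Class<?> type, Object implementation) { MAPPING.put(type, implementation); }
}
class MilkRequestServiceImpl {
    public void requestMilk(String customerName, int litres) {
        Factory.get(MilkDao.class).saveOrder(customerName, litres);
    }
}

public class MilkRequestServiceTest {
    public static void main(String[] args) {
        final boolean[] saved = { false };

        // you're probably better off using a mocking framework (EasyMock, Mockito)
        // instead of this hand-rolled test double
        Factory.replaceMapping(MilkDao.class, new MilkDao() {
            public void saveOrder(String customerName, int litres) {
                saved[0] = true;
            }
        });

        new MilkRequestServiceImpl().requestMilk("Bob", 2);
        if (!saved[0]) {
            throw new AssertionError("the order was not saved");
        }
    }
}
```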
As the comment says, you are probably much better off here using a mocking framework like EasyMock or Mockito. But the point is: we can now test our class without going to the database! We got our control back.
Instead of writing a factory there is a better way to do this: dependency injection, following the Hollywood principle “Don’t call us, we’ll call you”.
All the dependencies of the objects are injected into them, instead of created (initial example) or retrieved (factory example). So our service will look like this:
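A sketch of the constructor-injected version (the DAO interface is repeated so the snippet compiles on its own):

```java
// the DAO interface, repeated so the snippet compiles on its own
interface MilkDao { void saveOrder(String customerName, int litres); }

public class MilkRequestServiceImpl {
    private final MilkDao milkDao;

    // the dependency is handed to us; we no longer create or look it up ourselves
    public MilkRequestServiceImpl(MilkDao milkDao) {
        this.milkDao = milkDao;
    }

    public void requestMilk(String customerName, int litres) {
        milkDao.saveOrder(customerName, litres);
    }
}
```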
Now let’s take a look at our test code; it now looks like this:
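A sketch, with the supporting types repeated so the snippet compiles on its own:

```java
// supporting types, repeated so the snippet compiles on its own
interface MilkDao { void saveOrder(String customerName, int litres); }
class MilkRequestServiceImpl {
    private final MilkDao milkDao;
    MilkRequestServiceImpl(MilkDao milkDao) { this.milkDao = milkDao; }
    public void requestMilk(String customerName, int litres) { milkDao.saveOrder(customerName, litres); }
}

public class MilkRequestServiceTest {
    public static void main(String[] args) {
        final boolean[] saved = { false };

        MilkDao testMilkDao = new MilkDao() {   // use a mock here!
            public void saveOrder(String customerName, int litres) {
                saved[0] = true;
            }
        };

        MilkRequestServiceImpl service = new MilkRequestServiceImpl(testMilkDao);
        service.requestMilk("Bob", 2);

        if (!saved[0]) {
            throw new AssertionError("the order was not saved");
        }
    }
}
```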
We now have full control over the wiring. First we create our own little testMilkDao (use a mock here!) and we hand it to our service. That’s all. Instead of the hidden creation in the service itself or in the factory, we are in full control.
But if you want to use this application you’ll need “something else” to create all the objects and do the actual wiring. Let’s create our own Spring or Google Guice..!!
The wiring we do in the constructor is what Spring and Google Guice would do for us. In the case of Spring you’ll write something like this:
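A sketch of the XML (the bean names follow the text; the package name is a placeholder):

```xml
<beans xmlns="http://www.springframework.org/schema/beans">
    <bean id="milkDao" class="com.example.MilkDaoImpl"/>
    <bean id="milkRequestService" class="com.example.MilkRequestServiceImpl">
        <constructor-arg ref="milkDao"/>
    </bean>
</beans>
```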
Now you’ll still need to retrieve these “beans” from Spring; this can be done using our old friend the Factory!
It’ll look something like this:
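A sketch using Spring’s ClassPathXmlApplicationContext (the XML file name is a placeholder; this needs the Spring jars on the classpath):

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // load the XML, let Spring create and wire all the beans
        ApplicationContext context =
                new ClassPathXmlApplicationContext("application-context.xml");

        MilkRequestService milkRequestService =
                (MilkRequestService) context.getBean("milkRequestService");
        milkRequestService.requestMilk("Bob", 2);
    }
}
```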
This instructs Spring to load the XML file and create all the objects. Then we retrieve the milkRequestService and it is fully wired and ready to use!
Google Guice works exactly the same, only instead of using XML to define the mapping it uses Java code and annotations.
First we tell Guice which implementations are the default implementations, for example:
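For example, with a Guice module (the module name is mine; this needs the Guice jar on the classpath):

```java
import com.google.inject.AbstractModule;

public class MilkModule extends AbstractModule {
    @Override
    protected void configure() {
        // MilkDaoImpl is the default implementation of MilkDao
        bind(MilkDao.class).to(MilkDaoImpl.class);
    }
}
```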
We also do this to the MilkRequestService. Next we tell Guice where to inject these implementations, in this case using ‘Constructor Injection’:
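A sketch; @Inject comes from com.google.inject:

```java
import com.google.inject.Inject;

public class MilkRequestServiceImpl implements MilkRequestService {
    private final MilkDao milkDao;

    @Inject  // Guice will call this constructor and supply the DAO
    public MilkRequestServiceImpl(MilkDao milkDao) {
        this.milkDao = milkDao;
    }

    public void requestMilk(String customerName, int litres) {
        milkDao.saveOrder(customerName, litres);
    }
}
```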
Now if we want to use it, we retrieve it from Guice like so:
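Something like:

```java
import com.google.inject.Guice;
import com.google.inject.Injector;

public class Main {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new MilkModule());
        MilkRequestService milkRequestService =
                injector.getInstance(MilkRequestService.class);
        milkRequestService.requestMilk("Bob", 2);
    }
}
```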
That’s about it. You’ve seen:
- Why you should use these patterns
- How you would program them yourself (ugly monkey-code style)
- How to do it using Spring (with XML)
- How to do it using Google Guice!
Good luck and happy programming!
ps. The code examples are just examples…!!
What if your favourite movie was remade as a porn movie? What would the title be?
My colleagues and I were discussing this, and we came up with a top 10. We’ve also started a Twitter hashtag.
- Shaving Ryan’s Privates (everybody’s favourite)
- The Curious Taste of Benjamin’s Bottom
- Honey, I shagged the kids
- Missionary Impossible III
- Good Will Humping
- Free my Willy
- Legally Boned
- I Know Who you did last Summer
- Schindler’s Fist
- Cliff Banger
And some honourable mentions to the titles that just didn’t make the list:
- When Harry ate Sally
- Saturday Night Beaver
- Swallow Hal
- Sorest Rump
- Bangs of New York
- American Bangster
- Thighs Wide Shut
- Citizen Came
- The Green Milf
- The Porn Identity
- Crouching Woman, Hidden Camera
Why does my car do this to me? It bugs me every day. When I drive in the snow I have to use my windscreen washer quite a lot. When I pull the little lever behind my steering wheel it squirts a jet of antifreeze/washer fluid onto my windscreen and wipes about 10 times. Then it stops… my window is clean again!
But after about 15 seconds it does one final sweep… and this last sweep always leaves ugly marks!
Why do cars do this!!!? First it cleans my window perfectly, and then spoils it!
A colleague advised me this: if the car has just finished wiping, quickly switch the wipers to manual and back off again. This will still cause one extra wipe, but it comes right after the others, which produces a slightly better result.
I tried this twice in my car (a Fiat Bravo); it worked once, but the other time it still did the final wipe after around 10 seconds.
Sometimes I let my mind wander and I get crazy questions. Today was a good example: I encountered an MD5 hash and started to wonder, would there be a hash which (when hashed again) would be the same?
Thus: MD5(x) = x
This would be a kind of MD5 quine, when fed into the algorithm you get the original value back. This is actually called an MD5 fixed point.
Information I’ve found
So I started investigating, and soon discovered this website about collisions. It’s well known that all hashing algorithms must have collisions; you can’t produce a unique hash for every input larger than the output, of course.
The output of the MD5 sum is 128 bits (16 bytes) long, so the input should also have the same length. But the MD5 algorithm is defined to take 512-bit blocks as input. This isn’t really a problem, because the algorithm will extend smaller input with padding. Let’s assume the MD5 sum of any input is uniformly distributed over all possible sums; then the probability that a given 128-bit string is a fixed point is 1/2^128. This isn’t a crazy assumption, because all hashing algorithms are designed to distribute their output as uniformly as possible to avoid collisions.
So, the probability that no 128-bit string is a real fixed point is (1 - 1/2^128)^(2^128). The probability that there IS a fixed point is 1 - (1 - 1/2^128)^(2^128).
Since the limit as N goes to infinity of (1 - 1/N)^N is 1/e, and 2^128 is most certainly a very large number, this probability is almost exactly 1 - 1/e = 63.21%.
But, of course, there is no randomness involved here: there either is a fixed point or there isn’t. We can, however, be 63.21% confident that there is one. (Also, notice that this number does not depend on the size of the keyspace: if MD5 sums were 32 bits or 1024 bits, the answer would be the same.)
Looking for the fixed point
I’ve just implemented a small program to look for these hashes, even though I know it will take millions of years to check all the numbers. But you never know, I might get lucky ;-)
The first algorithm I created took a single random String as input and kept applying the algorithm to the output. Eventually it will:
- Go into a loop
- Find a fixed point (which is a loop of size 1)
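The search program itself is not shown; a minimal sketch in Java, using java.security.MessageDigest (the class and method names are mine):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class Md5FixedPointSearch {

    static byte[] md5(byte[] input) {
        try {
            return MessageDigest.getInstance("MD5").digest(input);
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 is always available in the JRE
        }
    }

    static String md5Hex(byte[] input) {
        StringBuilder sb = new StringBuilder();
        for (byte b : md5(input)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Follow the chain x, MD5(x), MD5(MD5(x)), ... and report whether we hit
    // a fixed point (a 16-byte value whose hash is itself) within maxSteps.
    static boolean findFixedPoint(byte[] start, long maxSteps) {
        byte[] current = start;
        for (long i = 0; i < maxSteps; i++) {
            byte[] next = md5(current);
            if (Arrays.equals(next, current)) {
                return true; // found MD5(x) = x
            }
            current = next;
        }
        return false; // no luck within the budget (the expected outcome!)
    }
}
```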
The weird thing is: if it ends up in a loop of size 1, I’ve found two things. Not only an MD5 fixed point, which produces itself when the algorithm is applied, but also an input value that produces this MD5 as output: a collision!
Another interesting thing would be a complete graph of all MD5 values: which loops can we find, which MD5 has the most collisions, etc. But this would take an eternity to calculate, even using all the machines in the world.
- Are there loops? (There have to be: the output space is finite, so repeatedly hashing must eventually revisit a value. But which loops, and how long are they?)
- Are there loops of size 1, a.k.a. fixed points?
- Which/how many 128 bit combinations can’t be created with the input values?
- Which/how many collisions will you get with all the input values?
More thoughts, errors, solutions..?