Friday, 26 November 2010

Dynamic DNS for EC2

I have just had to set up dynamic DNS for one of our test EC2 instances. We use DNS Made Easy for our DNS, which allows dynamic DNS updates. Once I had set up the record in their system, a very quick Groovy script was all I needed:

def ip = getPublicIP()
def id = "[recordID]"
def username = "username"
def password = "password"

// DNS Made Easy's dynamic DNS update endpoint - check their docs for the current URL
def url = "${username}&password=${password}&id=${id}&ip=${ip}"
def result = url.toURL().text
println result

def getPublicIP() {
    // On EC2, the instance metadata service reports the instance's public IP
    "".toURL().text.trim()
}

Wednesday, 20 October 2010

Software Craftsmanship 2010

Bit of a late one. Four of us here at DM attended SC2010 at Bletchley Park on October 7; not sure why I'm only writing it up now. Anyway, good times were had - it was a really nice setting (where the machines that cracked the Enigma were developed). There were many interesting talks available; it was a shame that they ran in parallel. Anyhow, here's a recap and my thoughts on the talks I attended.

Refuctoring Masterclass

Funny lesson, and some great comments from the crowd. We had our task of creating an obfuscated program that would print out "Hello, world!". We also had a unit test to ensure that the program's method returned the correct string. We started out by creating several helper methods, named "execute", "perform", "execute1" and so forth. When asked for a new name, someone shouted "executel", as "l" and "1" look so similar in a monospace font.

James and I created an array of letters and built our hello world string by concatenating various indexes of the array together. We then had to swap computers with someone else and change their code to add a new word to the string. The code we received was creating the string through a random number generator and seemed to be doing cryptography of some kind. Crazy. It was only later that I had the idea of executing system calls to a long-winded series of scripts; downloading the "hello" GNU package and compiling and running the Java example; AST transformations, etc. Damn my lack of coffee.
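The letter-array trick looks something like this (a reconstruction in Java - the array ordering, indexes and names are illustrative, not the code we actually wrote on the day):

```java
public class Refuctored {
    // An array of letters, deliberately ordered to obscure the message
    private static final char[] LETTERS =
            {'d', 'o', 'H', 'r', 'e', 'l', 'w', ',', '!', ' '};

    // Build "Hello, world!" by picking indexes out of the array
    public static String execute() {
        int[] indexes = {2, 4, 5, 5, 1, 7, 9, 6, 1, 3, 5, 0, 8};
        StringBuilder sb = new StringBuilder();
        for (int i : indexes) {
            sb.append(LETTERS[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(execute()); // prints "Hello, world!"
    }
}
```

The unit test only sees the returned string, so the uglier the indexing gets, the better.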

The Game of Life

Interesting one. We were introduced to the concepts of Conway's Game of Life and were shown a crazy video of it being implemented in APL. I've never seen that language before, and it blew my mind while also making me think "?!??!!??!". Turns out APL has its own keyboard with 50 or so extra keys - insane!

James and I chose to use Groovy for our implementation, and we were joined by some fellow coders using Java - a good chance to show others the power of Groovy. We started by writing a series of tests focused on the life of a 3x3 grid. Having developed functions to operate on this small grid, we reasoned we could simply apply them to the entire grid by iterating over it 3x3 cells at a time, and that would be that.

Unfortunately we didn't get a chance to fully complete the program, as we got a little stuck on how to deal with the 3x3 neighbourhood falling "outside" the board's cells, e.g. at the edge rows and columns of an 8x8 grid. James later finished the code by himself - I'll get him to post it. A few attendees completed the code in time and demoed it - one implementation was in F#, which seemed a cool language, one in Python (woo!) and one in Javascript - nice. I guess we spent more time testing the code than just blazing into it.
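For the record, the neighbour-counting with explicit bounds checks for the board edge - the bit we got stuck on - can be sketched like this (my own after-the-fact reconstruction, in Java rather than our Groovy):

```java
public class Life {
    // Count live neighbours of cell (row, col), treating cells off the
    // board's edge as dead -- the case we got stuck on at the event
    static int liveNeighbours(boolean[][] grid, int row, int col) {
        int count = 0;
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) continue;
                int r = row + dr, c = col + dc;
                if (r >= 0 && r < grid.length && c >= 0 && c < grid[r].length
                        && grid[r][c]) {
                    count++;
                }
            }
        }
        return count;
    }

    // One generation: a live cell survives with 2 or 3 neighbours,
    // a dead cell becomes live with exactly 3
    static boolean[][] step(boolean[][] grid) {
        boolean[][] next = new boolean[grid.length][grid[0].length];
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[r].length; c++) {
                int n = liveNeighbours(grid, r, c);
                next[r][c] = grid[r][c] ? (n == 2 || n == 3) : n == 3;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // The vertical "blinker" flips to horizontal after one step
        boolean[][] blinker = {
            {false, true, false},
            {false, true, false},
            {false, true, false}
        };
        for (boolean[] row : step(blinker)) {
            StringBuilder line = new StringBuilder();
            for (boolean cell : row) line.append(cell ? '#' : '.');
            System.out.println(line);
        }
    }
}
```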

Functional Koans – a fun way to learn functional programming in Scala, F# or Javascript

This looked interesting: I use Python a lot and like its functional abilities, and I'd like to improve on them. James and I decided to use Javascript and Scala - however, it took us almost 30 minutes to get set up, since the code was hosted on GitHub and it seemed all the attendees were IP-blocked from logging in. We ended up getting the code by passing around a USB stick and by creating a read-only account on David's laptop.

Once we had Scala installed and ready, we had to download the Scala koans' dependencies via Maven, which James had never used. This took almost 30 minutes. We blazed through some of the JavaScript tests, only getting towards the interesting functional ones near the end, by which point we were running out of time to get the Scala koans going. We stopped the JS and started Scala, only to find ourselves working through language syntax introductions: how class types can be checked, how to assign variables, etc. The JS ones were like this too - in hindsight, we should have skipped those.

We had expected the koans to show the use of functional programming rather than the syntax of the language, but I guess that's a necessity for people unfamiliar with a language.

Overall the conference was great, enjoyed a beer in the bar afterwards and a long drive home to Wales.

Friday, 3 September 2010

Grails Redirect After Post

Currently, in most of our applications, we do not perform a redirect after POST in our controllers.

This is bad practice and needs changing (with all the free time we have).

The current difficulty we have with this is that when we post a form to our controller and there are validation errors, we need to redirect to the form view again but keep the values that were posted in the first place.

For example :
  • Edit user.
  • Get user object.
  • Put user object in model.
  • Populate input fields with properties from user object.
  • Submit form.
  • Validation errors.
  • Redirect back to edit user.
What now happens is that, because the form gets its values from the user object, it can't keep the changed values that are in the command.

Mark and I came up with what we think is the best practice.

In our grails command we define a constructor that takes the object and populates the command.

In our controller we then get the object we wish to edit, construct the command object and pass this into the model.

So now our model only ever deals with our command object.

When we redirect after post now all we have to do is place our command object in the flash scope and it will be accessible in our redirected method.

A simple existence check then lets us decide whether to place the command straight back in the model or construct the command object again.

For example:
  • Edit user.
  • Get user.
  • Construct UserCommand using user object.
  • Place command in the model.
  • Populate input fields with properties from command object.
  • Submit form.
  • Validation errors.
  • Redirect back to edit user.
  • Check for command.
  • Place command back in model.

Below is some sample code for the User example case.

class UserCommand {
    String name

    public UserCommand() {}

    public UserCommand(User user) {
        name = user.name
    }

    static constraints = {
        name(nullable:false, blank:false)
    }
}

class UserController {
    def edit = {
        def user = getUser()

        if (flash.form == null) {
            flash.form = new UserCommand(user)
        }

        return render(view:'edit', model:[form:flash.form])
    }

    def update = { UserCommand form ->
        if (form.hasErrors()) {
            flash.form = form
            return redirect(action:'edit')
        }

        return redirect(action:'detail', params:[])
    }
}
Our view will now only deal with the command object and can access all user fields it needs through the command object, as well as all error messages returned by the command validation.

Wednesday, 1 September 2010

Revisiting .NET

It's been a fair while since we've been commissioned to do any projects with .NET, and it looks like Microsoft have been busy in the interim!

First some background. Mark and I both started out working for a Microsoft Certified Solution Provider, and back then most of the coding work was in Visual Basic. Web applications were starting to get more sophisticated (our first "web" applications were old-school ASP with Visual Basic ActiveX DLLs running on IIS 3 and 4) but everything felt a bit hacked together. We then did some work with MTS (Microsoft Transaction Server), which improved things a bit, but it wasn't until the first versions of .NET came along, and Mark and I persuaded the company to let us do a major project using C#, that web applications became the norm for the company.

Since we set up Digital Morphosis, however, our focus has been on cross-platform web applications, mostly written in Java (and more recently Groovy & Grails), although we've done a handful of .NET applications since, so it's been a bit of a learning curve getting back into .NET development.

Our latest .NET project has allowed us to get right up-to-date with the latest and greatest .NET technologies, so I've learned quite a bit about the state of play of .NET and how it compares with how we do things in Java, and have lots of things to write about, notably:

  • MVC 2 and how similar it is to Spring Web MVC and Grails
  • Getting Visual Studio 2010 express set up to run NUnit tests
  • Unit testing with NUnit and Moq
  • Continuous integration with .NET and Hudson

I'll be posting more over the next few weeks. Hopefully this should serve as some help for Java developers who have to make the switch.

Thursday, 26 August 2010

Tomcat Cluster Configuration without multicast

When trying to set this up, I could not find much on the web or in the Tomcat documentation about how to get it configured.

This post should fix that. Apologies for the unpolished nature, but I wanted to record the configuration I got working as soon as possible and am pressed for time at the moment.

The basic configuration for the cluster on the first server is shown below.

<!-- Empty bind/address/host attributes should be filled with this server's and the member's IPs.
     Interceptor class names here are the standard Tribes ones, restored where my notes were blank. -->
<Cluster channelSendOptions="8" channelStartOptions="3" className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" domainReplication="true" notifyListenersOnReplication="true" />
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService" bind="" domain="test-domain" />
    <Receiver address="" className="org.apache.catalina.tribes.transport.nio.NioReceiver" maxThreads="6" port="4000" selectorTimeout="5000" />
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" />
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor" />
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
      <Member className="org.apache.catalina.tribes.membership.StaticMember" port="4001" host="" domain="test-domain" uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}" />
    </Interceptor>
    <!-- Custom interceptor (implementation below); use its fully qualified class name -->
    <Interceptor className="DisableMulticastInterceptor" />
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;" />
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
</Cluster>


Other servers have a similar configuration, but with other Members and modifications to the ports that are specified.

Setting channelStartOptions to 3 disables multicast, according to the Tomcat docs:

<Cluster channelSendOptions="8" channelStartOptions="3" className="org.apache.catalina.ha.tcp.SimpleTcpCluster">

The Receiver port needs to match the port of the corresponding Member element in the other servers' configuration. So if Server A has a receiver port of 4000, we need to make sure that wherever Server A's Member is defined on the other servers, port 4000 is used:

Server A:
<Receiver address="" className="org.apache.catalina.tribes.transport.nio.NioReceiver" maxThreads="6" port="4000" selectorTimeout="5000" />

Server A Member defined on all other cluster servers:
<Member className="org.apache.catalina.tribes.membership.StaticMember" port="4000" host="" domain="test-domain" uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1}" />

Obviously the uniqueId needs to be unique for each cluster member, and it must be specified, as shown, as a byte array of exactly 16 bytes.

The DisableMulticastInterceptor prevents multicast messages from being sent. This class is not included in the Tomcat distribution, so it needs to be added to your classpath; the implementation is below:

import org.apache.catalina.tribes.Channel;
import org.apache.catalina.tribes.ChannelException;
import org.apache.catalina.tribes.group.ChannelInterceptorBase;

public class DisableMulticastInterceptor extends ChannelInterceptorBase {

    @Override
    public void start(int svc) throws ChannelException {
        // Mask out the membership broadcast flag so no multicast is sent
        svc = (svc & (~Channel.MBR_TX_SEQ));
        super.start(svc);
    }
}

Deploy to Tomcat with no Downtime

Java web applications hosted in Tomcat can have a considerable startup time, particularly when libraries such as Hibernate are used. Unless you have a large-scale web site with multiple web servers, deploying a new version of the application means restarting the Tomcat server, and that means downtime for the application.

It is however possible to eliminate this downtime by making use of tomcat's clustering capabilities.

The idea is that we define 2 Tomcat instances in a cluster, fronted by the Apache web server. In normal operation only one of these Tomcat instances is running. When we are ready to deploy a new release to the server, we deploy the war file to the Tomcat instance that is not currently running. We start up this instance, which then joins the cluster and has all the session information from the existing instance replicated to it. We can now test our application by connecting directly to the Tomcat HTTP connector ports.

Once we are happy that the application has deployed successfully, we switch the Apache configuration to send all requests to the updated instance. As this has the session information replicated to it, users do not experience any downtime. Once the original instance has finished processing all of its requests, it can be shut down.

Should there turn out to be a problem with the release we have just performed, it is a simple matter to start up the old instance and change the Apache configuration to switch back to it.
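As a sketch, the Apache side of the switch might look like this with mod_proxy (the ports and layout are illustrative, not our actual configuration):

```apache
# Illustrative sketch only: two Tomcat HTTP connectors assumed,
# on ports 8080 (live) and 8081 (standby).

# Route all traffic to the currently live Tomcat instance.
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/

# After deploying the new war to the standby instance and testing it
# directly on its connector port, swap the commented and uncommented
# blocks and reload Apache (e.g. apachectl graceful):
# ProxyPass        / http://localhost:8081/
# ProxyPassReverse / http://localhost:8081/
```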

With some clever scripting it is possible to manage deployments of different web applications within the cluster using the same techniques.

We can now perform deployments to our live servers without fear of downtime or unexpected errors. This can have a profound effect: no longer do we try to roll a number of features up into a single release; we can release more frequently, increasing the rate at which we deliver useful features to our clients.

I will post another article detailing the cluster configuration we have used, including how to disable multicast (which does not work on Amazon EC2).

Friday, 30 July 2010

Grails one-liner package and install plugin

As we all know, running Grails with source plugins can cause a few oddities, and many people prefer to run their plugins packaged. One bug I experienced on Grails 1.1 was that uninstalling a source-linked plugin deleted the source code! Argh!

I was getting annoyed with having to keep multiple terminal tabs open: one to package my plugin, another to install it (wait 3 seconds, type "y") and then to run my application. The command below solves my problems:

pushd ~/sites/plugin-a/; grails package-plugin; popd; yes | grails install-plugin ~/sites/plugin-a/; grails clean; grails run-app

I pipe "yes" to grails install-plugin; otherwise you'll be prompted to type "y" or "yes".

You can also combine these if you need to package multiple plugins.

pushd ~/sites/plugin-a/; grails package-plugin; popd; yes | grails install-plugin ~/sites/plugin-a/; pushd ~/sites/plugin-b/; grails package-plugin; popd; yes | grails install-plugin ~/sites/plugin-b/; grails run-app

It gets a little unreadable but can certainly save a lot of waiting/repeated typing.

Wednesday, 28 July 2010

Grails: Debugging GSPs: "errors at line [x]"

Debugging GSPs can be a right pain, especially on a heavily translated site: this often involves nesting g:message tags inside virtually every other tag.

One of my annoyances was that the line numbers reported by the Grails parser were often wrong. "Error on line 430", it would say. "But my GSP only has 150 lines!" I shout back at my screen.

I believe the problem comes from nested templates - when a GSP renders a template, which in turn renders a template, and so on - the code Grails generates is one file concatenated from every file that has been rendered.

So, how do you find where in your code things are going wrong? By appending ?showSource=true to your URL (or &showSource=true if you're passing in GET parameters). That's how.

This shows you Grails' generated file - I've found that the line numbers match, as you'd expect.

Increasing file handle limit in Ubuntu

I got the error below whilst trying to run my Grails application this morning:-

Caused by: error=24, Too many open files at java.lang.UNIXProcess.&lt;init&gt;()

After a bit of digging around, it appears that some later versions of the Java 1.6 JVM do not close unused file handles, which results in the default limit set for users in Ubuntu being hit and the above exception being thrown.

After searching the web I found the following solution.

To modify the limit for the current user session you can simply type the below into your bash terminal:-

$ ulimit -n 2048

This will double the default value, but the changes will be lost upon restart of the machine.

To include a change in the system configuration, which will not be lost on restart, you will need to do the following:-

1. Edit limits.conf

$ sudo gedit /etc/security/limits.conf

to include the line:

YourUser - nofile 51200

2. Update the common-session configuration

$ sudo gedit /etc/pam.d/common-session

to include the line:

session required

Logout and log back in again and that should solve it.

Type "ulimit -n" into a terminal and you should see that your new limit has been set; if not, try giving the machine a restart.

Happy hunting...

Thursday, 8 July 2010

Keeping it simple and staying up to date

Just watched a presentation on InfoQ

It reinforced a principle we try and adhere to on our applications - Keep it simple. We are very good at doing that as far as the user interface is concerned.

Aware Monitoring have applied this to their development: they make design and technology choices based on their current skill set and limit the technologies they use.

This is certainly a very sound strategy for keeping complexity and cost under control, but it needs to be balanced against the need to stay up to date and be able to take advantage of new and emerging technologies. Without doing this you run the risk of being made obsolete by someone using a new technology that makes them much more productive or changes the game in unexpected ways.

We love technology and enjoy trying new technologies out. We perhaps need to evaluate how much this is costing us and make sure we have got the balance between "Keep it Simple" and staying up to date right for us.

Monday, 28 June 2010

Getting IP Addresses of Local Machine in Groovy

This actually gets the IP addresses of the network adaptors in the machine, rather than just the loopback address (127.0.0.1):

NetworkInterface.getNetworkInterfaces().each { iface ->
    iface.inetAddresses.each { addr ->
        println addr.hostAddress
    }
}
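For comparison, a rough plain-Java equivalent (a sketch, with error handling omitted) shows how much ceremony the Groovy snippet saves:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListAddresses {
    // Collect the host addresses of every network interface on the machine
    static List<String> allAddresses() throws SocketException {
        List<String> result = new ArrayList<>();
        for (NetworkInterface iface : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(iface.getInetAddresses())) {
                result.add(addr.getHostAddress());
            }
        }
        return result;
    }

    public static void main(String[] args) throws SocketException {
        for (String addr : allAddresses()) {
            System.out.println(addr);
        }
    }
}
```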

Friday, 25 June 2010

Finding the request contextpath using Freemarker

I spent a good while Googling how to reference the request context path in Freemarker, and finally tracked down a solution on the Spring Source forum.

If a controller is intercepting requests at /app/blah/something, and your view loads CSS from static/css/, then your view will try and resolve the CSS at /app/blah/static/css/ -- which doesn't exist.

The solution is simple: in your views.xml, define the "requestContextAttribute" property, and assign it a value.

<bean id="viewResolver" class="org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver">
    <property name="cache" value="true" />
    <property name="prefix" value="" />
    <property name="suffix" value=".ftl" />
    <property name="exposeSpringMacroHelpers" value="true" />
    <property name="requestContextAttribute" value="rc" />
</bean>

Now in your templates you can use ${rc.contextPath} and your CSS will try and resolve to /app/static/css/ - perfect!

Wednesday, 16 June 2010

AppEngine first steps

I have just got a prototype of a small application working for a client. It takes input data, processes it and displays the result.

I needed to get it up on a web site so the client could have a look at it. Seemed like the perfect excuse to have a go at Google AppEngine.

I downloaded the SDK and Eclipse plugin. As I had already created the web application, I could not use the plugin's support for running the application out of the box. The fix was quite simple though: I created an AppEngine project, then copied the launch configuration. Playing with the path to the war folder and the classpath soon got the app running in the local AppEngine server.

The first issue I ran into was that Freemarker does not work on AppEngine. A quick Google search turned up a patched Freemarker jar for AppEngine - perfect. I put this into our Ivy repository, changed the dependency in the project, rebuilt, and the application ran perfectly.

Now to get it live. I created an application in AppEngine and used the SDK to upload it - this is really easy: ./ update /path/to/war/folder.

After a few minutes all seemed ready; however, when I went to the application, an error page was displayed. What was wrong? After I found out how to get to the admin console (the URL is different if you have a Google Apps for Domains account), I found a log viewer, which showed some more Freemarker errors.

java.lang.IllegalAccessError: Class$1 can not access a member of class freemarker.ext.jsp.FreeMarkerPageContext21 with modifiers "static"

Turned out to be a silly mistake: another Ivy dependency was bringing in the old Freemarker jar file. Along the way I found out that AppEngine keeps older versions of your application as well and lets you access them via alternate URLs of the form http://[version].latest.[your-app] Very handy for final testing before "going live".

Tuesday, 25 May 2010


We have been using the grails web framework for some time now. Our use of grails is however not very mainstream.

Our applications are typically composed of a number of REST based web services that are consumed by the website. This means that the website has no persistence requirements as that is all handled by the web services.

We are therefore not using GORM at all, which is an area of grails that brings great productivity boosts. Added to that many of the plugins available for grails assume persistence is available.

We really like the idea of composing web applications using plugins and can see grails making good advances in this area, particularly in 1.2. We also love GSP and the ease by which tags can be created and of course Groovy.

Based on our experiences and the fact that the next core grails release is likely to be Q2 2011, we are now investigating what it would take to produce a system which is tailored to our style of development using backend REST services.

Part of this is looking at how the whole process of dispatching requests works and the possibility of including this in a Groovy DSL. Something along these lines:

map("/admin/dashboard")          // illustrative mapping; the real DSL is still being designed
    .execute {
        model.dashboard = adminService.dashboard
        "admin.dashboard" // return the view name
    }
    .onServiceNotAvailableException {
        "admin.serviceUnavailable" // illustrative error view
    }

This gets all the information relevant to the processing of a request in one place. With Spring applications, and even grails, when you are working on a single controller there can be quite a lot of jumping around between files; this avoids that.

As long as the inline closures in the DSL do not get too big, this will present most of the relevant information in one file. I don't think the code will get too big (you don't put logic in your controllers, do you?).

As well as supporting basic mappings to controllers, we could also support pattern based matches such as /admin/user/edit/{id}. The named parts would be made available to the closures in the same way that grails / spring does.
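Purely as an illustration of the idea (this is not our DSL's code, and all the names are mine), extracting the named parts from a pattern like /admin/user/edit/{id} could work along these lines:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathPattern {
    // Turn a pattern like "/admin/user/edit/{id}" into a regex with one
    // named group per {placeholder}, and return the matched values
    public static Map<String, String> match(String pattern, String path) {
        Matcher placeholder = Pattern.compile("\\{(\\w+)\\}").matcher(pattern);
        List<String> names = new ArrayList<>();
        StringBuilder regex = new StringBuilder();
        int last = 0;
        while (placeholder.find()) {
            String name =;
            names.add(name);
            regex.append(Pattern.quote(pattern.substring(last, placeholder.start())))
                 .append("(?<").append(name).append(">[^/]+)");
            last = placeholder.end();
        }
        regex.append(Pattern.quote(pattern.substring(last)));

        Matcher m = Pattern.compile(regex.toString()).matcher(path);
        if (!m.matches()) {
            return null; // the path does not fit the pattern
        }
        Map<String, String> params = new LinkedHashMap<>();
        for (String name : names) {
            params.put(name,;
        }
        return params;
    }

    public static void main(String[] args) {
        // {id} is bound to "42" here
        System.out.println(match("/admin/user/edit/{id}", "/admin/user/edit/42"));
    }
}
```

The named parts would then be handed to the closures in the same way grails and Spring expose them.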

Supporting wildcards would allow us to apply some features to groups of urls. e.g.


Another requirement we often have is to change URLs for some content. Typically this would be handled by a 301 redirect using mod_rewrite. We could support this in the DSL. e.g.


It is still very early days, but once we have any code we will be putting it up on GitHub. Stay tuned...