Friday, 28 December 2012

Hi, I'm a [INSERT LANGUAGE HERE] developer!

There is a trend among software developers to define the job by a language or framework we either use most or have most experience with.  If someone claims to be a "PHP developer", this has far different implications than if someone says they are a ".NET developer".

This post relates to the implications of a few of these titles.

If you consider yourself one of the below and feel misrepresented, please do not be offended!  These are based on my own perception and experiences.  Please feel free to comment on the article to tell me I am wrong.

PHP developer

For developers of a similar age to me, PHP is often the first language learned after HTML.  This is because it is (or was) cheap or free to get hosting that allows PHP to be run.  They can make a simple website and want to make it more dynamic.  The implication is that a developer still using PHP has never ventured out of this comfort zone.

Also, the ability to hack something together in a script (coupled with the fact that the language has horrible inconsistencies) means a PHP developer will often be perceived as someone who can put something simple together without much thought given to architecture or testability.

While there are some PHP projects like this (not at DM!), it is possible to write good PHP.  Even Facebook (not exactly small-fry) still partly uses PHP.
Development can be test-driven with PHPUnit, and there are frameworks (such as Yii) which are a great way to enforce MVC.  I hope that kids who are picking up a bit of PHP will find Yii or similar and learn a bit about MVC (though better still, they could learn web programming with Rails or something).

Java developer

While the language is pretty old now (and a bit dated in some respects), the Java ecosystem is thriving.  Its open nature has led to the availability of many libraries and frameworks.  Additionally, the Java ecosystem is not limited to the Java language: Scala, Groovy, Fantom, Clojure and many more run on the JVM.

If someone introduced themselves as a Java developer, I would assume they had an understanding of architecture that I might not assume a PHP developer had.  I would assume they had the capacity to produce more testable, maintainable code.  Most importantly, due to the number of libraries and frameworks available, I would assume a Java developer had experience in identifying which available tools best suit the task at hand (albeit still within the Java ecosystem).

.NET developer

While .NET is a framework rather than a language, I do not often hear people introduce themselves as a C# developer or VB developer.  A key difference between Microsoft's approach and the Java ecosystem is Microsoft has historically hidden the implementation from the developer ("The framework takes care of that for you!") while in Java-world you have more control.

In my view, a .NET developer has experience with the framework which means they can put something together to do the task at hand in the Microsoft-standard manner.  The project will be something that another .NET developer can look at and understand.

Ruby developer

Ruby got very popular with Ruby on Rails, so I would expect a Ruby developer to be a web developer with a firm grasp of MVC.  Due to the relative youth of Ruby's popularity, I would expect a Ruby developer to have worked in an environment where using newer technology is more important than established solutions, so they are likely to be aware of what is going on in the software industry.

Javascript developer

Five years ago, I would have thought a Javascript developer had the ability to help make webpages look nice, such as changing the colour of a button when hovering over it.  Now that webpages have become increasingly complex, front-end Javascript is no longer trivial.  Frameworks such as JavaScriptMVC have appeared to help front-end developers.

Now, with the rise of node.js, I would not even make the assumption that a Javascript developer primarily works on front-end code.
Like the JVM in the Java ecosystem, Javascript is becoming the "bytecode of the web", in that there are languages that compile to Javascript.  These are gaining popularity, such as CoffeeScript, Dart and even ClojureScript.

Similar to the Java developer, a Javascript developer could have experience of large projects: making decisions on which libraries and frameworks to use, developing test-driven (with mocha, for example), and keeping the architecture of the system in mind.


Here at Digital Morphosis, we do not consider ourselves any of the above.  We are Software Developers.  Rather than falling into any of the categories above, we write maintainable code targeted to the task at hand.  We are polyglot programmers, using whichever tools are best suited to the task.

What this means for clients

The old saying goes: "If you only have a hammer, you tend to see every problem as a nail".  Without being restricted to a language or framework, we are able to consider solutions that more restricted developers might not be open to.

Tuesday, 27 November 2012

XStream-ly annoying

For many of our RESTful web services, we use XStream to serialise and deserialise to and from XML.
Unfortunately, XStream does not play nice when it encounters something it does not recognise.

Say XStream is used to serialise a class of the following form:

public class BoringClass {
    private String something;

    public String getSomething() {
        return something;
    }

    public void setSomething(String something) {
        this.something = something;
    }
}

To XML along these lines (assuming the class has been aliased to boringClass; by default XStream uses the fully-qualified class name as the element name):

<boringClass>
  <something>some value</something>
</boringClass>

That's all very well, but say you then want it to deserialise XML like the following (the extra element name here is hypothetical) to an instance of BoringClass:

<boringClass>
  <something>some value</something>
  <somethingNew>some new value</somethingNew>
</boringClass>

Rather than ignore the unrecognised element, it throws an exception.

"Why are you sending it that, then?" I hear you ask.  Well, if something new is added to the domain model in a service, a client of that service that uses XStream to deserialise the responses will need to know about the changes that have happened to that domain object (rather than just ignoring the additional elements in the XML).
This meant that an update of one service turned into an update to 10+ applications, even though most of these do not need to deal with this additional detail on the domain object.

After spending a little time looking into the problem, there are things that can be done, but they seem to get complicated rather quickly.  So much so, that manually creating and parsing XML with Dom4J appears more appealing than faffing with the internals of XStream!
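To illustrate the lenient alternative, here is a sketch using the JDK's built-in DOM parser rather than Dom4J (the element names are hypothetical, following the example above): the whole document is parsed, we pick out only the elements we know about, and anything unrecognised is simply never looked at rather than causing an exception.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class LenientParse {

    // Read only the elements we care about; unknown elements are ignored
    // instead of blowing up the way XStream does.
    static String readSomething(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = doc.getDocumentElement().getElementsByTagName("something");
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : null;
    }

    public static void main(String[] args) throws Exception {
        // The unrecognised <somethingNew> element is never even looked at.
        String xml = "<boringClass><something>hi</something>"
                + "<somethingNew>ignored</somethingNew></boringClass>";
        System.out.println(readSomething(xml));
    }
}
```

A service can add as many new elements as it likes and a client written this way carries on regardless, which is exactly the behaviour we wanted from XStream.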

Therefore, we are now considering any reliance on XStream to be technical debt.

JSON is cooler than XML anyway.

Monday, 15 October 2012

Don't track me now, I'm having such a good time!

The concept of online privacy has been in the news extensively in the last few years, with first the introduction and then the implementation of the EU Directive on Privacy and Electronic Communications. We of course covered this extensively through our Cookie OK website.

One of the provisions of the Directive - which covered a wide swath of scenarios yet ended up being colloquially referred to as the 'Cookie Law' - was that it should be up to an individual user as to whether their information was shared with websites. With browsers lacking the ability to give the users this choice directly it was left to the developers of individual sites to implement, in a range of differing ways, methods of gaining the users' permissions to allow their progress to be tracked for analytical, personalisation and advertising purposes.

There has been another solution in the background for the last few years, in the form of a 'Do Not Track' option that both the major browser vendors (Microsoft, Google, Mozilla, Apple and Opera) and web server vendors (primarily Microsoft and Apache) aim to support. The history of how this option evolved from the original discussions in the US is an interesting but complex one, and outside of the scope of this post; it is best told by one of those originally involved, Christopher Soghoian. The purpose of the option, on the other hand, is much more straightforward:
"Do Not Track is a technology and policy proposal that enables users to opt out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms. At present few of these third parties offer a reliable tracking opt out, and tools for blocking them are neither user-friendly nor comprehensive. Much like the popular Do Not Call registry, Do Not Track provides users with a single, simple, persistent choice to opt out of third-party web tracking."

The problem with such a simple technology as this - implemented, in theory, as an "on/off/no preference" switch in the browser - is that its effects are widespread amongst a variety of other organisations. To a user, switching "Do Not Track" on implies that websites they visit in the future will not track them; a setting that, as it lives in the browser, is significantly more far-reaching than a simple cookie. To a website, that same switch may only apply to certain types of tracking. As Ed Bott reports, the Direct Marketing Association is lobbying for "Marketing" to be added to the list of organisation types allowed to track users:
"Marketing fuels the world. It is as American as apple pie and delivers relevant advertising to consumers about products they will be interested at a time they are interested.  DNT should permit it as one of the most important values of civil society.  Its byproduct also furthers democracy, free speech, and – most importantly in these times – JOBS.  It is as critical to society – and the economy – as fraud prevention and IP protection and should be treated the same way. "

The whole situation took a further turn to the ridiculous in June 2012, when it was discovered that the DNT option would be switched 'On' by default in Internet Explorer 10. As the whole point of DNT is to promote consumer choice, it is arguable that no actual choice has been made by the consumer. With this as justification, a patch was submitted to Apache that caused it to completely ignore the DNT option when a user was using Internet Explorer 10. The patch was not added to the code, but the point was made, and many believe that Microsoft is taking away choice rather than providing it.

The Digital Advertising Alliance further stated that organisations are free to ignore DNT without fear of sanctions:
"Specifically, it is not a DAA Principle or in any way a requirement under the DAA Program to honor a DNT signal that is automatically set in IE10 or any other browser.  The Council of Better Business Bureaus and the Direct Marketing Association will not sanction or penalize companies or otherwise enforce with respect to DNT signals set on IE10 or other browsers."

Ultimately, DNT is a good concept, and implemented correctly and responsibly by all parties it would almost certainly have avoided many years of doubt, website re-engineering and, of course, all the associated costs that the implementation of the 'Cookie Law' introduced. As things stand now, it has been hijacked by too many third parties to be anything other than an interesting blip in web browser history. Returning to Christopher Soghoian's original account:
"If industry (or the FTC, Commerce and Congress) ultimately settle on the header based approach, there will likely be an intense lobbying effort on industry's part to define what firms must do when they receive the header. Specifically, they will seek to retain as much data as possible, even when they receive the header. As such, the devil will be in the details, and unfortunately, these details will likely be lost on many members of Congress and the press."

Tuesday, 7 August 2012

Problems with Grails 2.0.4

The Background

We have a complex Grails project consisting of a base plugin called dm-cms, used to render content retrieved from a content-management service.

This plugin consists of, among other objects, a controller called PageController, a service called PageService, and a URL mappings class called CMSUrlMappings (in the default package).

This plugin is installed on an application that acts as the front end to the users.

We then have another plugin called cms-admin, which has dm-cms installed; this is installed on an application used by administrators to edit content.

This plugin will proxy the content from the front end application and render this to the screen with options to edit and publish changes, while allowing you to revert to previous versions of the page.

This plugin consists of, among other objects, a controller called PageController, a service called PageService, and a URL mappings class called CMSAdminUrlMappings (in the default package).

We have similar URL mappings set up in each of the plugins' mappings objects; examples are shown below:

From CMSUrlMappings:

"/" {
    action = "view"

From CMSAdminUrlMappings:

"/" {
action = "view"
url= "index"

Each of these needs to point to the PageController defined in the scope of its own plugin.

These plugins and applications were built using Grails 1.3.7. In order to provide some updates and new functionality to the client, we decided to upgrade them to Grails 2.0.4 (at that point the latest stable release).

We had previously upgraded other applications for different clients from 1.3.7 to 2.0.1 without hitting any problems.

The Problems

During the upgrade from 1.3.7 to 2.0.4 we seem to have stumbled upon two different issues.

1. The first issue centres on how URL mappings from each plugin override one another. From running some basic commands against the admin application and looking through Grails' debug logs, it seems that when the mappings are resolved they are added to a list, and incoming requests are then matched against the mappings sequentially. Once a request matches a mapping, it is forwarded to that mapping's controller. The problem seems to be that if you have two identical mappings, as above, there is no way of defining which mapping overrides the other.

My first thought was to set the loadAfter property in the GrailsPlugin.groovy, but this seemed to have no effect.

As a side note, each mapping itself does point the request to the correct controller within the scope of the plugin in which the UrlMappings object exists, i.e. the CMSAdminUrlMappings mapping points to the PageController in the cms-admin plugin.

This was not a problem under Grails 1.3.7, so I believe there may have been changes to the implementation of URL mappings between 1.3.7 and 2.0.4.

2. The second issue is the creation and resolution of services within Grails controllers.

When a request is made to the cms-admin PageController and there are no conflicts in the UrlMappings, we have hit a problem whereby the PageService injected into the controller is not of the correct type. In this instance, the PageService wired into the cms-admin controller is the PageService defined in the dm-cms plugin.

Interestingly, neither of these problems arises if the application has the cms-admin plugin linked as source. I assume this is because a source-linked plugin is treated as an extension of the application's own source, so compilation and bean registration are handled differently.

The Next Step

In order to establish whether this is an issue with the underlying framework itself, I have decided to create two experimental applications to replicate the issues. Each application will install a plugin that itself has a plugin installed. I will attempt to replicate each problem separately.

By recreating the problem this way I can prove that each issue is present without other code interfering and causing unexpected results.

The Process

Issue 1.

I created a plugin called exp-base. This contained a controller called "PageController", a view for that controller, and a URL mapping as follows:

"/" {

All the controller did was render the view and the view contained an h1 with the name of the plugin.

I then created another plugin called exp-admin, which contained a controller called "PageController", a view for that controller, and a URL mapping as follows:

"/" {

Again all the controller did was render the view and the view contained an h1 with the name of the plugin.

The plugin exp-admin had exp-base installed.

I then created an application with exp-admin installed as a plugin and started this application up.

Issue 2.

I created a plugin called exp-service-base, which contained a service called PageService. This had a method getOutputString that returned "BaseService".

I also created a plugin called exp-service-admin, which contained a service called PageService. This had a method getOutputString that returned "AdminService".

Exp-service-admin also had a controller called AdminController that made a call to the getOutputString method on the pageService and put the return value in the model. This then rendered a view that output the return value from getOutputString in an h1.

The plugin exp-service-admin had exp-service-base installed.

I then created an application with exp-service-admin installed as a plugin and started this application up.

The Result

Issue 1.

When accessing the root URL (the one for which I set up the URL mapping) I was shown the name of exp-base. From examining the logging, what seems to happen is that on startup the URL mappings are read and put in a list. Then, on a request, that list is read and all mappings matching the URL of the incoming request are placed in a list.

It then looks like that list is read and the first mapping in it is handled, by creating the controller bean and then delegating the handling of the request to that controller.
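This is not Grails' internals, but the first-match behaviour described above can be illustrated with a small sketch: all mappings live in one list, an incoming request is matched sequentially, and the first match wins, so when two plugins both map "/" only the earlier entry is ever reached.

```java
import java.util.ArrayList;
import java.util.List;

public class MappingOrder {

    static class Mapping {
        final String pattern;
        final String handler;
        Mapping(String pattern, String handler) {
            this.pattern = pattern;
            this.handler = handler;
        }
    }

    static String dispatch(List<Mapping> mappings, String url) {
        for (Mapping m : mappings) {
            if (m.pattern.equals(url)) {
                return m.handler; // first match wins; later mappings never run
            }
        }
        return "404";
    }

    public static void main(String[] args) {
        List<Mapping> mappings = new ArrayList<>();
        mappings.add(new Mapping("/", "exp-base PageController"));  // resolved first
        mappings.add(new Mapping("/", "exp-admin PageController")); // never reached
        System.out.println(dispatch(mappings, "/"));
    }
}
```

With this model, "overriding" a mapping can only ever mean controlling the order in which the list is built, which is exactly what made the problem so awkward.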

Issue 2.

When accessing the AdminController which calls the service I was shown the "BaseService" string. This means that the wrong service bean is created and added to the controller.

Possible Solutions

Issue 1.

Through trial and error, it seems that the way to resolve this is to add the URL mappings to the application itself. For future applications that use the plugin, the URL mappings will need to be added to the application itself.

Another option is to rename the controllers and change the URL mappings to point to the renamed controllers.

Issue 2.

A possible solution is to define the bean for the required service within the application's resources.groovy. This would cause all autowired pageServices in the application to use the bean defined in the application.
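As a sketch (the class and package names are hypothetical, since the real packages are not shown above), the override in the application's grails-app/conf/spring/resources.groovy would look something like:

```groovy
// resources.groovy in the application: a bean defined here wins over the
// plugins' definitions, so every autowired pageService in the application
// gets this implementation.
beans = {
    pageService(com.example.cmsadmin.PageService)
}
```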

The other option is to rename the page service and all the calls to it in the application.

Thursday, 14 June 2012

Can you read your own code?

I have been reading a lot lately into how to produce better quality code of a more professional standard.

A lot of focus is put on the readability of code, especially to future developers who may be performing maintenance.

I was recently working on a project written in Grails, specifically on a service called TranslationService. This is a fairly simple service which wraps a call to the internal Grails messageSource object. Within this service we resolve a locale from a locale resolver that we have written and pass this through to the messageSource object.

We wanted to change this to allow us to pass through a context and, from this context, retrieve a locale resolver from a map, while defaulting to the original locale resolver we had defined as a property of the service. So, while pair programming with a Grails and Groovy newbie, we began.

The original line of code before it was changed is shown below.

"def locale = localeResolver.resolveLocale(request)"

The following line of code is the result of our efforts.

"def locale = ((localeResolverMap ?: [:])[context] ?: localeResolver).resolveLocale(request)"

We primarily develop using TDD, and our initial effort had most of the logic performed in another private method on the service. With tests written and the functionality correct, we began to refactor; the line of code above is the result.

At first the final result was very pleasing to the eye. From a method which initially was about 5-10 lines in length we had refactored down to one line. Knowing it worked, it was deployed. Since then I have been thinking about the readability of this line.

We called over an experienced Grails/Groovy colleague and asked him to explain, without any context, what the line of code did. While he did work it out, it was not immediately obvious.

Effectively we have a line of code that, while pleasing to the original author and ultimately readable by experienced developers, would be slightly more difficult for a novice developer or someone trying to make a quick fix a year from now.

So what are the options?
  • We could leave it as it is and to hell with the future developers. If they are not skilled enough to work out what the code is doing then they should not work on the project itself.
  • We could comment the line of code to explain what we are trying to achieve.
  • We could factor the line of code out into a method and give it a clear descriptive name.
  • Where a clear and descriptive name is not available or feasible we could factor out into a method and comment the method itself.

Each of these options can be performed without too much work but which do we go with?

The first option, leaving it as is, is not the best approach. As well as potentially making another developer's job harder, there is the possibility of my returning to this code in a year or two and having to work it out myself. I for one am all in favour of making my life easier. In this instance the code is not too complex, and people who can program in Groovy should be able to work it out, but I am using this example to make a wider point. So for argument's sake, let's say this option is out.

Commenting single lines of code is out too. I don't like this approach. While I agree comments should be used, I think they should explain a class or method, not individual lines of code.

Factoring the code out into a single method with a clear and descriptive name sounds perfect. But what is that line of code doing? In essence, it resolves the locale. But it does more than that: it gets the locale resolver based on a context, from a map if that context is a key in the map, or else the default locale resolver. That is not easy to get into a method name that is clear and concise.

So we are left with a method with a clear and concise name, and a comment on that method. This ties in nicely. We still get to keep our nice, complex and tested one line of code while allowing all future developers an easy job of working out what it does.
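As a sketch of where we ended up (rendered here in Java with hypothetical names, since the real service is Groovy), the dense one-liner becomes a small method whose name and comment say what it does:

```java
import java.util.Map;

public class LocaleLookup {

    /**
     * Returns the resolver registered for this context, falling back to the
     * default resolver when the map is missing or has no entry for the context.
     */
    static String resolverFor(Map<String, String> resolverMap,
                              String context,
                              String defaultResolver) {
        if (resolverMap == null) {
            return defaultResolver;
        }
        return resolverMap.getOrDefault(context, defaultResolver);
    }

    public static void main(String[] args) {
        Map<String, String> map = Map.of("admin", "adminResolver");
        System.out.println(resolverFor(map, "admin", "defaultResolver"));
        System.out.println(resolverFor(map, "public", "defaultResolver"));
        System.out.println(resolverFor(null, "admin", "defaultResolver"));
    }
}
```

The logic is identical to the one-liner; the difference is that the next reader gets a name and a sentence instead of a puzzle.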

At the end of the day, the code we write is an evolving entity, and making the job of developers extending, modifying or improving this code easier is something we need to think about very seriously. All too often we hear everyone in the office complaining about the code they are having to read through. So from now on I am going to make a real, conscious effort to improve the readability of mine.

Domain Name Optimisation - the New SEO

When the Internet Corporation for Assigned Names and Numbers (ICANN) was formed in 1998, it was given responsibility for overseeing the workings of generic domain names such as .com and .net, as well as country-specific domains such as .uk, and ensuring that control was delegated to the appropriate organisations. As a simple example, the .uk domain is actually managed by Nominet in Oxford.

In 2011, ICANN's directors voted to allow any organisation willing to pay $185,000 up front (and a further $25,000 per year) the ability to register their own generic and regional top level domains. The list of all applicants was released on June 13th 2012, and contains a number of applications that were to be expected:

.MICROSOFT - Microsoft Corporation
.APPLE - Apple Inc.
.IBM - International Business Machines Corporation

and a lot more that weren't:

.PIZZA - Foggy Moon, LLC; Asiamix Digital Limited; Uniregistry, Corp.; Top Level Domain Holdings Limited
.MUSIC - DotMusic Inc.; DotMusic / CGR E-Commerce Ltd (CY/AP Community); dot Music Limited; Amazon EU S.à r.l.; Victor Cross; Charleston Road Registry Inc.; .music LLC; Entertainment Names Inc.
.SEARCH - Charleston Road Registry Inc.; dot Now Limited; Amazon EU S.à r.l.; Bitter McCook, LLC

These are all very generic terms, ones that you could imagine yourself searching for rather than visiting a known site. Check in your browser: that search can be performed just by typing the word or words into the URL bar, which sends the user to a search engine showing those terms. Try it now - just type "pizza" and see what happens. Me? I get Dominos as my top hit.

But there is no technical reason why one of these companies, once they've gained control of this top level domain, cannot run a website on just that name: http://pizza/

This is exactly the way thousands of companies' internal websites work. When a single word is entered into a URL bar the browser will first check to see if that is a valid website; only if it isn't will it pass control over to the search engine.

These new domains will allow their owners to take users - their potential customers - directly to their own websites when they put "pizza" into their web browsers, cutting out the search engine entirely. This isn't Search Engine Optimisation, this is Domain Name Optimisation, and for $185,000 it is almost certainly worth every last cent.

Thursday, 17 May 2012

The Old Ones are the Best

It is very common for a new technology to appear in the development world. Slightly less common is the entire development community turning round and adopting it wholeheartedly. And when they do, it's often touted as the 'next new thing' to adopt.

MVC (model, view, controller), as a way of separating logically distinct areas of code within a software development project, exists within a vast number of languages and frameworks and has come from almost nothing over the past ten years. In the Microsoft world it has heralded a complete paradigm shift away from the more traditional Web Forms development model to one focussed around demarcating code across strict lines between the user interface, the data and the 'controllers' responsible for channelling information between the two.

Like most developers working with Microsoft's tools, my first awareness of MVC came when Microsoft - slightly late to the party - launched their implementation in 2009. It was only when the third version launched in 2011 that it really took flight on their platform, primarily (I believe) due to Razor, a language designed specifically for writing user interfaces.  For most developers using .NET languages it brought about a massive change in thinking about how applications work; and yet, it shouldn't have done. MVC has existed for many years, even if the name is a new one to most people.

It wasn't one of the design patterns championed by the Gang of Four in their seminal 1994 book, but if I recall the structure of some code I wrote back in the 90s, I was already using a pattern not completely dissimilar to MVC: a Java applet for the user interface, a backend database for storing data (I forget which), and a set of Perl CGI scripts for passing data back and forth; at the time, there was little choice. I certainly didn't invent it, nor these techniques, so how 'new' actually is MVC?

In fact, MVC - as a defined term, not even a concept - can be traced back even further than this. The credit truly belongs to Trygve M. H. Reenskaug, who writes:
"I made the first implementation and wrote the original MVC note at Xerox PARC in 1978. The note defines four terms; Model, View, Controller and Editor . The Editor is an ephemeral component that the View creates on demand as an interface between the View and the input devices such as mouse and keyboard."

We strive to keep our technological skills as up-to-date as possible, but there are times that the old ideas are the best; our skills are in recognising the appropriate technologies for each project we work on. If that means using development technologies older than some of the developers themselves, then so be it.

Friday, 11 May 2012

Yii - changing my perceptions of PHP

For a recent project we were required to write an application in PHP that communicates with RESTful web services.  This meant that the site itself did not require a database.  I had limited experience of PHP, and did not have the best impression of it as a language.

We looked into PHP frameworks and decided to use Yii.  This is primarily due to the following reasons:

* MVC design pattern.
* Testability (both functional testing and unit testing) is considered from the start (with phpunit).
* Although Yii does ORM, and magic like "generating classes from your database tables", it does not REQUIRE the use of a database the way some frameworks do.
* Widely used and relatively well established.

The result is a very fast application, with thin controllers and a service layer (services as Yii ApplicationComponents).  We have very high unit test coverage in the service layer, and we have a suite of functional tests.

Using Yii has resulted in a well-structured application that benefits from the framework, rather than the framework getting in the way.  When looking up how to do things "the Yii way" I always reacted with "that makes sense".

I feel this experience says something about PHP as a language.  It is generally considered that the language is not as elegant as others, and it is a scripting language at heart.  The syntax often feels clunky and functions are named and used inconsistently.  Yet Yii encourages you to write PHP in a way that felt like writing Java or Groovy.

This is similar to my experience with Javascript.  Historically it was used just for some UI magic, and there are a lot of gotchas in the language, yet when used correctly it can result in well-written, non-trivial applications.

Monday, 23 January 2012

Grails plugin: use the "loadAfter" property

I recently wrote a plugin that extends the class of one of Grails' beans and replaces Grails' bean with itself. However, I ran into a funky error where sometimes my bean had replaced Grails', and at other times the Grails default bean was in the context.

Turns out that we can use the "loadAfter" property in the PluginNameGrailsPlugin.groovy class.

By specifying a list of plugin names here, we can control the order in which Grails loads your plugins. In my instance, my plugin was sometimes being loaded before the controllers and groovy pages plugins. I managed to consistently get my plugin to load after these two using the following:

def loadAfter = ['controllers', 'groovyPages']

See the official documentation for more information.

Thursday, 12 January 2012

Migrating a Grails app from 1.1 to 2.0

We're undertaking the task of upgrading a rather large Grails 1.1 plugin, and the apps that depend on it, to Grails 2.0. We shall document any oddities and issues that we encounter along the way.

| Name | Files | LOC |
| --- | --- | --- |
| Controllers | 58 | 3903 |
| Domain Classes | 26 | 606 |
| Services | 27 | 2626 |
| Tag Libraries | 11 | 2755 |
| Groovy Helpers | 48 | 2030 |
| Java Helpers | 1 | 278 |
| Unit Tests | 48 | 5632 |
| Integration Tests | 2 | 61 |
| Totals | 221 | 17891 |

should be fun!

Wednesday, 11 January 2012

Extending Grails' beans with your own subclass

We've just completed a small Grails 2.0 plugin. One of the things this plugin does is replace Grails' "groovyPageLocator" bean with our own subclass implementation that changes the way the class behaves slightly. It was a little tricky to get my subclass injected with its dependencies wired correctly, so I thought I'd write up a quick blog post to document how we did it.

When it came to defining our bean, I basically wanted to override the groovyPageLocator bean defined in "doWithSpring" in the GroovyPagesGrailsPlugin. I managed to do so by copying and pasting the bean definition and changing the bean class - but this seemed a nasty hack. If this plugin changes in a future Grails version, we'd have to mirror the change in our copy - code duplication and all.

As the doWithSpring closure just sets up the template Spring uses to wire up our beans, we can fiddle with the bean definitions here to achieve what we want.

In the end, here's the code we used to inject our own subclass in place of Grails' bean:

def doWithSpring = {
    def pageLocator = delegate.getBeanDefinition("groovyPageLocator")
    pageLocator.beanClass = PluginAwareGroovyPageLocator
    pageLocator.propertyValues.add("order", "someValue")
}

So, we're just grabbing the existing bean definition, changing its class to point to our new class, and adding a custom property that's specific to our subclass. Lovely!

Wednesday, 4 January 2012

Taglib dependencies in Grails 2 unit tests

I'll essentially be reposting information that I found at Ad-Hockery (an excellent blog, by the way), but it was an annoying issue that took me some time to get working, so it's worth reposting.

Given a Grails controller like this:

class SomeController {
    def something

    def index() {
        render something.method()
    }
}
And its test:

import grails.test.mixin.*

@TestFor(SomeController)
class SomeControllerTests {

    void testIndex() {
        // given
        controller.something = [method: { "hello!" }]

        // when
        controller.index()

        // then
        assert response.text == "hello!"
    }
}

All works fine. So, given a very similar situation for a taglib:

class SomeTagLib {
    static namespace = "blargh"
    def something

    def index = { attrs, body ->
        out << something.method()
    }
}

// ------

import grails.test.mixin.*

@TestFor(SomeTagLib)
class SomeTagLibTests {

    void testIndex() {
        // given
        tagLib.something = [method: { "hello!" }]

        // when
        def output = applyTemplate("<blargh:index />")

        // then
        assert output == "hello!"
    }
}

This would not work at all - it was throwing a NullPointerException in the taglib itself, indicating my 'something' collaborator had not actually been wired in correctly.

Thanks to the blog post over at Ad-Hockery, I got this to work in the end by modifying my test as follows:

import grails.test.mixin.*
import org.junit.Before

@TestFor(SomeTagLib)
class SomeTagLibTests {

    SomeTagLib tagLib

    @Before
    void setUp() {
        tagLib = applicationContext.getBean(SomeTagLib)
    }

    void testIndex() {
        // given
        tagLib.something = [method: { "hello!" }]

        // when
        def output = applyTemplate("<blargh:index />")

        // then
        assert output == "hello!"
    }
}

I'm unsure about submitting a JIRA report for this, as it may be expected behaviour; personally, I'd expect the taglib tests to work the same way the controller ones do. Hmm.

Tuesday, 3 January 2012

Tunnelling to CloudFoundry services with Ubuntu

I'd been waiting for a way to access my MongoDB instances on CloudFoundry without going through code, so I was pleased to see a blog post on the CloudFoundry site covering exactly this.

However, after following the instructions I had a few issues getting it running on Ubuntu so here are some notes of what I did.

Trying to install the caldecott gem with

sudo gem install caldecott

I got the following error:

ERROR: Error installing caldecott:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
...
make
g++ [...]
make: g++: Command not found
I took a look, and on my system I do have a copy of g++ (the binary is called g++-4.4), but it wasn't aliased to g++. A quick Google search turned up the fix.

The issue was easily solved by running
sudo apt-get install build-essential
which configured all the relevant bits and pieces.

Then back to the install of caldecott itself, which failed again with a complaint that rack wasn't installed (a dependency of sinatra), so a quick "sudo gem install rack" sorted that.

So once that was sorted it was time to try it... except it didn't work. I ran the following:
vmc tunnel [database name]
and got the following error back:
To use `vmc tunnel', you must first install Caldecott:
gem install caldecott
Note that you'll need a C compiler. If you're on OS X, Xcode
will provide one. If you're on Windows, try DevKit.
This manual step will be removed in the future.
Error: Caldecott is not installed.
Back to Google, where I found a question posted to the CloudFoundry newsgroup which suggested installing bundler with:
sudo gem install bundler
And finally it all worked as advertised.
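To summarise, the command sequence that got the tunnel working on this Ubuntu box (collecting the steps above into one place) was:

```shell
# compiler toolchain needed to build caldecott's native extension
sudo apt-get install build-essential

# gems: rack (sinatra dependency), caldecott itself, and bundler
sudo gem install rack
sudo gem install caldecott
sudo gem install bundler

# then tunnel to the service
vmc tunnel [database name]
```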