The tech behind the hack

How we achieved a smooth and successful hackathon project for Smarter Travel LIVE!

Published on 18th March 2016 by Chris Sherry

This post is about the tech stack we used for the Smarter Travel LIVE! hackathon. Want to learn more about what we made? Check out Jon’s post.

Last weekend the Base team headed off to London on a misty Saturday morning to take part in the Transport Hackathon. The event was kindly sponsored by WSP Parsons Brinckerhoff, and for a complete overview of the weekend I recommend scrolling through the live blog they recorded on the day.

The Base hackathon team take a quick coffee break

Photo Creative Commons by Manuel

Some of the leading developers of transport solutions entered the competition, with teams from CycleStreets, Faster Route and a combined IBM/TransportAPI team – the latter of which also provided assistance to all the teams, helping them use their transport data API.

Building a successful project at a hack weekend is down to the decisions you make before and on the day.

You can choose to do one of two things at a hackathon – learn something, or make something.

You can’t do both. I have had success at previous hackathons by sticking to this rule: in the past I have either built something I have built before, but using a new technology I wanted to learn, or built something new using technology I was mostly familiar with. Of course, this wasn’t always the case – I learnt this rule the hard way, having reached the end of my first few hackathons with nothing to show and not much learnt either.

For this particular hack we were playing to win, and although there is always a learning aspect to everything we do, the best decision we made was to stick to our strengths.

“Capturing daily travel disruption data and referencing it to existing transport Open Data” was our chosen challenge. This was right up our street (pun intended) as we knew we could use our experience working with real time bus data to create a solution that was both innovative and viable.

Our team of 6 included a designer, 2 app developers, a frontend developer and R&D engineer Manuel, who worked closely with me to develop the backend solution for our hack. The majority of the team were user-focused, so we decided that approaching the challenge at a high level would make the best use of the skills we had.


This challenge was also selected by the IBM and TransportAPI team. It was great to have another team attack this problem, as they had a different set of skills in the room and were likely to take an entirely different approach to the same brief. I expected their solution to involve innovation at a lower level, and I was both correct and thoroughly impressed! Using IBM Watson – a natural language processing platform – they focused on interpreting tweets about disruptions, calculating the mood from the tweet and using the location data from the tweet to associate it with a particular bus route.

Since we didn’t have much experience with natural language processing, and knew it to be a particularly tricky problem to solve, our solution was a little more direct – but, in my opinion, equally valuable.

The Elgin Roadworks website

We used the Elgin roadworks API to locate disruption data, and combined it with crowdsourced disruption reports from commuters, gathering the data into a standardised format. With a little help from Jonathan Raper of TransportAPI, we were able to get bus route data via the Urban Data Store – a Future Cities Catapult project, working alongside TransportAPI, that provides unrestricted access (no API key needed) for non-profit and app research and development purposes. Using this bus route data we identified bus routes that were likely to be impacted by each disruption based on its location, and exposed the results through an API. Our mobile apps used this API to display the data in a user-friendly way, but it could equally be integrated into other solutions.
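At its core, the matching step is simple geometry. As a sketch (not our exact code – the function and field names here are hypothetical), you can flag a route as potentially affected when any of its stops falls within some radius of a disruption, using the haversine formula for distance:

```php
<?php
// Sketch of the route-matching idea (names and radius are illustrative)

// Great-circle distance in metres between two lat/lon points
function haversineMetres($lat1, $lon1, $lat2, $lon2)
{
    $r = 6371000; // mean Earth radius in metres
    $dLat = deg2rad($lat2 - $lat1);
    $dLon = deg2rad($lon2 - $lon1);
    $a = sin($dLat / 2) ** 2
       + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * sin($dLon / 2) ** 2;
    return 2 * $r * asin(sqrt($a));
}

// $stops is a list of array('lat' => ..., 'lon' => ...) for one bus route
function routeIsAffected(array $stops, $lat, $lon, $radiusMetres = 200)
{
    foreach ($stops as $stop) {
        if (haversineMetres($stop['lat'], $stop['lon'], $lat, $lon) <= $radiusMetres) {
            return true;
        }
    }
    return false;
}
```

A linear scan over stops is plenty fast at hackathon scale; a real deployment would want a spatial index.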

A member of the IBM team commented that our hack looked like “it could be used today” and that the API “could be implemented and be really useful”, which was testament to the friendly and collaborative environment on the day. It shows that by playing to our strengths we produced something that both matched the brief and could be presented as a polished solution.

Behind the tech

As discussed, both our choice of challenge and the approach we took to solving it were key decisions, but there is another area that was instrumental in letting us build our hack rapidly and (mostly) unhindered by technical problems – our development tools.

A bad workman always blames his tools

My failings at previous hack events have often come about from using the wrong tool for the job. This time around the tools we chose helped us to be good workmen, and so I think they deserve some praise.

Development Environment

Jon and I had MacBooks running different versions of OS X, and Manuel had a laptop running Unix. The server we had created to host the hack was running Ubuntu. Vagrant allowed us all to create an identical virtual machine running Ubuntu to work from, which meant all our development and testing was consistent with the machine the code we were writing would ultimately run on. Docker is another tool we could have used, but not all of us had used it before, so we stuck to the tool we all knew and liked.
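For illustration, a minimal Vagrantfile for this kind of setup might look something like the following – the box name, IP address and playbook path are assumptions, not our exact configuration:

```ruby
# Minimal Vagrantfile sketch: one Ubuntu VM, provisioned by the same
# Ansible playbook we ran against the server (paths are hypothetical)
Vagrant.configure("2") do |config|
  # Use the same Ubuntu release as the production server
  config.vm.box = "ubuntu/trusty64"

  # Fixed private IP so the app can be browsed from the host machine
  config.vm.network "private_network", ip: "192.168.33.10"

  # Provision with the same playbook as the server, so dev matches production
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision/site.yml"
  end
end
```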

The day before the hack I wrote an Ansible playbook to provision the server we would be presenting our hack on with a standard LEMP stack. This saved us a lot of time: once I had written the plays to be run against the server, the same plays were run against my, Jon’s and Manuel’s Vagrant-managed virtual machines, so we all had identical development environments. Puppet and Chef are alternatives which are arguably more powerful but also more complex; in my opinion Ansible is the best of the three.

Running an Ansible Playbook
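A cut-down sketch of what such a LEMP play might have looked like – the package names assume the Ubuntu 14.04 era, and the real playbook did rather more than this:

```yaml
# Hypothetical excerpt of a LEMP provisioning play
- hosts: all
  become: yes
  tasks:
    - name: Install nginx, MySQL and PHP
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - nginx
        - mysql-server
        - php5-fpm
        - php5-mysql

    - name: Ensure nginx is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: yes
```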

Database Management

We had a few options for managing our databases. The simplest was to write PDO queries to insert and update records in our SQL database, and for the most part this would have been just fine. However, with three of us working on the web application side of the hack, whenever one of us added a new table or updated a field, the others would need to run manual database commands on their virtual machines to keep them in line, and inconsistencies could creep in – and it was likely we would be doing this a lot as the hack evolved and our solution revealed itself. This is where Doctrine came in. Although there was a little bit of work to set it up, the power it gave us to generate entities from our mapping files, control the structure of our databases, and expose an ORM meant we almost forgot there was a database there at all!

A Doctrine Mapping
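As a sketch, a YAML mapping for a crowdsourced disruption report might have looked like this – the entity and field names here are illustrative, not our actual schema:

```yaml
# Hypothetical Doctrine YAML mapping for a disruption report
Hack\Entity\Disruption:
  type: entity
  table: disruption
  id:
    id:
      type: integer
      generator:
        strategy: AUTO
  fields:
    description:
      type: text
    latitude:
      type: float
    longitude:
      type: float
    reportedAt:
      type: datetime
      column: reported_at
```

From a mapping like this, Doctrine’s command-line tools can generate the entity class and keep the database schema in sync (e.g. `vendor/bin/doctrine orm:schema-tool:update --force`), which is what saved us from running manual SQL on each virtual machine.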

Application Framework

The hardest choice was which framework to use. We knew we wanted an ORM, so Symfony and Laravel were the obvious choices. Laravel is great for rapid development and has its own ORM in Eloquent, but neither Manuel nor I had used it recently. Symfony was much more familiar to us, but we felt it was overkill for what we needed, and it doesn’t work straight out of the box – there’s a lot of configuration needed before you can really get started. We wanted to hit the ground running but also be able to integrate Symfony components such as Doctrine into the app easily. This is where Silex was perfect. As a micro framework, it gave us the routing we needed to get started quickly, and its service providers meant we could plug Doctrine and the Twig templating engine straight in without fuss.
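A stripped-down sketch of the shape a Silex front controller takes – the route, database name and credentials here are illustrative; note that Silex’s DoctrineServiceProvider wires up Doctrine DBAL as `$app['db']`, with the full ORM needing an extra provider on top:

```php
<?php
// Hypothetical Silex front controller (route and db names illustrative)
require_once __DIR__.'/../vendor/autoload.php';

$app = new Silex\Application();

// Twig for the HTML views
$app->register(new Silex\Provider\TwigServiceProvider(), [
    'twig.path' => __DIR__.'/../views',
]);

// Doctrine DBAL, exposed as $app['db']
$app->register(new Silex\Provider\DoctrineServiceProvider(), [
    'db.options' => [
        'driver'   => 'pdo_mysql',
        'dbname'   => 'hack',
        'user'     => 'hack',
        'password' => 'secret',
    ],
]);

// The JSON endpoint the mobile apps consume
$app->get('/disruptions', function () use ($app) {
    $rows = $app['db']->fetchAll('SELECT * FROM disruption');
    return $app->json($rows);
});

$app->run();
```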

Continuous Integration

Jenkins Build Pipeline

At the start of the first day of the hack we deployed our code by connecting to the server and pulling the latest code from our repository by hand. This quickly became a repetitive task we could have done without, so by the second day we had set up a deployment pipeline in Jenkins. Every time we merged code into the master branch, Jenkins – like the good butler that he is – would re-run the Ansible playbook to apply any provisioning changes, deploy the latest code, and run some smoke tests to ensure the deploy had been successful. This didn’t take long to set up, and with each run replacing three manual commands, the 23 pipeline runs on the second day saved us typing 69 commands!
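Roughly speaking, each pipeline run replaced three manual steps like these – the hostnames and paths are placeholders, not our real setup:

```shell
#!/bin/sh
set -e  # abort the build on the first failing step

# 1. Re-apply any provisioning changes to the server
ansible-playbook -i production provision/site.yml

# 2. Deploy the latest code from master
ssh deploy@hack-server 'cd /var/www/hack && git pull origin master'

# 3. Smoke test: fail the build if the API stops responding
curl --fail --silent http://hack-server/disruptions > /dev/null
```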

Awaiting judgement

So all that’s left to do now is wait and see what the judges think of our hack. All of the teams have had the rest of the week to tweak and polish as they see fit; we haven’t needed to, beyond adding a little extra error reporting so we can be sure the demo runs smoothly. In my opinion, the fact that we haven’t needed to add anything new since the weekend to make the hack presentable means that, regardless of the judging results, our hack was a success!


Web App Engineer

Chris Sherry