MapAction: The Humanitarian Mapping Charity

Aside from the work I do here at thinkWhere, I also volunteer for the humanitarian mapping charity, MapAction. This year MapAction are fortunate to be part of the BBC Radio 4 appeal and have our message presented by Alexander Armstrong (presenter of Pointless, amongst other things), which will air on Christmas Day at 7:54am and 6:56pm, and on 29th December at 3:27pm. As part of the drive to raise awareness of MapAction, thinkWhere has posed some questions to me to find out more about what it’s like being a MapAction volunteer…


Can you tell us a bit about what MapAction does and how it helps during humanitarian crises?

MapAction is a humanitarian mapping charity that works through skilled volunteers. Its aim is to save lives and minimise suffering by making the response to humanitarian emergencies as efficient and effective as possible through the use of maps and spatial data.

When a disaster strikes, people’s lives can be destroyed within a matter of seconds. In a landscape transformed by the disaster, the challenge for responders is to know where to start.

MapAction deploys skilled volunteers to the scene of a disaster within hours of an alert. In the acute phase of a response, our mapped analysis helps coordinate search-and-rescue efforts and the delivery of emergency aid. As the situation on the ground evolves, it helps responders understand the changing needs of survivors.

As a long-term measure we also help to prepare government authorities, responders and communities in vulnerable countries for the impact of disasters before they strike.

How did you get involved in becoming a MapAction volunteer?

My older sister Anne has been a MapAction volunteer for the past 11 years and has deployed to numerous emergencies including Pakistan (2005 earthquake), Indonesia (2006 earthquake), Tajikistan (2006 capacity building), Pakistan (2009 IDP crisis), Haiti (2010 earthquake), Japan (2011 earthquake), Sahel (2012 nutritional crisis), Liberia (2014 Ebola outbreak) and Nigeria (2016 preparedness mission).

Consequently over the last 11 years I’d heard lots about what it meant to be a MapAction volunteer, how much difference MapAction is able to make during an emergency and how rewarding being a volunteer was. MapAction as an organisation is also very well-known and respected within the wider GIS community.

Having chosen to follow a similar GIS career to my sister, I was also driven to become a volunteer. I eventually applied during a recruitment drive and was very pleased to be successful, joining MapAction myself in 2014.

The MapAction team at a recent disaster simulation exercise

What kind of training do you get to prepare you for any missions you might get sent on?

MapAction volunteers are expected to participate in at least seven training weekends a year, keeping us up to date with the latest developments in the humanitarian and technical community.

Every year MapAction also runs an intensive simulation exercise, focusing on developing the skills we need to deliver our mapping service in the potentially challenging context of an emergency response.

Our training helps us make judgement calls about what sort of map products we should create given the time available and the volume of rapid requests coming in.

Recognising that the world is dealing with more conflict-related emergencies, MapAction has also boosted its security training so we can deploy safely to environments where the situation can be highly unpredictable.

Training involves simulations that enable us to ‘learn by doing’ in a practical and safe environment. This has included practising first aid, emergency communication procedures and negotiating with combatants. We hope none of this will ever be needed, but MapAction make a point of always preparing the team for all eventualities.

How do you balance your time between volunteering and working at thinkWhere?

MapAction is a big commitment. Aside from deploying to the field for emergencies, we also provide remote support when required, as well as actively training together as a team throughout the year, so the time involved is significant.

Luckily I have a very understanding girlfriend (thanks Maddie!) and a very flexible employer in thinkWhere. thinkWhere are great at accommodating my MapAction training and volunteering around my work commitments, such as recently allowing me the time to deploy to Northern Iraq. Without this flexibility my volunteering with MapAction wouldn’t be possible.

Team thinkWhere raising money for MapAction by completing Total Warrior in 2015

Where in the world does your volunteering take you?

MapAction deploy to emergencies all around the world, a list of which can be found on the MapAction website. MapAction doesn’t currently have any teams engaged in active responses; however, since September we’ve responded to a number of high-profile emergencies, including the conflict-related crisis in Nigeria affecting as many as 14 million people, and Hurricane Matthew in Haiti, Jamaica and the Bahamas.

In November I also went on my first deployment, with fellow volunteer Naomi, to Northern Iraq to map aid needs and distribution resulting from the conflict-related crisis affecting the region.

QGIS training in Erbil, Iraq

What (from your own personal experience) is the most rewarding thing about volunteering for MapAction?

The most rewarding thing is being able to directly apply the skills from my work life in a completely different context – the humanitarian world – in order to make a difference and help those in need. Meeting and working with the rest of the MapAction team, who share the same drive and commitment, is also very rewarding.

Have you ever met the Royal Patron of MapAction, Prince Harry?

I haven’t personally met Prince Harry, although he did recently attend ‘Triplex’ in Norway, a major international humanitarian field exercise organised by the International Humanitarian Partnership (IHP), in which MapAction volunteers were participating. Prince Harry has an active interest in and great knowledge of our work, having been MapAction’s Royal Patron since 2007. Maybe I’ll meet him one day!

How can others get involved with MapAction?

MapAction will be recruiting a new intake of volunteers in the New Year, so anyone with the right technical skills and an interest in joining our close-knit team should keep an eye on MapAction’s website and social media channels for more information about the skills and commitment required.

It’s certainly not for everyone and requires a flexible employer, like thinkWhere, to be willing to release you at short notice. But the benefits you gain in terms of honing your professional skills, making a difference and meeting like-minded people make it incredibly rewarding.

How can people help MapAction without becoming a volunteer?

The single easiest thing people can do is donate. MapAction doesn’t receive funding from large public appeals, so we rely on the generosity of our supporters. Your donation means we can continue to be responsive, agile and ready for action at any time.


MapAction’s approach has a powerful multiplier effect. Every pound you donate will influence the allocation of many thousands, if not millions, of pounds of humanitarian aid. And that means many more lives potentially saved.

MapAction is often referred to as the “best-kept secret” of the humanitarian world. The video featuring Prince Harry on the MapAction website aims to raise MapAction’s profile, so more people will get behind this unique organisation and help get more aid to more people.

How can we find out more?

Visit the MapAction website and watch the Prince Harry video, which brings to life the pivotal role that MapAction plays in coordinating humanitarian aid.

The MapAction BBC Radio 4 appeal, presented by TV personality Alexander Armstrong, goes out on Christmas Day at 7:54am and 6:56pm.

You can find out more on the MapAction website. You can also follow MapAction’s work and missions on Twitter @MapAction and @MapAction_maps.

AngularJS and OpenLayers: creating a scale bar module


We’ve recently released a new product called mapTrunk. The app is built using the open source libraries AngularJS and OpenLayers 3 (among many others!). As part of our development efforts we looked into creating reusable modules. This blog post offers a high level introduction to AngularJS and OpenLayers 3 and shows how they can work together to create a reusable map scale bar module example.


AngularJS and OpenLayers 3

AngularJS is an open source JavaScript framework for creating web apps. It provides tools for making apps modular. AngularJS handles data binding, which means the view (HTML) automatically updates when the model (JavaScript) updates. Other benefits of AngularJS include in-browser form validation, the ease of wiring an app up to a backend, and the testability of the code. AngularJS also lets you extend the syntax of HTML and inject components into your HTML. This feature comes in handy when creating the scale bar module.

OpenLayers 3 is an open source mapping library. It provides tools for adding dynamic maps to an app. Commonly used mapping controls provided by OpenLayers include zoom in/out buttons, a mouse position display and a scale bar.

The following example shows how to create a basic map with OpenLayers and AngularJS. The result is a map and a button to recenter the map. It also shows the user how many times they have centred the map.


Firstly we need to include the AngularJS and OpenLayers 3 libraries, add a div for the map and add a button. We also need to include the Angular app called “app”, which is created in JavaScript.

<html ng-app="app">
<head>
    <link rel="stylesheet" href="ol.css"/>
    <script src="angular.min.js"></script>
    <script src="ol.js"></script>
    <script src="main.controller.js"></script>
</head>
<body ng-controller="mainController as main">
    <div id="map"></div>
    <button ng-click="main.centerMap()">Button</button>
    <div>You have centered the map on coordinate [0,0] {{main.counter}} times.</div>
</body>
</html>


Here we create our own Angular controller, mainController, which initialises the map and contains the function that is called when the button is clicked, updating the counter.

var app = angular.module('app', []);

(function () {

    'use strict';

    /**
     * The main Angular controller which initialises the mapping
     */
    angular
        .module('app')
        .controller('mainController', [mainController]);

    function mainController() {
        var vm = this;
        vm.counter = 0; = new ol.Map({
            layers: [
                new ol.layer.Tile({
                    source: new ol.source.OSM()
                })
            ],
            target: 'map',
            view: new ol.View({
                center: [0, 0],
                zoom: 2
            })
        });

        vm.centerMap = function () {
  [0, 0]);
            vm.counter += 1;
        };
    }

})();

Creating the scale bar module

The OpenLayers library already has a built-in scale bar control called ‘scale line’; an example can be found in the OpenLayers documentation. One of the requirements for mapTrunk was to create a scale line that can display distances in two units at the same time, metric and imperial.

To create a reusable module we can create a custom Angular directive. Angular directives let us define our own HTML syntax and inject components using it, which makes the HTML easier to read and hides the complexity of the component. In this blog we’re not going to go into the details of Angular directives, so please see AngularJS’s documentation on directives for a full explanation.

First we need to create the Angular directive and decide what the HTML syntax is going to be. In the code snippet below we called the directive scaleLineControl, which translates into the HTML tag <scale-line-control>. The directive needs access to an OpenLayers map object to be able to add a scale line control to the map; the map object can be passed into the directive by adding it as an attribute, map="". The OpenLayers scale line control also needs an HTML target ID, so this ID can be given to the directive as well.

The scale line control is added to the OpenLayers map object using the addControl function. The units of the first control are specified as metric. To create a scale line module which also shows imperial measurements, a second scale line control is added to the map with imperial units. OpenLayers takes care of listening for changes on the map and updates the controls accordingly. Now we should see two scale lines on the map, but they are positioned on top of each other, so we need some CSS to fix this.

(function () {

    'use strict';

    /**
     * @fileoverview This file provides a scaleLineDirective.
     * It adds a scaleLine showing both metric and imperial units.
     * CSS is needed to display the elements nicely.
     * Example usage: <scale-line-control map=""></scale-line-control>
     */
    angular
        .module('app')
        .directive('scaleLineControl', scaleLineDirective);

    function scaleLineDirective() {
        return {
            restrict: 'E',
            link: function (scope, element, attributes) {
                // Evaluate the 'map' attribute to get the OpenLayers map object
                var attr = 'map';
                var prop = attributes[attr];
                var map = scope.$eval(prop);

                // Metric scale line
                var scaleLineControl = new ol.control.ScaleLine({
                    target: 'scale-line-container',
                    className: 'scale-line-top',
                    minWidth: 100,
                    units: 'metric'
                });

                // Imperial scale line, rendered into the same container
                var scaleLineControl2 = new ol.control.ScaleLine({
                    target: 'scale-line-container',
                    className: 'scale-line-bottom',
                    minWidth: 100,
                    units: 'imperial'
                });

                map.addControl(scaleLineControl);
                map.addControl(scaleLineControl2);
            }
        };
    }

})();

Making it look good!

We can specify the CSS class names when creating the OpenLayers scale line controls. By doing so we can customise the default look of the scale line controls. Here we have added the class ‘scale-line-top’ to the metric control and ‘scale-line-bottom’ to the imperial control.

#map {
    width: auto;
    height: 100%;
    position: relative;
    overflow: hidden;
}

#scale-line-container {
    border-radius: 2px;
    background: white none repeat scroll 0 0;
    bottom: 8px;
    left: 8px;
    font-size: 10px;
    position: absolute;
    z-index: 1000;
    padding: 5px;
    text-align: center;
}

.scale-line-top-inner {
    border-style: none solid solid solid;
    border-width: medium 2px 2px 2px;
}

.scale-line-bottom-inner {
    border-style: solid solid none;
    border-width: 2px 2px medium;
    margin-top: -2px;
}


The result is an Angular directive which can be injected into HTML and easily be used in other applications.


For a full working example, see this Plnkr.

Infinite QGIS plugin repositories in the cloud

[…we found it useful to make each version of the code available for installation and testing via their own QGIS repository…]

Here at thinkWhere we’ve recently released roadNet, a tool for managing the spatial database of road layouts, roadworks and roadside assets that local authorities use to create local street gazetteers and to plan maintenance and closures.  roadNet runs as a plugin on top of the excellent open source GIS package, QGIS.

During the build of roadNet, we found it useful to make each version of the code available for installation and testing via its own QGIS repository.  This post explains how it works.

roadNet manages the spatial and non-spatial data required to produce a BS7666-compliant local street gazetteer.  It features automatic line-splitting and updating multiple database tables in response to user edits as well as data validation and export in standard gazetteer data transfer formats e.g. SDTF.

git, GitHub, Shippable, Docker and Amazon S3

The roadNet continuous integration (CI) system makes use of a number of cloud-based services.  We use git and GitHub for version control to allow developers to track changes to the code and use separate branches to develop new features.  GitHub is linked to Shippable, which watches out for new commits to the code base.  This is similar to other CI systems such as Travis.  When the new code is committed Shippable spins into action.


Shippable is used to check the code and to create the cloud repositories; the instructions to do this are stored in the shippable.yml file. The checks run inside a Docker container, which has QGIS and all its dependencies already installed and configured.

This stage was a bit tricky to configure because QGIS, as a desktop GIS application, assumes it is running on a desktop computer with an attached display to which it can send windows and message dialogs, when in fact it is running on a little bit of memory and a little bit of hard disk on a big computer in a warehouse somewhere, i.e. the Amazon cloud.  The DockerFile contains the instructions to set up a fake display (an X virtual framebuffer) in the container.
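For illustration only, the relevant DockerFile steps might look something like this (the package name, display number and test command are assumptions, not the actual roadNet configuration):

```dockerfile
# Install the X virtual framebuffer so QGIS has a display to render to
RUN apt-get update && apt-get install -y xvfb

# Point QGIS at the virtual display
ENV DISPLAY=:99

# Start Xvfb, then run the plugin's test suite against the fake display
CMD Xvfb :99 -screen 0 1024x768x24 & python run_tests.py
```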

Once the code has been tested a Python script pushes it out to the repository.

QGIS plugin repositories in the cloud

A QGIS plugin repository is just a web-facing folder that contains a plugins.xml file describing the available plugins, plus a series of zip files containing the code.  Amazon Web Services includes the S3 service, which provides ‘buckets’ for storing files.  These can be configured to be accessible from the web, making them ideal for hosting repositories.
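As a sketch of the first half of that process, the hypothetical helper below (not thinkWhere’s actual script) builds a minimal plugins.xml entry for a single plugin.  The element names follow the usual QGIS repository convention, and the bucket URL is a placeholder:

```python
import xml.etree.ElementTree as ET

def build_plugins_xml(name, version, zip_url, description='', min_qgis='2.0'):
    """Build a minimal plugins.xml document for a one-plugin repository.

    Element names follow the QGIS plugin repository convention; real
    repositories may carry extra fields (author, homepage, etc.).
    """
    root = ET.Element('plugins')
    plugin = ET.SubElement(root, 'pyqgis_plugin',
                           {'name': name, 'version': version})
    ET.SubElement(plugin, 'description').text = description
    ET.SubElement(plugin, 'version').text = version
    ET.SubElement(plugin, 'qgis_minimum_version').text = min_qgis
    ET.SubElement(plugin, 'download_url').text = zip_url
    return ET.tostring(root, encoding='unicode')

# Placeholder bucket URL for illustration
xml_text = build_plugins_xml(
    'roadNet', '1.2.3',
    'https://example-bucket.s3.amazonaws.com/roadnet/roadnet.zip',
    description='Roads and street gazetteer management')
print(xml_text)
```

The real deployment script would also zip the plugin code and copy both files to the S3 bucket.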

The script contains the instructions to zip up the files, prepare plugins.xml and copy the files to S3.  The core of the key function is below:

def update_repos(build_name):
    """
    Create new repo for build, and update latest master/dev repos.
    :param build_name:
    """
    deploy_to_s3(build_name)
    deploy_to_s3('latest_push')

    branch = os.getenv('BRANCH')
    if branch is None:
        return
    elif branch == 'develop':
        deploy_to_s3('latest_develop')
    elif branch == 'master':
        deploy_to_s3('latest_master')

It is so easy to create repositories that we just make lots of them.  Every set of changes gets a build_name.  The first ‘deploy_to_s3’ line creates a repository specifically for that build.  These are all stored indefinitely.  This means that just by connecting to the specific repository, we can run the code as it was at any stage during the development.

The other ‘deploy_to_s3’ lines provide convenience repositories.  These get a copy of the code that was just pushed, meaning developers can connect to the latest_push and see their most recent changes.  thinkWhere’s testers can connect to latest_develop and try out new features as soon as they are merged into the develop branch.  Clients point their QGIS installs at the latest_master branch to ensure that they keep up with the latest stable releases.

Anyone can update to the latest version in their branch with a single click in the QGIS plugin installer.


We have found the automatic deployment of QGIS plugins to be immensely useful, facilitating both rapid development/testing feedback loops and easy delivery of bug fixes and upgrades to the master branch. Check out roadNet today from the official QGIS plugin repository.

If you think this may be useful to you, please also have a look at the code – it’s all released under GPL v2.

Further reading

You can install the latest stable build of roadNet in your local QGIS installation by adding our latest_master repository (Plugins > Manage and Install Plugins > Settings).

Viewsheds and Visibility Analysis in QGIS

Recently I had the occasion to do a bit of viewshed analysis in QGIS. I have done these before using tools in ArcGIS, but this was the first time I had the pleasure of doing this kind of analysis in QGIS. I was impressed by the simplicity and flexibility of the toolset.

The views from our office at Stirling are fairly legendary, with Stirling Castle, much of the Ochil Hills and the Wallace Monument all visible from our windows. But how to quantify this – what can we see from our office? This is what a viewshed does – using a height model and a position on the map it will tell you what you can (theoretically) see.

As a starting point, we can use the Ordnance Survey’s 50m Digital Elevation Model: Terrain 50.

Scotland DEM

So now we have the heights of all the hills and glens of central Scotland represented in a raster.

The function we are going to use is GRASS’s r.viewshed, which rather conveniently does most of the heavy mathematical lifting for you. It integrates nicely into QGIS, allowing you to use the algorithm without delving into the GRASS interfaces.


So here we are using the elevation model, Terrain 50, and the coordinates identifying the viewing position (thinkWhere’s office in Stirling).

We are on the first floor, so without measuring the height of the building or of an average employee standing up, we have estimated the height of the viewing point at 5m.

The viewshed analysis also allows you to set a height offset for the target elevations. This would be very useful for an analysis of windfarm visibility or for a radio mast.

A good way to think about it is like this:

viewshed theory1

In this first example, the viewer is looking to see what part of the landscape they can see. The viewing position is at the height of the person, and no height offset is applied to the landscape.

In this second example, the viewing position is on top of a hill, looking out at objects of a defined height.

viewshed theory2
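The geometry behind these pictures can be captured in a toy one-dimensional sketch (plain Python, not the GRASS algorithm): a target cell is visible if the straight sight line from the observer clears every intermediate elevation, with optional height offsets at either end.

```python
def visible(elevations, observer_idx, target_idx,
            observer_offset=0.0, target_offset=0.0):
    """Return True if the target cell can be seen from the observer cell.

    Works on a 1-D elevation profile: the straight line from the
    observer's eye to the (possibly raised) target must pass above
    every intermediate cell.
    """
    eye = elevations[observer_idx] + observer_offset
    tgt = elevations[target_idx] + target_offset
    lo, hi = sorted((observer_idx, target_idx))
    for i in range(lo + 1, hi):
        # Height of the sight line above cell i (linear interpolation)
        frac = (i - observer_idx) / (target_idx - observer_idx)
        if elevations[i] > eye + frac * (tgt - eye):
            return False
    return True

profile = [5, 5, 20, 5, 5, 5]                      # a single hill at index 2
print(visible(profile, 0, 4))                      # the hill blocks the view
print(visible(profile, 0, 4, target_offset=40.0))  # a 40m mast pokes above it
```

The real algorithm does the same test in two dimensions for every cell of the raster, which is why it is handy to have GRASS do the heavy lifting.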

So we can run this algorithm in QGIS to compute the theoretical view from our office.

The raw output looks a bit like this; it needs a little processing to become something more meaningful.

viewshed output

First of all, we’ll make this a vector for better symbology options. We can use the GDAL “Polygonize” function for this. The way that QGIS integrates these different applications into one place makes life so much easier for the GIS analyst.


Bringing the QGIS symbologies into play, we can show the viewshed like this:

viewshed output_backdrop

However, we can have more fun than that.

Projecting a hillshade on to the Digital Elevation Model and bringing in some Ordnance Survey Vector data allows us to produce something far more meaningful.


Using the hillshade analysis (which creates the really funky 3D effect on the landscape) and bringing in some of the Ordnance Survey’s Vector Map District data, you can bring this map alive. By using only a small amount of the data and concentrating on physical features, the viewshed analysis can be easily understood.

Using the blend mode stops the viewshed from obscuring the underlying data in the hillshade raster, making the information easy to understand. A few labels from the VMD names dataset help give a little bit of context.

Viewsheds are very heavily used in renewables, telecommunications and planning. QGIS, along with Ordnance Survey Open Data, allows this kind of analysis to be done by anyone with access to a computer.


Multi-Container Deployment with Docker Compose


Why Docker?

At thinkWhere we always aim to keep pace with the latest tech-industry trends, which is easier said than done in such a fast-paced sector! However, one unavoidable technology trend we’re now employing across our application deployment model is Docker. Docker is a software containerisation platform guaranteeing that software will always run the same, regardless of its environment. Docker offers many benefits over traditional application deployment, including:

Simplicity – Once an application is Dockerized, you have full control (start, stop, restart, etc) with only half a dozen commands. As these are generic Docker commands, it is easy for anyone unfamiliar with the specifics of an application to get started.

It’s already Dockerized – Docker Hub is the central marketplace for Docker images to be shared with other Docker users. Often you find that official Docker images for an application already exist, or you can build upon another user’s efforts. Docker images we have used include MapFish Print and Nginx. We have also containerised our own flavours of MapProxy and GeoServer.

Blueprint of application configuration – A Dockerfile provides the blueprint, or instructions, to build an application. This can be stored in source control and refined over time to improve the build. It also removes any ambiguity of build/configuration differences between various deployments.


Rapid to deploy – Having this blueprint for each application means that all we need is a server or virtual machine with Docker engine installed. This has drastically reduced the time spent deploying and configuring our applications.

Plays nicely with continuous integration – Amazon Web Services offer services dedicated to deploying and managing containerised applications. We recently constructed a Shippable Pipeline which builds and pushes new images to the Docker Hub repository as changes are merged into a code base. These new images are pulled down by the Amazon Elastic Container Service and deployed seamlessly. The ‘throw away’ nature of Docker lends itself to scalability, therefore services such as this can be scaled up or down with just a few clicks of the mouse.

Why Compose?

These days it’s rare to deploy applications which exist in a completely standalone context, without the need to communicate with at least one other application or service. If all of these applications happen to be Dockerized, then Docker Compose is a great tool for creating a multi-container application. In a nutshell, Compose lets us start/stop/build a cluster of related Docker containers.

The Compose File

In addition to the existing Dockerfile for each image, a single docker-compose.yaml file must be created for a Compose project. It’s here we define how the various containers will behave and communicate with each other.

This example compose file is from a Flask-restful application I wrote to serve GB Postcode GeoJSON from MongoDB. You can see it working and try your own postcode here.

This Compose configuration comprises three containers – a Python web application which talks to a Mongo database all sitting behind an Nginx web server.

web:
  restart: always
  build: ./web
  expose:
    - "8000"
  volumes:
    - /usr/src/app/static
  env_file: .env
  links:
    - db
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web

db:
  restart: always
  build: ./mongo/
  ports:
    - "27017:27017"
  volumes:
    - /home/matt/postcode_search/mongo:/data/db

This Compose file is largely self-explanatory: the three container configurations (web, db, nginx) are defined separately.

Some of the entries in the example file above will be familiar to Docker users, such as mapping a volume or forwarding ports to the host. The Compose file is best used for setting some of these parameters which may have previously been configured when starting a single container with the ‘docker run’ command. This is because a single command is used to start all the containers within a Compose cluster, rather than starting them individually.

In order to allow communication between containers a ‘links’ entry is used. You can see that the ‘web’ container will be linked to the ‘db’ container. The ‘links’ entry is used in the same way as the ‘–link’ option for ‘docker run’. When the Compose cluster is started the links entries are used to determine the startup order of the containers.

Another unfamiliar entry may be ‘volumes_from’. As you may have guessed this simply mounts all the volumes from another container. In this case the Nginx container needs visibility of the static files from the Python application.

So to bring up the application we simply use the ‘docker-compose up’ command. With this single command we can build (if required) and start our three containers. Easy!



Analysing Placenames in QGIS

We meet many different people trying to do different things here at thinkWhere, and one of the attendees at our QGIS Skills of Analysis & Statistics course was particularly interested in looking at place names. Place names are a really interesting way of telling what kind of history a place has had. The Island of Jura is a great example of this, deriving from an old Norse word meaning Animal Island. Common Scottish names such as Tarbert and Ben Mor are Gaelic-derived place names translating as a narrow strip of land or a big hill respectively.

While we often wonder about the originality of our forebears in how they chose to name things, these are the footprints of history in the maps we use every day. They teach us about where we come from and how the peoples who came before us lived with their landscape. It’s very romantic, and also a terrific subject for some GIS analysis.

QGIS has some nice ways of helping with this kind of thing, alongside OpenStreetMap which is a rich source of place name data. The Ordnance Survey Open Names dataset would also work well for this kind of analysis.

Duns & Dums

A common word found in Scottish place names is “Dun”, meaning a hill fort or roundhouse in the Celtic tradition. This is sometimes rendered as “Dun” (as in Dundee or Dunfermline) or as “Dum” (as in Dumfries or Dumbarton). Less common in Scotland, although still prevalent in Ireland, is the prefix “Don” (as in Donegal); however, this gets confused in Scotland due to the River Don, as Don in Scots Gaelic is more associated geographically with the Celtic river goddess.

In QGIS we can use the expression:

"name" ILIKE '%dun%' OR "name" ILIKE '%dum%'

to select or filter for any words that contain those strings of letters.


The ILIKE function (a case-insensitive LIKE) is a great way of doing this kind of analysis, and you can use the “%” wildcard to make selections in almost any context.
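The same case-insensitive wildcard match is easy to reproduce outside QGIS. Here is a small, illustrative Python emulation (the place list is made up for the example, and only the ‘%’ wildcard is handled):

```python
import re

def ilike(value, pattern):
    """Tiny emulation of SQL ILIKE: '%' matches any run of characters
    and the comparison is case-insensitive. Other wildcards such as
    '_' are not handled in this sketch."""
    regex = '.*'.join(re.escape(part) for part in pattern.split('%'))
    return re.fullmatch(regex, value, re.IGNORECASE) is not None

names = ['Dundee', 'Dunfermline', 'Dumfries', 'Stirling', 'Dumbarton']
hits = [n for n in names if ilike(n, '%dun%') or ilike(n, '%dum%')]
print(hits)  # → ['Dundee', 'Dunfermline', 'Dumfries', 'Dumbarton']
```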

With a spot of good symbology and the use of the Print Composer to show multiple maps on the same page, you can show the distribution of a few different place name components. Having consistent maps next to each other on a page like this helps to show distribution.


A really effective way of showing this kind of data is to use a heatmap. The last few versions of QGIS have built this into the standard symbology tools, which lets you generate heatmaps on the fly without the need to create a whole new raster. This is what the distribution of “dun”/“dum” components looks like.


The big problem with this kind of analysis is its reliance on the data. OpenStreetMap depends on users adding data themselves, which makes an analysis of this nature a little shaky when based on crowd-sourced data.

The OS Open Names dataset would be a great way of addressing that, but it is a huge dataset that would need a database set up for it. We do cover database creation in our advanced QGIS course, so it would make a good follow-up.

Check out our QGIS training offerings on the thinkWhere website.

Apprenticeship at thinkWhere

My name is Jack, and I’ve been an apprentice at thinkWhere for 8 months. During my time here I’ve got to be part of the whole agile experience, as thinkWhere recently transitioned over from traditional waterfall development. At first the terminology was overwhelming: scrum, sprint, kanban, story, etc.


But over time I came to realise I’m not playing rugby, and these terms actually make a difference in development. The experience so far has been invaluable; I’ve discovered the monotony of fixing syntax errors and the satisfaction of something working. If there’s anything I have learned from working here, it’s that just because there is a goalie doesn’t mean you can’t score – in other words, there is always a solution, no matter the problem.

The teams I have worked with have always been helpful towards my development, gradually building up my confidence to work on bigger and better projects. As my time comes to a close, my experience at thinkWhere has been a necessary stepping stone in moving out into the world. I think I’ll remember my experience here; it’s been one big push into the working world, and I’ve gained a lot of new skills from my time being here.


Directory Storage Checker:

This was built for the Support team (who have to keep customer storage statistics in check) to check a client’s current storage usage. I wrote it all in PHP, developing it on one of our test servers and using PhpMyAdmin to inspect database results and to produce a history list.


Automated Reporting:

This was one of the later projects I worked on. The idea is to create a more sophisticated and automated approach to displaying customer data – in this case, a portal which displays all the necessary information about a client’s usage statistics.

Still a work in progress, but it will contain necessary information such as how much memory a client is using, how much phone support they are entitled to, etc. This not only improves our efficiency, as less processing needs to be done for the monthly reports, but also gives customers more visibility of what they are using.

I built this system using PHP, JavaScript, Google Docs’ charts and PhpMyAdmin. I have enjoyed working on this project because of the diverse range of languages that can be used, which gives a lot more creative flexibility in how the information is displayed.


Server Storage Monitor:

The most well-received project was one I did on storage reports. This application runs a batch file which reports the storage available on each server. This is incredibly useful, as there are many different drives across different servers that need to be regularly checked. The checks run in real time to ensure accuracy before any decisions are made. Each day the application saves the current storage figures to a CSV, in case they need to be reviewed at any point. There is also a summary view highlighting any discrepancies in storage.
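As an illustrative sketch of the core check – in Python rather than the PHP and batch scripts actually used, with a made-up output filename – appending a daily free-space snapshot to a CSV could look like this:

```python
import csv
import shutil
from datetime import date

def log_disk_usage(paths, csv_path):
    """Append today's total/used/free figures (in GB) for each path to a CSV."""
    gb = 1024 ** 3
    with open(csv_path, 'a', newline='') as f:
        writer = csv.writer(f)
        for path in paths:
            usage = shutil.disk_usage(path)
            writer.writerow([date.today().isoformat(), path,
                             round(usage.total / gb, 1),
                             round(usage.used / gb, 1),
                             round(usage.free / gb, 1)])

# One row per monitored drive is appended each time this runs
log_disk_usage(['/'], 'storage_history.csv')
```

Scheduling this daily (cron, or Task Scheduler on Windows) gives the same kind of storage history the report describes.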

This application was built with PHP and batch scripts. The project required a lot of research to gain a better understanding of how to use scripts in conjunction with PHP.