Favourite talks from JSConf Iceland

A little time off from tech

I was very fortunate to visit Reykjavik in August this year for JSConf Iceland. Apart from relaxing trips to natural hot spas, battling vikings on rooftop bars backed by the setting sun, dramatic geysers, gigantic waterfalls and snowmobiling on glaciers, there was also the best selection of technical talks I’ve seen at any conference.

Here are a few of my favourites.

A cartoon guide to performance in React

Lin Clark’s talk was a very easy to follow explanation of React’s core algorithm (as it stands currently).

One quick takeaway was to avoid an anti-pattern I’ve seen (or possibly implemented!) where you add a `key` to a collection of elements just to silence a React console warning. If you simply use an incrementing integer, such as the index from a call like

props.collection.map((e, i) => <li key={i}>{e}</li>)

then React won’t be able to easily tell that a collection has been sorted. Instead, use some form of business key (for example, a product code) that is specific to the object in the collection.
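For instance, with a hypothetical product list (and JSX swapped for plain objects so the idea stands alone), deriving the key from each item rather than its index looks like this:

```javascript
// Hypothetical product collection; `code` plays the role of a stable
// business key such as a product code.
const products = [
  { code: 'SKU-001', name: 'Kettle' },
  { code: 'SKU-002', name: 'Toaster' },
];

// With JSX this would be:
//   props.collection.map(p => <li key={p.code}>{p.name}</li>)
// Expressed without JSX, each rendered element carries a key derived
// from the item itself rather than its position:
const items = products.map(p => ({ key: p.code, children: p.name }));
```

Because the key follows the object, sorting `products` moves the same keys around instead of reassigning them, so React can move DOM nodes rather than re-rendering them.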

Reactive animations

Spinning up an electron app

As someone primarily working in frontend web development for an e-commerce company, I don’t have a big set of use cases for setting up a desktop application. However, this talk was great fun and it was nice to see how easy it is to make an app with integration into system menus using JavaScript.

I’ve heard BB-8 sales are through the roof!

Arbitrary computation on the GPU using WebGL

This was another talk where it’s difficult to imagine the use cases for doing that volume of processing on the client whilst locking up the GPU. It was still fascinating to see how the GPU could be used to ‘render’ to an off-screen canvas, then collect the data back by reading the image’s colour pixel by pixel.
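To give a flavour of the read-back step: `gl.readPixels` hands results back as one byte per colour channel, so each value has to be packed into a pixel’s RGBA channels by the shader and decoded on the CPU. A minimal sketch of the CPU-side decode, assuming a hypothetical little-endian packing of 32-bit unsigned results:

```javascript
// bytes is the Uint8Array-style output of gl.readPixels:
// R, G, B, A, R, G, B, A, ...
// Each group of four channels is reassembled into one 32-bit unsigned result.
function decodePixels(bytes) {
  const results = [];
  for (let i = 0; i < bytes.length; i += 4) {
    const value =
      (bytes[i] | (bytes[i + 1] << 8) | (bytes[i + 2] << 16) | (bytes[i + 3] << 24)) >>> 0;
    results.push(value);
  }
  return results;
}
```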

This will Flow your mind

Tryggvi Gylfason’s talk was a very succinct introduction to the Flow syntax and some of the benefits it has for larger teams.

If typed or functional languages that transpile to JavaScript are of interest I’d also recommend checking out recent JavaScript Air talks on Typescript & Flow and typed functional programming.

Progressive rendering – how to make your app render sooner

This talk demonstrated old-school techniques for piping HTML to the browser before you have the whole page, but with NodeJS and streams.
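The core idea can be sketched in a few lines of Node (a hypothetical handler, not code from the talk): write the `<head>` to the response stream straight away so the browser can start fetching stylesheets, then append the body once the slow data arrives:

```javascript
// res is any writable stream, e.g. the http.ServerResponse in a real handler.
// getBody is a hypothetical function producing the slow part of the page.
function renderProgressively(res, getBody) {
  // Flush the head immediately; the browser starts downloading app.css
  // while we are still assembling the rest of the page.
  res.write('<!DOCTYPE html><html><head><link rel="stylesheet" href="/app.css"></head><body>');
  // Later, when the data is ready, stream the body and close the document.
  res.write(getBody());
  res.end('</body></html>');
}
```

In a real app `getBody` would be asynchronous (a database query or an upstream request), with each chunk written as it arrives; the browser renders whatever it has received so far.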

Pivoting to React, at scale

How Pinterest have migrated from their Backbone/Django architecture to React. Probably some lessons here for anyone doing a large migration between frameworks, but particularly anything related to React.

The ones I missed…

With two tracks of talks there’s always going to be one or two you’d still love to see. These are on my playlist:

Dirty little front end tricks

A colleague went to this one and said it was his favourite of the conference.

The New Mobile Web: Service Worker, Push and App Manifests

Service workers are one of those features I’d love to use, but it’s difficult to justify the time while certain popular mobile operating systems are holding back the web. I’ve heard there are lots of interesting ways to use service workers as progressive enhancement, and this talk goes into using them to produce a native-like experience.

Getting Started with Graylog on MacOS

Graylog is an amazing open source tool for recording, viewing and analysing application logs. The performance and full text indexing of the underlying elasticsearch database means you probably won’t need any other logging tools throughout your whole organisation.

Application developers writing to Graylog may need a local instance to retrieve diagnostic information during development. This post gives some quick instructions for setting up a local logging server using docker.

Docker is used for managing lightweight operating system containers with isolated process and network namespaces – each container can have its own process with PID 1 and its own service listening on TCP port 80. Unfortunately, Mac OS X’s kernel doesn’t contain the necessary APIs to run a Linux-based container and so we need to use a Virtual Machine running Linux itself. Boot2docker, installed via Homebrew, comes to our rescue to make the whole process as simple as possible:

brew install Caskroom/cask/virtualbox
brew install Caskroom/cask/boot2docker
brew install docker

Now all the tools are installed we need to download and start up the Linux VM:

boot2docker download
boot2docker init
boot2docker start

Another complexity of the docker host being in a virtual machine is that the docker command line interface needs to connect across a virtual network interface. Boot2docker helps us here by telling us the required settings. I put the ‘export’ lines in my ~/.profile so that the docker command will work without any special setup in the future.

boot2docker shellinit
# outputs:
Writing /Users/djb/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/djb/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/djb/.boot2docker/certs/boot2docker-vm/key.pem
    export DOCKER_HOST=tcp://
    export DOCKER_CERT_PATH=/Users/djb/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1

To test your new docker installation you can run the hello-world container:

docker run hello-world

Assuming that went okay we can now get started with Graylog! Run the following command to download the image for the containers we’ll create. This may take some time as there are a few hundred MB of Linux distribution and Java libraries to fetch.

docker pull graylog2/allinone

You can now create a new container. The numeric arguments are the TCP ports that will be exposed by the container.

docker run -t -p 9000:9000 -p 12201:12201 graylog2/allinone

Once the flurry of activity in your console comes to a stop the container is fully up and running. You can reach the admin page at http://ip-of-server:9000. The IP address is that of your boot2docker virtual machine, which you found earlier using boot2docker shellinit. The default username and password for the web interface are admin and admin.

Once you’re inside the web interface Graylog will warn you that there are no inputs running. To fix this, browse to http://ip-of-server:9000/system/inputs and add an input of type GELF HTTP running on port 12201. The name of the input isn’t important unless you have plans for hundreds of them so I’ve unimaginatively called mine GELF HTTP.

Now we’ve got a running server configured enough for testing, but no logs! A real application will produce logs of its own but for the purpose of demonstration we can write a script to read in your local computer’s system log and send the messages to Graylog for indexing. Note that you’d never do this in production as Graylog is perfectly capable of using the syslog UDP protocol, which would avoid the need to write any code.

#!/usr/bin/env python
import urllib2
import json

# URL of the GELF HTTP input created earlier; replace ip-of-server
# with the address of your boot2docker VM
url = 'http://ip-of-server:12201/gelf'

path = "/var/log/system.log"
with open(path) as log:
    lines = log.readlines()

for line in lines:
    message = {}
    message['version'] = '1.1'
    message['host'] = 'localhost'
    message['short_message'] = line
    req = urllib2.Request(url)
    req.add_header('Content-Type', 'application/json')
    response = urllib2.urlopen(req, json.dumps(message))

Once your logs are in the server you can start searching them. Try querying for kernel to find all the messages logged from the Mac OS X kernel. Having a full text index of our logs is useful, but it only scratches the surface of elasticsearch’s structured storage. A more useful implementation would log application-specific information as fields in the GELF message. For example, with the following code, a complete history of logs for a particular client could be retrieved with a single click:

message['client_ip'] = ipFromApplicationVariable

First steps with Azure CLI

This guide uses Azure’s cross platform CLI to create and destroy a virtual machine.

First things first, let’s define some basic properties of the machine, such as the administrator credentials and its hostname, which will be given the suffix cloudapp.net (the values below are placeholders):

HOST=my-machine
USERNAME=azureuser
PASSWORD='S0meP@ssword!'

To make changes to your Azure account you’ll need to authenticate yourself using an X509 certificate. This can be generated by running the following command, which will launch your web browser:

azure account download

Once the certificate is downloaded you need to register it with the command line tool:

azure account import "~/Downloads/credentials.publishsettings"

Azure uses images (or templates) of operating systems with varying applications preinstalled. To create a new VM you need to pick an image, but these are identified by a mixture of random hexadecimal and textual description. To find the right identifier you can use another API call to search images by their human readable label. The following command downloads the list as json and uses the jq command line tool to find the right json object and return its identifier:

SELECTOR='.[] | select(.label | contains("Windows Server 2012 R2 Datacenter")) | .name'
IMAGE=`azure vm image list --json | jq -r -c "$SELECTOR" | tail -n 1`

You can now create and start your new server:

azure vm create $HOST $IMAGE $USERNAME $PASSWORD \
    --rdp \
    --location "West Europe" \
    --vm-size Medium
azure vm start $HOST

By default there won’t be any endpoints created on the Azure load balancer, save for 3389 used by RDP. To access a web server running on your new server use vm endpoint:

azure vm endpoint create $HOST 80

When you’re finished with the VM you’ll want to delete it to save a few pennies. Use the vm delete command and optionally also remove the VM’s disk storage:

azure vm delete $HOST --blob-delete --quiet

Switching Hibernate’s UUID Type Mapping per Database

JDBC doesn’t include UUIDs as one of its generic SQL types and so the different relational databases supported by Hibernate have different mechanisms to save and retrieve UUIDs. PostgreSQL’s JDBC driver, for example, requires that you use Types.OTHER as a surrogate for the missing UUID constant:

preparedStatement.setObject( index, uuid, Types.OTHER );

Whereas H2 requires the simpler:

preparedStatement.setObject( index, uuid );

Developers may also choose not to use a database’s native mechanism, preferring instead to stipulate storage as a 36 character string or as an ordinary binary number. You tell Hibernate which mechanism you wish to use with an @Type annotation on the column:

@Type(type = "pg-uuid")
private UUID id;

Or an @TypeDef annotation on the package object:

@TypeDef(
        name = "pg-uuid",
        defaultForType = UUID.class,
        typeClass = PostgresUUIDType.class
)
package com.example.domain;

This works fine with one database, but causes problems when you wish to test your application with a different database. The solution is to create another Hibernate type definition which can delegate to a database specific type depending on some environmental setting.

The following implementation loads the database dialect in use from a properties file on the classpath. Strictly speaking, this information is already available in the persistence.xml file, but extracting that would entail much more effort. This is done just once when the class is loaded, so shouldn’t result in a performance impact.

/**
 * A UUID type which selects its SQL descriptor based on the Hibernate
 * dialect named in a properties file on the classpath.
 *
 * @author David Beaumont
 * @see org.hibernate.type.PostgresUUIDType
 */
public class UUIDCustomType extends AbstractSingleColumnStandardBasicType<UUID> {

    private static final long serialVersionUID = 902830399800029445L;

    private static final SqlTypeDescriptor SQL_DESCRIPTOR;
    private static final JavaTypeDescriptor<UUID> TYPE_DESCRIPTOR;

    static {
        Properties properties = new Properties();
        try {
            ClassLoader loader = Thread.currentThread().getContextClassLoader();
            // "database.properties" is an assumed name for the classpath file
            properties.load(loader.getResourceAsStream("database.properties"));
        } catch (IOException e) {
            throw new RuntimeException("Could not load properties!", e);
        }
        String dialect = properties.getProperty("dialect");
        if (dialect.equals("org.hibernate.dialect.PostgreSQLDialect")) {
            SQL_DESCRIPTOR = PostgresUUIDType.PostgresUUIDSqlTypeDescriptor.INSTANCE;
        } else if (dialect.equals("org.hibernate.dialect.H2Dialect")) {
            SQL_DESCRIPTOR = VarcharTypeDescriptor.INSTANCE;
        } else {
            throw new UnsupportedOperationException("Unsupported database!");
        }
        TYPE_DESCRIPTOR = UUIDTypeDescriptor.INSTANCE;
    }

    public UUIDCustomType() {
        super(SQL_DESCRIPTOR, TYPE_DESCRIPTOR);
    }

    public String getName() {
        return "uuid-custom";
    }
}

Now you just have to register this new type as detailed above using @Type("uuid-custom").

Copying Mail using Imapsync

Mail can be copied between IMAP servers using a graphical client such as Outlook or Thunderbird. However, these tend to be quite unreliable, often resulting in duplicates or missing messages.

An alternative is to use the imapsync script. For example, the following command copies all your mail from Fred’s INBOX folder at example.com to the Old-Mail folder at example.net:

imapsync --host1 mail.example.com --user1 fred --ssl1 --password1 secret1 \
         --host2 mail.example.net --user2 fred --ssl2 --password2 secret2 \
         --folder INBOX --prefix1 INBOX --prefix2 Old-Mail --dry

The --dry option prevents imapsync actually making any changes. Once you are sure you have all the options right, you can remove it to perform the sync.

Virtualbox Guest Additions in Fedora 16

VirtualBox compiles kernel modules against the running kernel. This means you should update your system before you start as otherwise the Guest Additions will stop working when you upgrade in the future.

yum clean all
yum update

I recommend doing this in a virtual terminal of the guest machine (press CTRL-ALT-F2) as in graphical mode VirtualBox seems to have a habit of making the kernel of the host OS panic. Once you are updated, make sure to reboot so that you are actually running the latest kernel:

reboot

Now install the headers and compilers needed by VirtualBox Guest Additions:

yum install kernel-devel kernel-headers gcc

And install the guest additions by choosing ‘Install Guest Additions…’ from VirtualBox’s menus. Reboot again and your systems should come up with the guest additions working.

Adding a MySQL datasource to JBoss AS 7

This post assumes you are using a Unix like system with a bash shell and have already downloaded and extracted the JBoss AS 7 application server.

Retrieve the MySQL Connector

The connector can be downloaded from: http://dev.mysql.com/downloads/connector/j/. The current version at the time of writing is 5.1.17, further sections of this guide assume that version number.

Unzip the connector to your downloads folder and save the location of the main JAR in a variable which we can refer to later:

cd ~/Downloads
unzip mysql-connector-java-5.1.17.zip
export MYSQL_JAR=~/Downloads/mysql-connector-java-5.1.17/mysql-connector-java-5.1.17-bin.jar

Add a Module to AS 7

AS 7 uses a module system to provide isolation in class loading. We need to create a new module which contains the MySQL Connector J JAR. Move to the AS installation directory and create the folder structure for the new module:

export JBOSS_HOME=~/Development/jboss-as-web-7.0.0.Final
cd $JBOSS_HOME
mkdir -p modules/com/mysql/main

Copy the driver jar to the new directory and move to that directory:

cp $MYSQL_JAR $JBOSS_HOME/modules/com/mysql/main
cd $JBOSS_HOME/modules/com/mysql/main

Define the module in XML. This is the key part of the process:

vi module.xml

If the version of the jar has changed, remember to update it here:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.17-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
    </dependencies>
</module>

The new module directory should now have the following contents:

module.xml
mysql-connector-java-5.1.17-bin.jar

Create a Driver Reference

Now the module has been created, we need to make a reference to it from the main application server configuration file:

cd $JBOSS_HOME/standalone/configuration
vi standalone.xml

Find the ‘drivers’ element and add a new driver to it:

<drivers>
    <driver name="mysql" module="com.mysql"/>
    <driver name="h2" module="com.h2database.h2">
        <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
    </driver>
</drivers>

The ‘h2’ driver is part of the default JBoss configuration. The new driver that needs adding is named ‘mysql’.

Add the Datasource

Go into the configuration directory and open the main configuration file:

cd $JBOSS_HOME/standalone/configuration
vi standalone.xml

Find the datasources element and add a new datasource inside:

<datasource jndi-name="java:/mydb" pool-name="my_pool"
        enabled="true" jta="true"
        use-java-context="true" use-ccm="true">
    <!-- adjust the connection URL and credentials to match your database -->
    <connection-url>jdbc:mysql://localhost:3306/mydb</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>username</user-name>
        <password>password</password>
    </security>
</datasource>

Start the Application Server

Now try running the application server. Make sure you run initially as a user which has write access to the modules directory where you placed the MySQL connector JAR. This is because the application server seems to generate an index of the directories inside the JAR.

$JBOSS_HOME/bin/standalone.sh

Hopefully, amongst the application server’s console output you should see the following:

20:31:33,843 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) Bound data source [java:/mydb]