Using Make To Manage Docker Image Creation

Like source code, Docker images need to be built, tested and deployed before they can become running containers.

While Docker doesn't come with a build automation framework, you can take advantage of Make to automate the build process across different environments. Using Make gives you a consistent, shared approach to managing your Docker images without the overhead of task runners such as gulp.

To execute commands you need a Makefile. A Makefile contains a list of targets, each defining the commands and arguments to run to perform a particular task, such as building a Docker image.

The contents of a Makefile might look like this:

build:
    docker build -t benhall/docker-make-example .

With this in your project's root directory, executing `make build` will build the container image. Note that Make requires the commands under each target to be indented with a tab character, not spaces.

A Makefile can define multiple targets reflecting different actions. The template below covers the common scenarios for managing Docker images.

NAME = benhall/docker-make-demo

default: build

build:
    docker build -t $(NAME) .

push:
    docker push $(NAME)

debug:
    docker run --rm -it $(NAME) /bin/bash

run:
    docker run --rm $(NAME)

release: build push
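With this Makefile in the project root, day-to-day usage becomes a handful of short commands. A sketch of the workflow (output omitted):

$ make          # runs the default target, which builds the image
$ make debug    # starts an interactive bash shell inside the image
$ make release  # builds the image and then pushes it to the registry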

Learn Docker and Makefiles via Scrapbook, an Interactive Learning Environment.


Certain Cloudflare IP Addresses Blocked By Sky Broadband’s Adult Block List?

An interesting Hacker News post this morning mentions that certain Cloudflare IP addresses might be on the Sky Broadband blocklist. As a Cloudflare user this is extremely concerning, as end users will start to see seemingly random behaviour on my websites.

As more ISPs block wide-reaching services without considering the other websites behind them, this will only happen more frequently.

https://news.ycombinator.com/item?id=9020646

Tip: Empty Cache and Hard Reload in Chrome

Working with JavaScript and CSS daily is a joy, until it’s not and then it’s a nightmare. Browser caching is just one of the aspects that can make working with these technologies a little more difficult.

To make life easier, with the Dev Tools open in Chrome, click and hold the Reload button. A dropdown will appear allowing you to Empty Cache and Hard Reload the page.
[Screenshot: Chrome's Empty Cache and Hard Reload option]

Increasing line height in your terminal and IDE

Just a small tip today. During my “What Developers Need To Know About Visual Design” conference presentations I discuss the importance of line height and how increasing it can improve readability on websites and applications. The same technique applies to your development environment and IDE. Increasing the vertical spacing makes the text much easier to read, so it requires less effort, meaning you're slightly less drained at the end of the day.

Within iTerm you can set the line height in the text preferences.
[Screenshot: iTerm text preferences showing the line height setting]

Enabling infinite scrollback in iTerm

The default profile within iTerm limits how many lines of output it caches and allows you to scroll back. When debugging a large amount of output, or during a long-running terminal session, this can become frustrating.

To enable unlimited scrollback, go into the preferences; on the Terminal tab you'll find the “Unlimited scrollback” option. Tick it and in future you'll be able to see everything, not just the last 10,000 lines.

[Screenshot: iTerm scrollback preferences]

Tunnelling to a Docker Container using ngrok

Ngrok is built for the use case of “I want to securely expose a local web server to the internet and capture all traffic for detailed inspection and replay.”

While playing with RStudio, an R IDE available inside a browser, what I actually wanted was for ngrok to “securely expose a local web server running inside a container to the internet”.

It turns out this is very easy. Let's assume we have RStudio running via boot2docker (b2d) on port 8787.
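For context, a minimal sketch of how that might have been started; the rocker/rstudio image is an assumption on my part, not necessarily the image in use:

$ docker run -d -p 8787:8787 rocker/rstudio   # assumed image; RStudio Server listens on 8787 by default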

To proxy to a port on our local machine we’d use:

$ ngrok 8787

Sadly this will fail, as our b2d container is not running on 127.0.0.1. The way around it is to specify the boot2docker hostname/IP address:

$ ngrok b2d:8787

You’ll get the output:

Forwarding http://47df0f.ngrok.com -> b2d:8787

All HTTP requests to the domain will now be forwarded to your container. Very nice!
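To sanity-check the tunnel, any HTTP client will do; the subdomain below is the one ngrok printed above:

$ curl -I http://47df0f.ngrok.com   # any response here means the tunnel is forwarding to the container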

For those wondering why I have a b2d hostname, I added it to my hosts file because typing is sometimes the bottleneck.

$ cat /private/etc/hosts
192.168.59.103 b2d

Cache Invalidation with Cloudflare, WordPress, Varnish and HTTP PURGE


While having a cache can help WordPress scale, you encounter one of the hardest problems in computer science: cache invalidation. When a new post is published, the homepage cache needs to be invalidated so that it refreshes.

When using Varnish there is a really nice WordPress plugin called Varnish HTTP Purge. Under the covers, when a new post or comment is published, it issues an HTTP PURGE request to invalidate the relevant cached pages.

Unfortunately, if you have Cloudflare in front of your domain then Cloudflare will attempt to process the PURGE request itself and fail with a 403. After all, you don't want the entire world being able to purge your cache.

$ curl -XPURGE http://blog.benhall.me.uk
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>cloudflare-nginx</center>
</body>
</html>

My solution was to add an /etc/hosts entry for the domain on the web server itself, pointing to the local IP address. When an HTTP request is issued to the domain from my web server it skips Cloudflare and goes straight to the Varnish instance, allowing the cache to be invalidated and solving the problem.
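A minimal sketch of the entry, assuming Varnish is listening on the server's own address (the IP below is illustrative):

# /etc/hosts on the web server; send the blog's domain to the local Varnish instance
10.0.0.5 blog.benhall.me.uk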

Making users feel special with an invite (or the Fabric invite email)

As a user, when signing up for a preview of a product you'll likely receive a very generic thank-you message, a MailChimp confirmation, or nothing at all. When a company does something different it stands out, and users generally notice.

To use Crashlytics I needed to join the Fabric developer programme. Fabric is a cross-platform mobile development suite from Twitter that includes a number of modules and tools to help with the application development lifecycle; Crashlytics is its crash reporting and alerting module.

After joining the programme I received the standard email saying I’m on the list. Nothing to see here.

[Image: Twitter Fabric invite]

Nine minutes later a second email arrived. Enough time had passed that it could have been personal rather than automated; unlikely, but I still like to believe.

[Image: Twitter Fabric invite email]

A few things instantly stood out from the email.

1) First, the subject: “Fabric access (need reply)”. Ten minutes ago I was told I was on the list; now I receive an email about my access that requires a reply. It sparked my interest enough to open it.

2) The opening paragraph states the founder “pulled aside one of the devs to create a batch of one just for you.” – Instantly giving the user special treatment and making them feel important. I don't believe this actually happened, but there is still a positive feeling attached to the statement, and to the company as a whole. It's a nice touch.

3) “Check your inbox shortly for the invite” – This keeps me engaged and keeps the product at the front of my mind. It also starts to build anticipation that I might be joining something special.

4) “Let me know once you receive the invite!” – A great way to engage with users and start the conversation. It doesn't ask about first experiences, nor does it say to only get in touch if you need something, both of which would make the user think. It would be really interesting to see if this sparks conversations, and what questions come back attached to that initial email.

A few moments later an invite code arrived and I signed up instantly. Sadly, I didn’t let Wayne know, sorry Wayne.

Scaling WordPress with Varnish and Docker

In my previous post I discussed how my blog is hosted. While it's a great configuration, it runs on a small instance and the WordPress cache plugins offer only limited value. Andrew Martin showed me his blitz.io stats and they put mine to shame. We agreed to add Varnish, an HTTP accelerator designed for content-heavy dynamic web sites, to the stack.

My aim was to have a Varnish instance running in between the Nginx container, which routes all incoming requests to the server, and my WordPress container. With a carefully crafted Varnish configuration file, I use the following to bring up the container:

docker run -d --name blog_benhall_varnish-2 \
   --link blog_benhall-2:wordpress \
   -e VIRTUAL_HOST=blog.benhall.me.uk \
   -e VARNISH_BACKEND_PORT=80 \
   -e VARNISH_BACKEND_HOST=wordpress \
   benhall/docker-varnish

The VIRTUAL_HOST environment variable is used by Nginx Proxy. The Docker link allows Varnish and WordPress to communicate; my WordPress container is called blog_benhall-2. VARNISH_BACKEND_PORT defines the port WordPress listens on inside its container, and VARNISH_BACKEND_HOST defines the internal hostname, which we set when creating the Docker link between the containers.

When a request comes into the Varnish container it is either answered instantly from the cache or proxied to the WordPress container and cached on the way back out.

Thanks to Nginx Proxy I didn't have to change any configuration; it simply reconfigured itself as new containers were introduced. The setup really is a thing of beauty, and it can now scale. I can use the same docker-varnish image to cache other containers in the future.
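For anyone recreating this: assuming the proxy here is the jwilder/nginx-proxy image, a minimal sketch of starting it would be:

$ docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy   # watches the Docker socket and reconfigures itself as containers with VIRTUAL_HOST appear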

The Dockerfile and configuration can be found on GitHub.

The Docker image has been uploaded to my Docker Hub account.

Making Cron jobs easier to configure with Special Words

Cron jobs are a very useful tool for scheduling commands; however, I find the crontab (cron table) syntax nearly impossible to remember unless I'm working with it daily.

Most of my cron jobs are fairly standard, for example backing up a particular directory every hour. While configuring a new job I looked around to remind myself how to execute a command at a particular time every day. Most of the online editors I tried are more complex than the syntax itself. Thankfully I came across an interesting post from 2007 that mentioned special words. It turns out that you can use a set of keywords as well as numbers when defining a job:

@reboot Run once at startup
@yearly Run once a year
@monthly Run once a month
@weekly Run once a week
@daily Run once a day
@hourly Run once an hour

To run a command daily I can simply use:

@daily <some command>
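Applied to the hourly backup example mentioned earlier, a crontab entry might look like this; the paths are illustrative, and note that % must be escaped as \% inside a crontab:

@hourly tar -czf /backups/site-$(date +\%F-\%H).tar.gz /var/www/site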

But when is @daily? According to the crontab manual, @daily is simply shorthand for 0 0 * * *, i.e. midnight. The run-parts entries in /etc/crontab answer the closely related question of when the scripts in /etc/cron.hourly, /etc/cron.daily and friends are executed, in this case 6.25am for the daily set. A strange time, but it works for me!

$ grep run-parts /etc/crontab
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )