Dockerfile templating to automate image creation
By Luud Janssen
5 min read
A lot of open source projects distribute multiple versions of their Docker images on Docker Hub. For example, Node.js has a set of officially supported images for each Node.js use case. For starters, they have a Docker image for each patch version of the runtime, but even those are split up into multiple images based on different distributions of Linux. I take Node.js as an example here, but this is a very common pattern across official Docker images.
If we take a look at the repository for the official Node.js Docker images we can see that the source contains a Dockerfile for each image variant. And that's a problem because maintenance on these images gets complicated quickly. Imagine having to add a package to all of these images. That's a lot of manual edits, but it's manageable. Now imagine adding a package to only specific versions of the image and a different package depending on the Linux distro. It's doable, but it's not fault-tolerant and you can quickly lose track of all the different rules.
This might not be a problem for the Node.js Docker images team. But at iO, we have built images for our Magento projects that do suffer from these problems. These build images are used by our continuous integration system and have all the dependencies necessary to create a production-ready Magento instance, which includes building the front-end, installing Composer dependencies and more. This requires the images to contain PHP, Node.js and a lot of additional Linux packages. We also need these images to be available for different Node.js and PHP versions, and the "additional Linux packages" differ based on these versions. You can see how simply keeping a separate Dockerfile for every combination would quickly become complex, error-prone and difficult to maintain.
That's why we decided to take a different approach and create Dockerfile templates that would be populated with each language's and framework's version, which results in a set of buildable Dockerfiles. We also automated this process with the use of Jenkins, which would recompile these templates on a daily basis, build the resulting Dockerfiles and upload them to our Docker image registry. We build these daily to ensure we always have an image ready with the latest versions. In case a vulnerability is found in a specific Node.js or PHP version, our teams can quickly update the image to the latest version.
All of this produces a clean, maintainable project resulting in Docker images which automatically update for use in our various e-commerce teams. In this article, I'll explain our approach and show you how you can set up such a project.
Assumed knowledge
This article requires you to have basic knowledge of:
- Docker containers and their use cases
- Continuous integration tooling (such as Jenkins)
- Node.js
The concept
The generation process consists of three steps:
- Compiling Dockerfile templates using values defined in a config file
- Generating an output file that lists all generated Dockerfiles
- Building and publishing all the generated Dockerfiles using Jenkins
I'll first talk about compiling the Dockerfiles and generating the output file in Node.js, and then we'll switch to building the images with Jenkins.
Compiling Dockerfiles
This concept translates to any templating language, but we chose Nunjucks as our templating library because it has a rich syntax which can handle most use cases. We'll start by setting up a new Node.js project with the required dependencies:
mkdir docker-templates
cd docker-templates
npm init
npm install nunjucks fs-extra rimraf
We're installing fs-extra and rimraf to write and clean the output files and make our lives slightly easier.
Note: In this tutorial, we're going to use the new Node.js ECMAScript module syntax. Be sure to use a Node.js version that supports this if you follow along. Also, don't forget to add the "type": "module" declaration to your package.json file.
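For reference, a minimal package.json for this project might look like the following (the entry point name and dependency versions are illustrative; the start script is what we'll call from Jenkins later):

{
  "name": "docker-templates",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "fs-extra": "^10.0.0",
    "nunjucks": "^3.2.3",
    "rimraf": "^3.0.2"
  }
}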
Let's start by creating the Dockerfile template:
FROM php:{{ php }}-cli

{# Install base dependencies -#}
RUN apt-get update \
    {#- Skip recommended installs to reduce image size #}
    && apt-get install -y --no-install-recommends \
    git \
    ssh \
    {#- Install packages specific to a PHP version #}
    {%- if php === "7.3" %}
    libbz2-dev \
    {%- endif %}
    {#- Add a line that'll always be executed to end correctly after adding the escape (\) character on each line #}
    && echo "Finished installing packages"

{# Get Composer from the official Composer image -#}
COPY --from=composer /usr/bin/composer /usr/bin/composer

{#- Install Node.js #}
RUN curl -sL https://deb.nodesource.com/setup_{{ node }}.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    {#- Clean APT cache #}
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean
This is just a dummy template, but it's an example to show that we can create multiple variants of the same Dockerfile using a basic template syntax.
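For example, rendering this template with php set to "7.3" and node set to "14" should produce roughly the following Dockerfile (exact blank lines depend on Nunjucks' whitespace control):

FROM php:7.3-cli

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    git \
    ssh \
    libbz2-dev \
    && echo "Finished installing packages"

COPY --from=composer /usr/bin/composer /usr/bin/composer
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean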
Now, let's add a configuration file that dictates which versions of this template to compile:
{
  "php": ["7.3", "7.4"],
  "node": ["10", "12", "14"]
}
We want to create a script that takes the different combinations of PHP and Node.js versions and outputs the corresponding Dockerfiles. With two PHP versions and three Node.js versions, that's 2 × 3 = 6 files. Let's first write a function that outputs these version combinations:
import fse from 'fs-extra'

/**
 * Returns all PHP and Node.js version combinations in the given config file
 */
async function getVersions(file) {
  const config = await fse.readJson(file)
  const versions = []

  for (const phpVersion of config.php) {
    for (const nodeVersion of config.node) {
      versions.push({
        php: phpVersion,
        node: nodeVersion,
      })
    }
  }

  return versions
}
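For the config file above, getVersions resolves to these six combinations:

[
  { "php": "7.3", "node": "10" },
  { "php": "7.3", "node": "12" },
  { "php": "7.3", "node": "14" },
  { "php": "7.4", "node": "10" },
  { "php": "7.4", "node": "12" },
  { "php": "7.4", "node": "14" }
]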
Next, we want to create functions that can turn a Nunjucks template into a file specific for the version combinations of PHP and Node.js:
import { promises as fs } from 'fs'
import nunjucks from 'nunjucks'
import { dirname } from 'path'

/**
 * Compiles a Nunjucks template given a filename
 *
 * @param file The filename of the template to compile
 * @return A Nunjucks template which can be rendered later
 */
async function compileTemplate(file) {
  const templateContents = await fs.readFile(file, 'utf-8')
  return nunjucks.compile(templateContents)
}

/**
 * Promisified version of the `template.render` method to render a precompiled template
 * given a certain context. Note that a precompiled template is rendered via its own
 * `render` method; `nunjucks.render` expects a template name instead.
 *
 * @param template The compiled template from the compileTemplate method
 * @param context The context with which to render the template
 */
async function render(template, context) {
  return new Promise((resolve, reject) => {
    template.render(context, function (error, result) {
      if (error) {
        return reject(error)
      }

      resolve(result)
    })
  })
}
/**
 * Writes a Dockerfile given a template and template context.
 *
 * @param template A compiled Nunjucks template
 * @param file The file path to write to
 * @param versions The version combination as context for the template, e.g. { php: "7.3", node: "10" }
 * @return An object containing all the versions as well as the file that was created
 */
async function createDockerfile(template, file, versions) {
  const dockerfile = await render(template, versions)
  const directory = dirname(file)

  await fs.mkdir(directory, { recursive: true })
  await fs.writeFile(file, dockerfile, 'utf-8')

  return {
    ...versions,
    file,
    // Include an image name for the Jenkins build later;
    // this naming scheme is just an example, adjust it to your registry
    image: `php-${versions.php}-node-${versions.node}`,
  }
}
/**
 * Creates Dockerfiles using a template and a set of version combinations
 *
 * @param template A compiled Nunjucks template
 * @param versions The version combinations to render, e.g. [{ php: "7.3", node: "10" }]
 */
async function createDockerfiles(template, versions) {
  return Promise.all(
    versions.map((version) =>
      createDockerfile(
        template,
        `${outputDirectory}/php-${version.php}/node-${version.node}.Dockerfile`,
        version
      )
    )
  )
}
The compileTemplate and render functions are used to turn the Nunjucks templates into files, and the createDockerfile and createDockerfiles functions take a (set of) combination(s) of versions and render them using the previous methods.
Next, we'll put it all together in a bootstrap method:
import { default as rimrafCallback } from 'rimraf'
import { promisify } from 'util'
import { promises as fs } from 'fs'

const configFile = 'config.json'
const templateFile = 'templates/base.Dockerfile.njk'
const outputDirectory = 'output'

const rimraf = promisify(rimrafCallback)

async function bootstrap() {
  // Delete the output folder before building
  await rimraf(outputDirectory)

  // Get the version combinations
  const versions = await getVersions(configFile)

  // Compile the Dockerfile template
  const template = await compileTemplate(templateFile)

  // Create the output directory
  await fs.mkdir(outputDirectory)

  // Create the Dockerfiles and get the corresponding information
  const files = await createDockerfiles(template, versions)

  // Write the information to an output.json file
  await fs.writeFile(`${outputDirectory}/output.json`, JSON.stringify(files, null, 2))
}

bootstrap().then(() => console.log('Dockerfiles generated'))
This gives you a nice overview of all the steps in this process:
- Clean up any previous builds
- Get all versions to compile
- Compile the template
- Write compiled Dockerfiles to an output directory
- Write an output.json file which contains all the created Dockerfiles so we can build them later (see the example below)
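After running the script, the output directory should look something like this:

output/
├── output.json
├── php-7.3/
│   ├── node-10.Dockerfile
│   ├── node-12.Dockerfile
│   └── node-14.Dockerfile
└── php-7.4/
    ├── node-10.Dockerfile
    ├── node-12.Dockerfile
    └── node-14.Dockerfile

And output.json contains one entry per generated file, for example (with the example image name we added in createDockerfile):

{
  "php": "7.3",
  "node": "10",
  "file": "output/php-7.3/node-10.Dockerfile",
  "image": "php-7.3-node-10"
}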
Building the Dockerfiles
We created the Dockerfiles, but we still need to build them. We do this using Jenkins and its Docker pipeline tools.
Hint: If you have a Continuous Integration pipeline that allows directly executing the "docker" command, you could build the Dockerfiles directly in the Node.js application by spawning child processes.
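As a minimal sketch of that alternative, assuming the docker CLI is available on the build machine and using the output.json format from above, it could look something like this:

import { promises as fs } from 'fs'
import { execFile } from 'child_process'
import { promisify } from 'util'

const exec = promisify(execFile)

// Build every Dockerfile listed in output.json by spawning the docker CLI
async function buildImages() {
  const versions = JSON.parse(await fs.readFile('output/output.json', 'utf-8'))

  for (const version of versions) {
    await exec('docker', [
      'build',
      '--pull',
      '-f', version.file,
      '-t', `${version.image}:latest`,
      '.',
    ])
  }
}

buildImages().then(() => console.log('Images built'))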
Let's create a basic Jenkinsfile first:
pipeline {
    agent any

    options {
        // We perform our own checkout steps
        skipDefaultCheckout(true)

        // Prevent builds from running concurrently
        disableConcurrentBuilds()

        // Support ANSI colors
        ansiColor('xterm')
    }

    triggers {
        // We want to rebuild the images at midnight
        cron('@midnight')
    }

    stages {
        stage('Checkout') {
            steps {
                script {
                    deleteDir()
                    checkout(scm)
                }
            }
        }
    }
}
We'll create a function that takes an entry in the output.json file as input and builds the Dockerfile using Docker.
Note that we tag the images with "latest" and the current datetime, which allows our developers to pin the image version to a specific date and only update when they intend to. The "latest" tag can be unstable, since the daily rebuild of the images will always install the latest (patch) versions of all the frameworks, languages and packages involved.
// Define the build function that returns a Docker build task
def build(def version) {
    def now = new Date()
    def datetime = now.format("yyyy-MM-dd'T'HH-mm-ss")

    // Tag with the date and "latest"
    def tags = [
        datetime,
        "latest"
    ]

    // Pull the latest version of the base image before building so Docker layer caching stays up to date
    def image = docker.build(version.image, "--pull -f ./${ version.file } .")

    // Push all tags
    tags.each { tag ->
        image.push(tag)
    }
}
What's left is to add a stage that runs the Node.js script in a Node.js container, and another stage that reads the output file and executes the build function for each entry:
stage('Create Dockerfiles') {
    steps {
        script {
            // Run the NPM script in a Node.js Docker container
            docker.image("node:14").inside {
                sh('npm ci')
                sh('npm start')
            }
        }
    }
}

stage('Build') {
    steps {
        script {
            // For more info about registry set-up in Jenkins: https://www.jenkins.io/doc/book/pipeline/docker/#custom-registry
            docker.withRegistry("", "") {
                // Read the needed image versions from the generated output.json file
                def versions = readJSON(file: 'output/output.json')

                versions.each { version ->
                    build(version)
                }
            }
        }
    }
}
And that's it! If we create a new pipeline in Jenkins that references this Jenkinsfile, Jenkins will run the Node.js script every night, create the Dockerfiles and push them to our registry.
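Thanks to the datetime tags, teams that want reproducible builds can then pin to a specific daily build instead of "latest", for example (registry, image name and tag are hypothetical):

docker pull registry.example.com/php-7.4-node-14:2024-01-15T00-00-00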