
Nate Tovo

Nate is a Lead Developer and maintainer of the Conversant UI Angular component library. He also loves brewing beer.


Developing in Docker containers is awesome - restarting them with every change, however, can be problematic. Thankfully, getting hot reloading working in both a client and server container is pretty simple.

In this article, I’m going to show you how to:

1) Set up two Docker containers to run your client and server applications using the same multi-stage Dockerfile.
2) Volume mount your local files into the containers.
3) Get both of them running with a docker-compose file.

Finally, we’ll take it a step further by adding a shared module and watching both containers hot reload when we make a change to it.

The final code for this article can be found at github.com/csnate/hot-reload-demo

Prerequisites

At Conversant, we use Angular for all of our client applications and Node with TypeScript on our server applications. By using TypeScript for both the client and server applications, we’ll be able to easily share communication models across the two applications, a discussion worthy of a separate blog post!

To get hot reloading working, we’ll use the Angular CLI to serve our client application, and we’ll use tsc-watch to serve our server application. You’ll also need to have Docker, Node, and the Angular CLI installed globally.

Project initialization

In the root of the application, run the following commands, following the prompts to generate the package.json:

$ mkdir hot-reload-demo
$ cd hot-reload-demo
$ npm init
$ npm install tslib

And add the following scripts to the package.json (we will code these up later; the -C flag tells npm to run the script from the given sub-directory):

"scripts": {
    "start": "docker-compose up -d",
    "debug:server": "npm run debug -C server",
    "debug:client": "npm run debug -C client",
    "postinstall": "npm install -C client && npm install -C server"
}

Create the client application

Using the Angular CLI from the root, create the client application and accept the default options.

$ ng new client

And add the following script to client/package.json. Binding to 0.0.0.0 makes the dev server reachable from outside the container, --disable-host-check allows requests forwarded from the host, and --poll is needed because file change events don’t always propagate through Docker volume mounts.

"scripts": {
    "debug": "ng serve --host 0.0.0.0 --disable-host-check --poll=100"
}

Create the server application

Now we’ll have to do some actual work. From the root:

$ mkdir server
$ cd server
$ mkdir src
$ npm init
$ npm install --save-dev @types/node rimraf tsc-watch typescript

And add the following scripts to server/package.json

"scripts": {
    "predebug": "rimraf dist/*",
    "debug": "tsc-watch --onSuccess \"node --inspect=0.0.0.0:9229 dist/server.js\" --onFailure \"echo WHOOPS! Server compilation failed\""
}

Create a new file server/tsconfig.json

{
    "compileOnSave": false,
    "compilerOptions": {
        "outDir": "dist",
        "baseUrl": "./src",
        "module": "commonjs",
        "target": "es2015",
        "lib": [
            "es2017"
        ],
        "sourceMap": true,
        "strictNullChecks": true,
        "noImplicitAny": false,
        "preserveConstEnums": true,
        "removeComments": true,
        "forceConsistentCasingInFileNames": true
    }
}

Create a new file server/src/server.ts. This will be our “server”, but for now it will just be a long-running process.

let i = 0;
setInterval(() => {
    i++;
    console.log(`Process - ${i}`);
}, 1000);

Running the applications

At this point, you should be able to start the client and server applications from the root, each in a separate terminal:

$ npm run debug:server
$ npm run debug:client


Open http://localhost:4200/ to see the client application

Make a change to the app.component.ts client file and watch the client application reload.

Make a change to the server.ts file and watch the server application reload.

Great, we’re halfway there!

Create the Dockerfile file

We will run both the client and server applications from the same Docker image. This may seem like the antithesis of Docker best practices, but when we eventually go to production, the entire project (server and client) will be served from one application - the server. But that is a topic for a future blog post.

We’ll create a multi-stage Dockerfile that will first install all of the production (dependency) packages in the image and then install the development (devDependency) packages in the image.

Create a new file in the root Dockerfile

FROM node:12-alpine as base

WORKDIR /var/build

#---------- PRE-REQS ----------
FROM base as prereq

COPY package*.json ./
COPY client/package*.json client/ts*.json client/angular.json client/
COPY server/package*.json server/ts*.json server/

RUN npm install --quiet --unsafe-perm --no-progress --no-audit --only=production

#---------- DEVELOPMENT ----------
FROM prereq as development

RUN npm install --quiet --unsafe-perm --no-progress --no-audit --only=development

## All files will be volume mounted into the container

EXPOSE 4200
EXPOSE 8100
EXPOSE 9229

Wait, what??!! Where are the application files?

That’s right! We aren’t going to copy any of the actual application files into the image. Instead, we’ll volume mount those files into the container using a docker-compose file. This way, we can make changes to the files from our local system and those changes will be reflected in the container!

Create the docker-compose.yml file

Create a new file in the root docker-compose.yml

version: '3.6'
services:
  server:
    build:
      context: .
      target: development
    command: ["npm", "run", "debug:server"]
    container_name: server
    ports:
      - '8100:8100'
      - '9229:9229'
    volumes:
      - './server/src:/var/build/server/src:delegated'
      - './shared/src:/var/build/shared/src:delegated'
  client:
    build:
      context: .
      target: development
    command: ["npm", "run", "debug:client"]
    container_name: client
    ports:
      - '4200:4200'
    volumes:
      - './client/src:/var/build/client/src:delegated'
      - './shared/src:/var/build/shared/src:delegated'

We are creating two containers here - server and client - that use the same image from our Dockerfile. Each container gets different volumes mounted into the appropriate locations inside the image, and each runs a different command on startup - debug:server vs. debug:client.

Note on “delegated”

You may have noticed an additional property on the docker-compose volume definitions - :delegated. This is a performance optimization flag for Docker Desktop for Mac. For more info, check out the Docker documentation.

Running inside of the containers

Now, let’s see if everything is working correctly. Run the following command

$ docker-compose up --build -d

Once this has finished, run the following commands in separate terminals

$ docker logs --tail 100 -f server
$ docker logs --tail 100 -f client

Look familiar? :) Yep, we are now running the client and the server in docker containers. To prove that hot reloading is still working, make a change to a client file and a server file and watch the output from the logs reflect those changes! Finally, stop the containers by running the following

$ docker-compose stop

Now for some real magic

Great, so now we have our entire project running inside of docker containers. Let’s take the next step and get hot reloading working at the same time by introducing a shared module between the two projects. This module will contain request and response models for any XHR calls from the client to the server. The reason we separate this out into a separate module is so our client doesn’t have a direct dependency on our server and vice versa.
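As a purely hypothetical sketch (these types are not part of the demo, which uses a simpler Model class below), the kind of communication models such a shared module might hold could look like this:

```typescript
// Hypothetical request/response models showing the kind of types a shared
// module keeps in one place. Both the Angular client and the Node server
// would import these same definitions, so the wire contract between the two
// applications can never drift.
export class TitleRequest {
    constructor(public id: number) {}
}

export class TitleResponse {
    constructor(public title: string) {}
}
```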

Create a new folder under the root called shared and add a simple “Model” class to it:

$ mkdir shared
$ cd shared
$ mkdir src

Add the following file - shared/src/model.ts

export class Model {
    public title = 'My Title';
}

Add the following tsconfig file shared/tsconfig.json

{
    "compileOnSave": false,
    "compilerOptions": {
        "outDir": "dist",
        "baseUrl": "./src",
        "module": "commonjs",
        "target": "es2015",
        "lib": [
            "es2017"
        ],
        "sourceMap": true,
        "strictNullChecks": true,
        "noImplicitAny": false,
        "preserveConstEnums": true,
        "removeComments": true,
        "forceConsistentCasingInFileNames": true
    }
}

Now, we’ll create a paths definition for our shared module in the client and server tsconfig files. Add the following to server/tsconfig.json under compilerOptions

"paths": {
    "@shared/*": [
        "../../shared/src/*"
    ]
}

And the following to client/tsconfig.json under compilerOptions (note the difference in the relative paths to shared/src/* - each paths entry is resolved relative to that project’s baseUrl)

"paths": {
    "@shared/*": [
        "../shared/src/*"
    ]
}

Update server/src/server.ts to import the model, instantiate a new model, and add the title to the console log statement

import { Model } from '@shared/model'; // Added

let i = 0;
setInterval(() => {
    const model = new Model(); // Added
    i++;
    console.log(`Title: ${model.title} - ${i}`); // Updated
}, 1000);

Update the debug script in server/package.json to point at the new path of the compiled server.js (since the shared module lives outside server/src, tsc now mirrors the full directory structure under dist):

"debug": "tsc-watch --onSuccess \"node --inspect=0.0.0.0:9229 dist/server/src/server.js\" --onFailure \"echo WHOOPS! Server compilation failed\""

Update client/src/app/app.component.ts to import the model, instantiate a new model, and set the title of the component to Model.title

import { Component } from '@angular/core';
import { Model } from '@shared/model'; // Added

@Component({
    selector: 'app-root',
    templateUrl: './app.component.html',
    styleUrls: ['./app.component.scss']
})
export class AppComponent {
    title = new Model().title; // Updated
}

The part no one tells you about…

So, we’re building the client and server applications with a shared module defined in our tsconfig paths configuration and everything’s working great, right? Well, there’s one thing we’re missing in the server application. When the TypeScript compiler compiles your code down to good old ES6 JavaScript, it converts the import statements to require statements - but it does NOT rewrite the paths inside those require statements. So when we imported the shared model as import { Model } from '@shared/model';, that gets compiled to const model_1 = require("@shared/model"); (to confirm, run npm run debug from the server directory and inspect the files in the dist directory).

But that isn’t a valid npm package or module that Node understands, and we’ll get a nice error from Node telling us so. Luckily, there is a package that handles this for us - module-alias. It allows us to define our own custom aliases for module paths.
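Conceptually, what module-alias does at require time is a simple prefix substitution. The sketch below is only an illustration of that idea - not module-alias’s actual implementation - with an alias table mirroring the _moduleAliases entry we’re about to add:

```typescript
// Illustrative only: a prefix-substitution sketch of the idea behind
// module-alias, not its real implementation. The table mirrors the
// _moduleAliases entry in server/package.json.
const aliases: Record<string, string> = {
    '@shared': 'dist/shared/src',
};

// Rewrite an aliased request like '@shared/model' to a real path;
// requests that match no alias pass through untouched.
function resolveAlias(request: string): string {
    for (const [prefix, target] of Object.entries(aliases)) {
        if (request === prefix || request.startsWith(prefix + '/')) {
            return target + request.slice(prefix.length);
        }
    }
    return request;
}
```

Here resolveAlias('@shared/model') yields 'dist/shared/src/model' - exactly the rewrite that require("@shared/model") needs - while an ordinary request like 'express' is left alone.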

In the server directory, run the following:

$ npm install module-alias

Then add the following node to the root of the server/package.json

"_moduleAliases": {
    "@shared": "dist/shared/src"
}

And finally update the debug script to register module-alias in the node command

"debug": "tsc-watch --onSuccess \"node -r module-alias/register --inspect=0.0.0.0:9229 dist/server/src/server.js\" --onFailure \"echo WHOOPS! Server compilation failed\""

Putting it all together

Rebuild the images and containers, then follow the logs:

$ docker-compose up --build -d 

$ docker logs --tail 100 -f client
$ docker logs --tail 100 -f server

Finally, make a change to shared/src/model.ts, namely change the title to something else and watch both the server and client reload at the same time with the changes reflected in your model title!

Conclusion

And that’s a wrap! We have both our server and client applications running and hot reloading entirely in Docker containers to ease local development. When it’s time to go to production, we will use the same docker image that we’ve been using for local development but we’ll copy the application files into our image instead of using volume mounts. But that is a topic for a future post.

Let me know if you’ve been able to implement a similar strategy in your projects in the comments below.