I've always loved the idea of containers, and wrote papers in grad school about their various forms, from logical storage buckets to standardized units of capitalism. Technology's take on standardization is the one that works best: wrap code and the dependencies it needs to run in its own environment, and connect it to its siblings over an internal network. Build the containers when code changes; deploy and destroy them often. With a little work on persistence, it's a great way to develop things faster and put them in real-world environments. Doing so from a laptop is even nicer.
Using Docker images and a node server, we'll dynamically generate a suuuuper dumb dummy application running in containers. The app won't have any data, but the setup connects a nodejs or rails server, a postgres database, and an nginx load balancer -- a real-world arrangement that could easily scale to millions of users.
After testing locally, deploy the same app to a Kubernetes cluster and see how it could scale to billions.
The simplest way to install Docker is through their website, which provides a full taskbar application for managing containers locally. Containers start as the normal code we write everywhere, which has to be built into a container image before we can run it. Below is a simple node server that responds to requests with an environment variable, and a Dockerfile to package it.
The node server responds to requests with the MESSAGE environment variable. We can use this to test multiple deployments.
// ./node-server-container/index.js
var http = require('http');

// Respond to every request with the MESSAGE environment variable
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.end(`<h1>${process.env.MESSAGE}</h1>`);
}).listen(8000);
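To sanity-check the server outside of Docker (assuming node is installed locally), run it straight from its folder and hit it with curl:

# in ./node-server-container
MESSAGE='hello from my laptop' node index.js

# from another terminal
curl http://localhost:8000
# => <h1>hello from my laptop</h1>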
The Dockerfile specifies how the container image is built. In this case, it uses a node image, makes a directory, copies the js file, exposes the port, and starts the script. Once this image is built, we can deploy hundreds of copies of the container, over and over.
# ./node-server-container/Dockerfile
FROM node
RUN mkdir -p /usr/src/app
COPY index.js /usr/src/app
EXPOSE 8000
CMD [ "node", "/usr/src/app/index" ]
Define an nginx.conf with the upstream servers outlined. NGINX will proxy requests to the upstream block, which contains the node apps. We can reach the node containers by name on Docker's internal network:
# ./nginx-container/nginx.conf
upstream our-cool-app {
  server server1:8000;
  server server2:8000;
}

server {
  listen 8080;
  server_name our-cool-app;

  location / {
    proxy_pass http://our-cool-app;
    proxy_set_header Host $host;
  }
}
The nginx container's Dockerfile just swaps the default config for ours:

# ./nginx-container/Dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
A docker-compose.yml at the project root ties the three containers together:

# ./docker-compose.yml
version: '3.2'
services:
  # Build node server Containers
  server1:
    build: ./node-server-container
    tty: true
    environment:
      - 'MESSAGE=no. 1'
    volumes:
      - './simple-server/data:/docker-vol'
  server2:
    build: ./node-server-container
    tty: true
    environment:
      - 'MESSAGE=no. 2'
    volumes:
      - './simple-server/data:/docker-vol'

  # Build nginx Container
  nginx:
    build: ./nginx-container
    tty: true
    links:
      - server1
      - server2
    ports:
      - '8080:8080'
Build and start the whole stack with:

docker-compose up --build
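Once the containers are up, hitting nginx on port 8080 should alternate between the two node servers, since round-robin is nginx's default upstream behavior:

curl http://localhost:8080   # <h1>no. 1</h1>
curl http://localhost:8080   # <h1>no. 2</h1>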
Without getting into building a Rails app (I have some separate posts about that), get a barebones Rails API connected to postgres. Add models, data persistence and seeders later.
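If you need the skeleton itself, something like this generates an API-only app configured for postgres (assuming Rails is installed on the host; the rails-app name is arbitrary):

rails new rails-app --api --database=postgresql --skip-bundle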
The Gemfile only needs the basics, plus pg, which the postgresql adapter in database.yml requires:

# Gemfile
source 'https://rubygems.org'

ruby '2.5.1'

gem 'rails', '~> 5.2.2'
gem 'pg'
gem 'puma', '~> 3.11'
Create an empty Gemfile.lock so the Dockerfile has something to copy:

touch Gemfile.lock
The database host field needs to be the same name as the database container, which gets defined in docker-compose.yml. We will use postgres to keep it simple.
# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: postgres
  username: worker
  password: <%= ENV['PG_PASSWORD'] %>

development:
  <<: *default
  database: base

test:
  <<: *default
  database: base

production:
  <<: *default
  database: base
# ./rails/Dockerfile
FROM ruby:2.5.1
RUN apt-get update -qq && apt-get install -y build-essential postgresql
RUN mkdir /rails-app
WORKDIR /rails-app
ADD ./Gemfile /rails-app/Gemfile
ADD ./Gemfile.lock /rails-app/Gemfile.lock
RUN bundle install
# Add the rest of the app after bundling so the gem layer stays cached
ADD . /rails-app
We could use a custom image to spin up multiple databases in the same container (one each for Rails' dev, test, and prod environments), but for now we'll just use the official postgres image.
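As a sketch of that custom-image idea: the official postgres image runs any script placed in /docker-entrypoint-initdb.d the first time it initializes its data directory, so a small shell script (the filename and extra database names below are hypothetical) could create the additional databases:

#!/bin/bash
# create-databases.sh -- copy into /docker-entrypoint-initdb.d in a custom image
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<'SQL'
CREATE DATABASE base_test;
CREATE DATABASE base_production;
SQL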
In the docker-compose.yml file, add:
version: '3.2'
services:
  # Build Rails Container
  rails:
    build: ./rails
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    environment:
      PG_PASSWORD: foobarbat
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    volumes:
      - .:/rails/docker-vol

  # Build Postgres Container
  # (Use the same name as defined in Rails' Database config host)
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: foobarbat
      POSTGRES_USER: worker
      POSTGRES_DB: base
    ports:
      - '5432:5432'

volumes:
  backend:
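Bring it up the same way, then prepare the database (the migrate step assumes you've added migrations; the POSTGRES_DB setting already creates the base database on first boot):

docker-compose up --build

# once both containers are running, in another terminal:
docker-compose exec rails bundle exec rails db:migrate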