Some reasonable DevOps to keep your Elixir app running. This is a follow-up to deploying an Elixir app to a VPS.

At this point you should have an application that is accessible from the Internet and seemingly works just fine. However, there are still multiple problems to fix and improvements to make that will save you some headache in the future, should anything go wrong. Let’s go through them one at a time and quickly improve our setup.

Distillery 2.0

If you have multiple production or staging instances, or just don’t want to keep sensitive data in configs (that is, in code committed to the repo), you will want to use environment variables on your server instead. Distillery 2.0 came out not too long ago and provides an elegant solution to this - plus it’s always good to keep your libraries up to date.

Start by updating your mix.exs if needed:

{:distillery, "~> 2.0"}

and fetch the new dependency:

$ mix deps.get

To make the long story of Distillery 2.0’s new features (described in the link above) short, and take only the parts we’re currently interested in: we’ll add a config provider that reads environment variables at runtime and applies them as if they had been hard-coded in the config all along.

Let’s add to rel/config.exs, inside the :prod environment block:

set config_providers: [
  {Mix.Releases.Config.Providers.Elixir, ["${RELEASE_ROOT_DIR}/etc/config.exs"]}
]

set overlays: [
  {:copy, "rel/config/config.exs", "etc/config.exs"}
]

Now create a file rel/config/config.exs and put the environment-dependent parts of your config there as plain Elixir code - it will be executed at runtime and treated as if it had been in your regular old config all along.

For example, you probably have Endpoint and Repo configs similar to these in your config/prod.exs:

config :myapp, Myapp.Endpoint,
  http: [port: 8888],
  url: [host: "127.0.0.1", port: 8888],
  cache_static_manifest: "priv/static/manifest.json",
  secret_key_base: "rkb5NLnoB1jXI5hDYnpG9Q",
  server: true,
  root: "."

config :myapp, Myapp.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "elixir_app_user",
  password: "some_password",
  database: "elixir_app_production",
  hostname: "localhost",
  pool_size: 10

To have these hard-coded values fetched from the production server’s environment at runtime your rel/config/config.exs will look like this:

use Mix.Config

config :myapp, Myapp.Endpoint,
  secret_key_base: System.get_env("SECRET_KEY_BASE")

config :myapp, Myapp.Repo,
  username: System.get_env("DATABASE_USER"),
  password: System.get_env("DATABASE_PASS"),
  database: System.get_env("DATABASE_NAME")

You can choose which properties to include (or not) in rel/config/config.exs - they will simply be merged with the existing ones from config/prod.exs when the app runs on your production server.

To prevent some potentially hard-to-catch issues with missing environment variables, you can add a helper function that fails loudly when something is missing - you declare that function in rel/config/config.exs too, since it’s just plain Elixir:

use Mix.Config

defmodule Helpers do
  def get_env(name) do
    case System.get_env(name) do
      nil -> raise "Environment variable #{name} is not set!"
      val -> val
    end
  end
end

config :myapp, Myapp.Endpoint,
  secret_key_base: Helpers.get_env("SECRET_KEY_BASE")

config :myapp, Myapp.Repo,
  username: Helpers.get_env("DATABASE_USER"),
  password: Helpers.get_env("DATABASE_PASS"),
  database: Helpers.get_env("DATABASE_NAME")

That’s it - once you build the app again, it will be ready to use environment variables as runtime config without any issues.
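
For reference, rebuilding the release with Distillery typically looks like this (the deployment script later in this post runs the same command, just with --verbose added):

$ mix release --env=prod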

Non-root user

If you followed my previous deployment guide, the app is started by the root user and runs with root permissions. Not only does the app not require root permissions at all to work properly, running it as root also poses the risk of someone taking over the app and consequently gaining root access to the whole server. The simplest solution is to create a new user, grant it permissions to the app’s directory, switch to the new user and start the app. But we can still do better than that.

For now, let’s create a new user for our app on our production server with its home directory being our app’s directory:

$ useradd -d /home/myapp_directory myapp_user

and grant it permissions to the app’s directory:

$ sudo chmod a+rwx /home/myapp_directory

The user has no password (and you should disable password-based SSH login anyway), so in order to log in as the new user via SSH you have to create a file /home/myapp_directory/.ssh/authorized_keys containing your public SSH key. If you’re currently connected to your server via SSH as root, just copy the contents of /root/.ssh/authorized_keys.
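
A minimal sketch of that setup, run as root and assuming the paths used throughout this guide:

$ mkdir -p /home/myapp_directory/.ssh
$ cp /root/.ssh/authorized_keys /home/myapp_directory/.ssh/authorized_keys
$ chown -R myapp_user: /home/myapp_directory/.ssh
$ chmod 700 /home/myapp_directory/.ssh
$ chmod 600 /home/myapp_directory/.ssh/authorized_keys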

systemd

Right now we simply start the app, but we don’t monitor it in any way, so the whole Erlang VM could crash and we would have to restart it manually (once an angry client informs us). Additionally, we’re relying on the release’s very limited default logging - the 4 rotating files in the var/log directory inside our app’s release, with a 100KB limit per file, so the oldest log lines are simply deleted. We’ll use systemd to start our app as a service, have it monitored for us, automatically restarted if it crashes, and its logs properly stored.

Systemd does a lot more than that, but let’s focus just on the few features we need now.

Start by creating a .service file with your app’s name in /etc/systemd/system/, like this:

$ sudo touch /etc/systemd/system/myapp.service

Then edit it according to the following template:

[Unit]
Description=My App

[Service]
Type=simple
User=myapp_user
WorkingDirectory=/home/myapp_directory
ExecStart=/home/myapp_directory/bin/myapp foreground
ExecReload=/home/myapp_directory/bin/myapp restart
ExecStop=/home/myapp_directory/bin/myapp stop
Restart=on-failure
RestartSec=5
SyslogIdentifier=myapp
EnvironmentFile=/etc/environment
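
The unit above is enough for manual starts and crash recovery; if you also want the app to come back up after a server reboot, a common (optional) addition is an [Install] section:

[Install]
WantedBy=multi-user.target

followed by:

$ sudo systemctl enable myapp.service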

As you may have guessed, you should now add the environment variables we talked about earlier to /etc/environment.
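
Using the variable names from the config provider example earlier, /etc/environment might look something like this (the values are placeholders, of course):

SECRET_KEY_BASE=rkb5NLnoB1jXI5hDYnpG9Q
DATABASE_USER=elixir_app_user
DATABASE_PASS=some_password
DATABASE_NAME=elixir_app_production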

If the app is still running “the old way”, stop it:

$ /home/myapp_directory/bin/myapp stop

And now we can start our app using:

$ sudo systemctl start myapp.service
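
You can verify that it actually came up (and see the most recent log lines) with:

$ sudo systemctl status myapp.service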

At this point I suggest dropping edeliver altogether in favor of a deployment based on custom scripts - in my opinion it’s easier to create scripts tailored specifically to your app and your setup than to work your way through customizing edeliver. You will only need two simple scripts:

1. On your local machine - build_release.sh: build the release package, copy it to the server, and run the server-side script.

#!/bin/bash

mix release --env=prod --verbose
scp _build/prod/rel/myapp/releases/0.0.1/myapp.tar.gz root@my.production.server:~/
ssh root@my.production.server '/home/myapp_directory/run_release.sh'

Of course if you’re using Docker to build your release, this script will be a little different, but it’s still just a few simple bash commands anyway.

2. On your remote server - run_release.sh: stop myapp.service, extract the package, and start myapp.service (this will be expanded in the next few steps).

#!/bin/bash

sudo systemctl stop myapp.service
tar -xvf ~/myapp.tar.gz -C /home/myapp_directory
sudo systemctl start myapp.service

And make it executable:

$ chmod +x run_release.sh

At this point running ./build_release.sh should re-deploy your app properly. You can also view your app’s logs continuously using journalctl -fu myapp.

It’s worth mentioning that an even better solution for deployment would be setting up a continuous integration and delivery pipeline to automate things even further, but that is a topic far too broad for this article.

OSSEC

Even though our app now recovers from crashes, there may still be bugs that slipped past our probably non-existent QA and are potentially destructive to the data. There’s also the threat of our server being intentionally attacked - and we wouldn’t even know it happened. To handle these problems we’ll install OSSEC - an Open Source Host-based Intrusion Detection System - which will, more or less, analyze system logs (including our app’s logs) and send us an email if something suspicious or unexpected happens.

OSSEC really offers A LOT, but thankfully the installation is pretty simple. Just download the archive (check if there’s a more recent version!):

$ wget -U ossec https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz

Extract the archive and run the installation script:

$ tar -zxvf ossec-hids-2.8.3.tar.gz
$ cd ossec-hids-2.8.3
$ ./install.sh

In step 1 choose local, as this is the server you’re running your app on. In step 3.1 choose yes to email notifications and enter your email address. Choosing the default or “yes” for the remaining options should work just fine.

After the installation script finishes, you can start OSSEC using:

$ /var/ossec/bin/ossec-control start

Check your email inbox for your first alert - informing you that the OSSEC server has started.
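
If the email doesn’t show up, you can check whether the OSSEC processes are actually running:

$ /var/ossec/bin/ossec-control status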

Database backups

Hosting services like AWS offer fairly reliable cloud-based databases, but they’re still not guaranteed to never fail. Besides that, you might accidentally break the database yourself, your app might do something unexpected to the data, or a user might simply ask you to restore deleted data. Having some kind of automatic database backup system is pretty much always preferable to having nothing.

Let’s not reinvent the wheel here: if you’re using PostgreSQL (and you most likely are), just grab these scripts for Automated Backup on Linux. Copy them, for example, to /home/backups and edit pg_backup.config according to your app. Keep an eye on the total number of backups stored - you might need to adjust it later.

Remember to also make the script executable:

$ chmod +x /home/backups/pg_backup_rotated.sh
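
Before wiring it into cron, it’s worth running the script once by hand (as a user that can connect to PostgreSQL) and checking that backup files actually appear in the directory configured in pg_backup.config:

$ /home/backups/pg_backup_rotated.sh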

The last step is to have the rotated backups run automatically using cron. Open crontab in edit mode:

$ VISUAL=vim crontab -e

Paste this line at the very bottom; it will run the backup script every day at 6 AM, creating new backups and deleting outdated ones according to the config.

0 6 * * * /home/backups/pg_backup_rotated.sh

Then save and quit.

App backups

Besides backing up the database, you might also want to keep past app releases around - in case your newest release is faulty and you need to quickly get the app working again. We’ll amend the run_release.sh script to automatically put each new release in its own timestamped directory and then point a current symbolic link at the latest version, which systemd will use to run the app.

First stop myapp.service if it’s running:

$ sudo systemctl stop myapp.service

Then edit run_release.sh so that it extracts each new release into its own timestamped directory:

#!/bin/bash

export TIMESTAMP=`date +%Y%m%d_%H%M%S`
mkdir -p /home/myapp_directory/releases/$TIMESTAMP
cd /home/myapp_directory/releases/$TIMESTAMP
tar -xvf ~/myapp.tar.gz
sudo systemctl stop myapp.service
rm /home/myapp_directory/releases/current
ln -s /home/myapp_directory/releases/$TIMESTAMP /home/myapp_directory/releases/current
sudo chmod a+rwx /home/myapp_directory/releases/current
sudo systemctl start myapp.service

And change the directories specified in /etc/systemd/system/myapp.service to point at the new current directory:

WorkingDirectory=/home/myapp_directory/releases/current
ExecStart=/home/myapp_directory/releases/current/bin/myapp foreground
ExecReload=/home/myapp_directory/releases/current/bin/myapp restart
ExecStop=/home/myapp_directory/releases/current/bin/myapp stop

Now if the newest release is broken, you just have to stop the app via systemctl, point the current symlink at an older version’s directory, and start the app again.
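
A rollback might look something like this, assuming the previous release was extracted into a directory such as releases/20181105_120000 (a hypothetical timestamp):

$ sudo systemctl stop myapp.service
$ rm /home/myapp_directory/releases/current
$ ln -s /home/myapp_directory/releases/20181105_120000 /home/myapp_directory/releases/current
$ sudo systemctl start myapp.service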

Firewall

If your production server is running Ubuntu, you should already have Uncomplicated Firewall installed. A basic setup for it is really simple.

First make sure nobody enabled UFW in the past:

$ sudo ufw status

This should display Status: inactive. Then check which services have registered themselves with UFW:

$ sudo ufw app list

This will most likely display these (at least):

Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH

Now we’ll set these default rules:

$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing

And allow ports used by SSH and Nginx:

$ sudo ufw allow 'OpenSSH'
$ sudo ufw allow 'Nginx Full'

Lastly, start the firewall:

$ sudo ufw enable
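
You can double-check the resulting rules with:

$ sudo ufw status verbose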

SSL certificates

We can save ourselves the low-to-moderate pain of setting up SSL manually by using certbot. Even if you’ve already set it up, consider automating renewals with certbot, especially since Let’s Encrypt certificates are only valid for 90 days.

Install certbot:

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx

Since we’re using Nginx, setting up certbot is as simple as running:

$ sudo certbot --nginx

This also adds a cron job that will check for certificates nearing their expiration date and renew them if needed.
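
If you want to be sure that automatic renewal will work when the time comes, you can simulate it with:

$ sudo certbot renew --dry-run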

Describe your setup

Write down what you have done here, both so that other developers have a much easier time changing anything on the server, and because you yourself will have forgotten exactly what you did in a few months. Put these details in your app’s README.md.

Post by Konrad Piekutowski

Konrad has been working with AmberBit since the beginning of 2016, and his experience and expertise span Ruby, Elixir and JavaScript.