
Using Vault to store LUKS keys - Part One

This is a continuation of my previous post about the need to have an automated way of storing and pulling LUKS keys for my servers.

In this post, we are going to actually start getting things moving by setting up the Consul backend.

I will preface this by saying that I really would not recommend using this setup in production without some major reworking. For starters, the internal comms are currently not using TLS. That will be added very shortly, but at the moment this is more of a proof of concept.

Setting up the Consul cluster

You will be using more or less the exact same config across all of the Consul servers. This should also be quite easy to script / automate; it's something I intend on looking into in the future (a rough sketch of what that could look like is included just after the directory-creation step below).

Grab the latest version of Consul:

wget https://releases.hashicorp.com/consul/1.6.1/consul_1.6.1_linux_amd64.zip

Unzip it:

unzip consul_1.6.1_linux_amd64.zip

Move it to /usr/local/bin:

mv consul /usr/local/bin

Set the permissions:

chown root:root /usr/local/bin/consul

Test that it works:

consul --version

You should get an output similar to this:

[root@consul1 ~]# consul --version
Consul v1.6.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

You can also add autocomplete to your shell by running the following two commands:

consul -autocomplete-install
complete -C /usr/local/bin/consul consul

Now create the required directories:

mkdir -p /etc/consul.d/scripts
mkdir /var/consul
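As mentioned earlier, everything up to this point lends itself nicely to scripting. Here is a minimal sketch of what that could look like, simply bundling the commands so far; the version and paths match what is used in this post, so adjust for your environment and run it as root:

#!/usr/bin/env bash
set -euo pipefail

CONSUL_VERSION="1.6.1"

# Download and install the Consul binary
wget "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip"
unzip "consul_${CONSUL_VERSION}_linux_amd64.zip"
mv consul /usr/local/bin/
chown root:root /usr/local/bin/consul

# Optional: install shell autocomplete (takes effect in new shells)
consul -autocomplete-install

# Config and data directories
mkdir -p /etc/consul.d/scripts
mkdir -p /var/consul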

Now let's create the config file for Consul. It needs to live under /etc/consul.d/, since that is the config directory the systemd unit below points at; here is what I'm using:

{
    "bootstrap_expect": 3,
    "client_addr": "0.0.0.0",
    "datacenter": "Haynet",
    "data_dir": "/var/consul",
    "domain": "consul",
    "enable_script_checks": true,
    "dns_config": {
        "enable_truncate": true,
        "only_passing": true
    },
    "enable_syslog": true,
    "encrypt": "CHANGE ME",
    "leave_on_terminate": true,
    "log_level": "INFO",
    "rejoin_after_leave": true,
    "server": true,
    "start_join": [
        "192.168.0.191",
        "192.168.0.193",
        "192.168.0.72"
    ],
    "ui": true
}

You can generate a good encryption key with:

consul keygen

Then paste it in between the double quotes in the “encrypt” section.

You will want the same encryption key on all consul servers.
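If you would rather do this from the shell than a text editor, something along these lines works. Note that the file name /etc/consul.d/config.json is an assumption on my part (this post doesn't mandate a name, it just has to sit under /etc/consul.d/):

# Run this ONCE, on any one node, and note the output
consul keygen

# On every Consul server, swap the placeholder for that same key
sed -i 's|CHANGE ME|<paste-the-generated-key-here>|' /etc/consul.d/config.json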

Next, we need to create the systemd service file. Open up the following file in your favourite text editor:

/etc/systemd/system/consul.service

and paste in the following:

[Unit]
Description=Consul Startup process
After=network.target
 
[Service]
Type=simple
ExecStart=/bin/bash -c '/usr/local/bin/consul agent -config-dir /etc/consul.d/'
TimeoutStartSec=0
 
[Install]
WantedBy=default.target

The final bit of preparation we need to do on the servers is to open up a few firewall rules. If you are using CentOS 7 like me, you can just paste in the following; if you aren't, you can see which ports need to be allowed:

firewall-cmd --permanent --add-port=8300/tcp
firewall-cmd --permanent --add-port=8301/tcp
firewall-cmd --permanent --add-port=8301/udp
firewall-cmd --permanent --add-port=8302/tcp
firewall-cmd --permanent --add-port=8302/udp
firewall-cmd --permanent --add-port=8400/tcp
firewall-cmd --permanent --add-port=8500/tcp
firewall-cmd --permanent --add-port=8600/tcp
firewall-cmd --permanent --add-port=8600/udp
firewall-cmd --reload
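You can quickly confirm the ports are open afterwards with:

firewall-cmd --list-ports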

Let's start it up and see what happens!

systemctl daemon-reload
systemctl enable consul
systemctl start consul

If you used the same config as me, the UI will be running on port 8500 on the servers.
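Before moving on, it's worth a quick sanity check that the three servers have actually formed a cluster; the node names and addresses will obviously differ on your network:

# All three servers should be listed with a status of "alive"
consul members

# If anything looks off, check the service and follow its logs
systemctl status consul
journalctl -u consul -f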

That should be it for the Consul Servers.

Head over to the next section to set up your Vault servers.

This post is licensed under CC BY 4.0 by the author.