Velero backup to TrueNAS S3 with free Let's Encrypt certificate

In one of my previous blogposts, I explained how to install and configure Velero with TrueNAS as an S3 storage backend. That post focused on running everything on the local network. Since my K8s cluster runs on a VPS, I needed a way to make my TrueNAS S3 storage securely available from remote. I actually started writing this blogpost just a few days after my first one about Velero. Initially I simply wanted to use a self-signed certificate on my TrueNAS appliance, but I just couldn't get it working with Velero. For this to work, I needed a proper TLS certificate. Since I'm already successfully using Let's Encrypt in my Kubernetes cluster to dynamically create certificates for my services, I thought it would be a good idea to explore the Let's Encrypt options on TrueNAS Scale. This article explains how to do that using a free DynDNS account from dynv6.com and a free TLS certificate obtained via Let's Encrypt.
You can also use the steps explained in this article purely on your local network, or just to secure the S3 storage on your TrueNAS appliance. If you also want to access your S3 storage from remote, make sure to forward ports 9000 (API) and 9001 on your router to your TrueNAS server. It is advisable to use non-default ports and, if your router supports it, to restrict external access to certain IPs to keep script kiddies out.
It is important that you change the FQDN of your TrueNAS appliance under Network -> Global Configuration -> Settings before proceeding. In this example we will use the FQDN truenas1337.dynv6.net.
While TrueNAS Scale has built-in Let's Encrypt support under Credentials -> Certificates -> ACME DNS Authenticator, it currently only supports Cloudflare and Route53 (Amazon). Since I will be using dynv6.com, our approach will be script-based. There seem to be a couple of ways and scripts on the web to achieve this; I will be using the acme.sh script. First we need to obtain the script. Make sure you enter a valid email address.
curl https://get.acme.sh | sh -s email=valid.email@domain.com
I have a free DynDNS account at dynv6.com. The acme.sh script supports many other DNS service providers. See here for a list:
So if you don't have a DynDNS or DNS account yet, you can create one for free at https://dynv6.com. I will not go into the details here as it is pretty much self-explanatory.
Once you have the account, you need to create an API key at your DNS provider website. For dynv6.com you can go here:
Next, SSH to your TrueNAS appliance and set the API key you just created as a variable:
export DYNV6_TOKEN="XXXXXXXXXXXXXX"
Then we execute the acme.sh script with the FQDN and the API key that we created at dynv6.com
/root/.acme.sh/acme.sh --issue --dns dns_dynv6 -d truenas1337.dynv6.net
This will take a while, just be patient. :)
The output should tell you that you have successfully acquired a certificate:
[Fri Jul 14 20:30:03 CEST 2023] Your cert is in: /root/.acme.sh/truenas1337.dynv6.net_ecc/truenas1337.dynv6.net.cer
[Fri Jul 14 20:30:03 CEST 2023] Your cert key is in: /root/.acme.sh/truenas1337.dynv6.net_ecc/truenas1337.dynv6.net.key
[Fri Jul 14 20:30:03 CEST 2023] The intermediate CA cert is in: /root/.acme.sh/truenas1337.dynv6.net_ecc/ca.cer
[Fri Jul 14 20:30:03 CEST 2023] And the full chain certs is there: /root/.acme.sh/truenas1337.dynv6.net_ecc/fullchain.cer
Now that we have the certificate, we only need to import it into TrueNAS. For this we use another script.
git clone https://github.com/danb35/deploy-freenas
The script makes an API call to your TrueNAS appliance and installs the certificate. For this we also need an API key from the TrueNAS appliance. You can create one under API Keys by clicking the user icon in the upper right corner of the TrueNAS WebUI.
You can call it whatever you want. Make sure to copy the API key, it will only be displayed once. If you forgot to copy it, simply delete it and create a new one. :)
Next, create a configuration file called deploy_config. The git repo contains an example (deploy_config.example). We will copy and modify it.
cd deploy-freenas
cp deploy_config.example deploy_config
vi deploy_config
Remove the # in front of api_key and add the API key that you generated earlier on your TrueNas appliance.
Add # in front of password = YourSuperSecurePassword#@#$* to disable the password option.
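After these two edits, the relevant part of deploy_config should look roughly like this (the api_key value is a placeholder, and cert_fqdn uses this article's example FQDN; the exact set of options comes from deploy_config.example in the repo, so check yours against it):

```
[deploy]
# API key generated in the TrueNAS WebUI (placeholder value)
api_key = 1-XXXXXXXXXXXXXXXX
# password = YourSuperSecurePassword#@#$*
cert_fqdn = truenas1337.dynv6.net
```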
I ran into errors running the script: it was not able to determine the FQDN correctly, and acme.sh placed the certs in a directory called "/root/.acme.sh/truenas1337.dynv6.net_ecc" while the script did not expect the "_ecc" suffix. To fix this I simply set cert_fqdn = truenas1337.dynv6.net in deploy_config and created a symlink:
ln -sf /root/.acme.sh/truenas1337.dynv6.net_ecc /root/.acme.sh/truenas1337.dynv6.net
ln -sf /root/.acme.sh/truenas1337.dynv6.net_ecc /root/.acme.sh/truenas1337
Now we can call the script:
/root/.acme.sh/acme.sh --install-cert -d truenas1337.dynv6.net --reloadcmd "~/deploy-freenas/deploy_freenas.py"
If you want to be presented with a valid certificate when accessing TrueNAS from your local network, one more step is needed: add the FQDN with the local IP of your TrueNAS appliance to the /etc/hosts file on your local machine.
% cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.10.254 truenas1337.dynv6.net truenas1337
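The edit above can also be scripted idempotently. A minimal sketch using the example values from this article — it writes to a local demo file here; point HOSTS_FILE at /etc/hosts (and run with sudo) to do it for real:

```shell
# Append the TrueNAS entry to a hosts file only if the FQDN is not already there.
FQDN="truenas1337.dynv6.net"
IP="192.168.10.254"
HOSTS_FILE="hosts.demo"   # use /etc/hosts on your own machine
touch "$HOSTS_FILE"
grep -q "$FQDN" "$HOSTS_FILE" || printf '%s %s truenas1337\n' "$IP" "$FQDN" >> "$HOSTS_FILE"
```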
If you now access the TrueNAS WebUI from your local machine, it should present a valid certificate.
For S3, make sure to select the Let's Encrypt certificate under System Settings -> Services -> S3 configuration.
Finally, we need to add a cron job that triggers the certificate renewal on a weekly basis. For this, navigate to System Settings -> Advanced -> Cron Jobs -> Add. Dismiss the warning message by selecting "Close."
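In the Cron Job form, the command to schedule is acme.sh's built-in cron entry point; a sketch of what I would enter, assuming the default install location:

```
/root/.acme.sh/acme.sh --cron --home /root/.acme.sh
```

acme.sh stores the reloadcmd from the earlier --install-cert run, so a successful renewal should also re-trigger the deploy script and push the fresh certificate into TrueNAS.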
You can stop reading here if you only want to access TrueNAS and/or S3 from your local network. If you also want to access it from remote, simply add a port-forwarding rule in your router. Keep reading if you want to know how to configure Velero.
Before we install Velero, we should run a few tests to verify that we can reach the S3 storage of our TrueNAS appliance with a valid TLS certificate. From our remote VPS we can do a connection test using curl to make sure that we see a valid certificate.
curl -vvI https://truenas1337.dynv6.net
If that is successful, we can use the AWS CLI (awscli) to test access to the S3 storage.
sudo yum -y install awscli
To configure awscli we run
aws configure --profile=truenas
A wizard will now ask for the access key and secret key. We accept the defaults for region etc. by pressing ENTER:
[vagrant@rocky8-k3s ~]$ aws configure --profile=truenas
AWS Access Key ID [None]: B9A6AC13381DAC41BAD
AWS Secret Access Key [None]: <your secret key>
Default region name [None]:
Default output format [None]:
Then we try to list the velero bucket:
aws --profile=truenas --endpoint=https://truenas1337.dynv6.net:9000 s3 ls s3://velero
If that works, we can continue by installing Velero using Helm.
First we need to create a S3 credentials file .credentials-velero with the following content:
cat > .credentials-velero << EOF
[default]
aws_access_key_id = B9A6AC13381DAC41BAD
aws_secret_access_key = <your secret key>
EOF
Velero will use this to authenticate against the S3 bucket.
In order to interact with Velero we need to download the Velero binary. It is available for multiple platforms in the form of a tar file.
Latest release can be found here: https://github.com/vmware-tanzu/velero/releases/latest
I will use v1.10.2
wget https://github.com/vmware-tanzu/velero/releases/download/v1.10.2/velero-v1.10.2-linux-amd64.tar.gz
Extract it and move the "velero" binary to /usr/local/bin:
tar xvzf velero-v1.10.2-linux-amd64.tar.gz
sudo mv velero-v1.10.2-linux-amd64/velero /usr/local/bin/
Now we add the vmware-tanzu Helm repository and install the chart:
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace \
--set-file credentials.secretContents.cloud=./.credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=velero \
--set configuration.backupStorageLocation.config.region=None \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=https://truenas1337.dynv6.net:9000 \
--set configuration.defaultVolumesToFsBackup=true \
--set snapshotsEnabled=false \
--set deployNodeAgent=true \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:latest \
--set initContainers[0].imagePullPolicy=IfNotPresent \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--version 3.1.6
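The long list of --set flags can alternatively be kept in a values file. Here is a sketch of what I believe is the equivalent values.yaml for chart version 3.1.6 (a mechanical translation of the flags above, so double-check it against the chart's own values reference):

```yaml
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = B9A6AC13381DAC41BAD
      aws_secret_access_key = <your secret key>
configuration:
  provider: aws
  backupStorageLocation:
    name: default
    bucket: velero
    config:
      region: None
      s3ForcePathStyle: true
      s3Url: https://truenas1337.dynv6.net:9000
  defaultVolumesToFsBackup: true
snapshotsEnabled: false
deployNodeAgent: true
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
```

You would then install with: helm upgrade --install velero vmware-tanzu/velero --namespace velero --create-namespace -f values.yaml --version 3.1.6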
I'm not installing the latest version of Velero, since I'm not sure my Helm command-line switches will work with it. Feel free to try it out. :)
Finally, let's run a backup job:
$ ./velero backup create ghost-tls --include-namespaces ghost --wait
Backup request "ghost-tls" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
..
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe ghost-tls` and `velero backup logs ghost-tls`.
$ ./velero backup get
NAME        STATUS      ERRORS   WARNINGS   CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
ghost-tls   Completed   0        2          2023-07-14 23:25:57 +0200 CEST   29d       default            <none>
Sure, the effort is not exactly small, but now you have a TLS-secured S3 storage that you can use not only for Velero but also for other applications that support S3 storage.