It was finally time to replace my two old Odroid HC2s with something newer. I had them both running with a 3.5 inch 2TB hard drive each, and they ran really well for a long time. But what never really worked well was ZFS. For those who don't know ZFS: very briefly, it is a file system. Those who know it will now say: "Wait a moment, it is much more than just a file system!" And this is true. :) I don't want to go into depth here, but you can configure different RAID levels, caching, snapshots and replication with it. And mainly because of the replication it was very suitable for my use case. The Odroid HC2 only has a single SATA port and can therefore only handle a single hard drive. If you want to back up your data, you either have to use an external disk on the USB 2.0 port, which is of course much too slow, or you have to mirror (replicate) it to another device. ZFS replication was my first choice for this, and I used SANOID/SYNCOID for it. IMHO a very cool tool. It worked really well, but since the HC2 only has 2GB of RAM built in, it is just too slow for ZFS; ZFS simply needs a lot of RAM. At most I got a transfer rate of only 50MB/s. Had I used a different file system like EXT4 or XFS, I would have easily achieved 100MB/s, with the hard disk and the 1Gbit port being the bottleneck. Furthermore, I have a few Docker containers running, including a Teamspeak server, Wiki.js and some Grafana stuff to monitor my router and Homelab. So something new was needed.

I had been playing with the idea of buying an Odroid H2 for a while. Unlike the HC2, it has a quad-core Intel Celeron processor, two SATA and USB 3.0 ports, and can be expanded with RAM modules up to 32GB. I found a good offer and finally bought it. Now all I had to do was migrate my data and Docker containers.

On the HC2s I had OpenMediaVault installed, a very cool and simple NAS OS. It worked quite well, but I always wanted to use TrueNAS, which I already run on my ESXi server in my Homelab. There is TrueNAS Core, which is based on FreeBSD, and there is TrueNAS Scale, which is based on Debian Linux. Unfortunately, neither is (yet) available for ARM, so the HC2s were out of the question. Since I also wanted to use Kubernetes, I went ahead and installed TrueNAS Scale on the H2. It worked right away. I installed it on a USB 3.0 stick and initially attached only the hard disk from one of the HC2s. I was even able to import the ZFS pool I had created under OMV into TrueNAS Scale.

To create a mirrored pool I would need the disks from both HC2s, but since there is no official way to add the second disk to the pool from the GUI and I was a bit afraid to lose my data, I first had to sync my data from the H2 to an external USB disk on which I created a ZFS pool. Luckily, the H2 has a USB 3.0 port, so the sync went pretty fast: I simply plugged in the USB disk, created a zpool on it, used the replication feature of TrueNAS Scale, and after a couple of hours my 800GB of data were synced. Nice. Once I had my data on the USB disk, I simply put in the second disk, destroyed and recreated the local ZFS pool as a mirror, and synced my data back from the USB disk. Voila, we are done with part I.
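As a footnote to part I: both the syncoid replication between the two HC2s and the temporary zpool on the USB disk boil down to a single command each. A sketch (the dataset name and backup hostname are placeholders, not my actual config):

# replicate a dataset to the second HC2 via SSH
syncoid zpool0/data root@hc2-backup:zpool0/data
# create a single-disk pool on the external USB drive (here /dev/sdd)
zpool create usb-backup /dev/sdd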

Container Migration

Migrate my Teamspeak container

Let's start very simple by migrating the Teamspeak container. Actually, I have not truly migrated it: I didn't really care about my passwords, channels and all, so I simply spun up a new container. In case you want to migrate your Teamspeak data, simply copy the data folder from your old server to your new one and mount it in the container; a short sketch follows below. I logged in to the TrueNAS WebUI and went to Apps. If it is your first time accessing Apps, you will be prompted to select a ZFS pool where the apps will be stored.
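A minimal sketch of that copy, run on the new server (the source path and hostname are assumptions, adjust them to your setup):

# pull the Teamspeak data folder from the old server
rsync -av root@old-server:/srv/teamspeak3/ /mnt/zpool0/k8s/teamspeak3/

The target directory can then be bound as a Host Path Volume under "Storage" later on.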

Back in the Apps view, I selected my pool "zpool0" and clicked on "Launch Docker Image" in the upper right corner. This opens a new view where we can choose various options, beginning with the Application Name.

We leave "Container Entrypoint" as is and enter the following under "Container Environment Variables".

Then, under "Networking", we again leave everything as is and move on to "Port Forwarding". Here we need to tell TrueNAS which ports shall be forwarded to local ports. I just forwarded the default Teamspeak ports.

Under "Storage" we need to bind at least one local directory (Host Path Volume) to make our container persistent. I have created a ZFS dataset called k8s to hold all my Host Path Volumes.

Application Name: teamspeak3
Container Image: teamspeak
Container Environment Variables: TS3SERVER_LICENSE:accept
Port Forwarding: 9987:9987 UDP, 10011:10011 TCP, 30033:30033 TCP
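For reference, outside of TrueNAS these settings would roughly correspond to the following plain docker run command (the host path for the volume is an assumption based on my k8s dataset; /var/ts3server is the data directory of the official teamspeak image):

docker run -d --name teamspeak3 \
  -e TS3SERVER_LICENSE=accept \
  -p 9987:9987/udp -p 10011:10011 -p 30033:30033 \
  -v /mnt/zpool0/k8s/teamspeak3:/var/ts3server \
  teamspeak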

The rest of the options we leave untouched and finally click on "Save" to start the container. On the shell you can run kubectl commands like so:

sudo k3s kubectl get pods -A

Example

root@truenas[~]# k3s kubectl get pods -A
NAMESPACE                  NAME                                       READY   STATUS    RESTARTS   AGE
kube-system                openebs-zfs-node-4nxmw                     2/2     Running   7          7d23h
metallb-system             speaker-fjqng                              1/1     Running   7          7d23h
prometheus-operator        prometheus-operator-5c7445d877-ttmxm       1/1     Running   1          8d
kube-system                coredns-d76bd69b-t9mhl                     1/1     Running   0          8d
kube-system                intel-gpu-plugin-rxwl5                     1/1     Running   0          7d23h
metallb-system             controller-7597dd4f7b-kn2c9                1/1     Running   0          8d
cnpg-system                cnpg-controller-manager-854876b995-l5n2r   1/1     Running   0          8d
kube-system                openebs-zfs-controller-0                   5/5     Running   0          7d23h
ix-teamspeak3              teamspeak3-ix-chart-59779956f-b74zs        1/1     Running   0          7d22h
ix-piehole                 piehole-pihole-85d8595bbc-2kw4c            1/1     Running   0          7d22h
ix-docker-compose-wikijs   docker-compose-wikijs-0                    1/1     Running   0          24h

To check the logs of the pod run:

sudo k3s kubectl logs -n ix-teamspeak3 teamspeak3-ix-chart-59779956f-b74zs

Example

root@truenas[~]# sudo k3s kubectl logs -n ix-teamspeak3 teamspeak3-ix-chart-59779956f-b74zs | more
2022-11-26 18:00:14.110467|INFO    |ServerLibPriv |   |TeamSpeak 3 Server 3.13.7 (2022-06-20 12:21:53)
2022-11-26 18:00:14.110859|INFO    |ServerLibPriv |   |SystemInformation: Linux 5.10.142+truenas #1 SMP Mon Sep 26 18:20:46 UTC 2022 x86_64 Binary: 64bit
2022-11-26 18:00:14.110953|INFO    |ServerLibPriv |   |Using hardware aes
2022-11-26 18:00:14.164989|INFO    |DatabaseQuery |   |dbPlugin name:    SQLite3 plugin, Version 3, (c)TeamSpeak Systems GmbH
2022-11-26 18:00:14.166008|INFO    |DatabaseQuery |   |dbPlugin version: 3.11.1
2022-11-26 18:00:14.189378|INFO    |DatabaseQuery |   |checking database integrity (may take a while)
2022-11-26 18:00:14.447669|WARNING |Accounting    |   |Unable to open licensekey.dat, falling back to limited functionality
2022-11-26 18:00:14.469239|INFO    |Accounting    |   |Licensing Information
2022-11-26 18:00:14.469352|INFO    |Accounting    |   |licensed to       : Anonymous
2022-11-26 18:00:14.469418|INFO    |Accounting    |   |type              : No License
2022-11-26 18:00:14.469499|INFO    |Accounting    |   |starting date     : Tue Feb  1 00:00:00 2022
2022-11-26 18:00:14.469568|INFO    |Accounting    |   |ending date       : Thu Jul  1 00:00:00 2027
2022-11-26 18:00:14.469619|INFO    |Accounting    |   |max virtualservers: 1
2022-11-26 18:00:14.469672|INFO    |Accounting    |   |max slots         : 32
2022-11-26 18:00:15.559367|INFO    |              |   |Puzzle precompute time: 1019
2022-11-26 18:00:15.560746|INFO    |FileManager   |   |listening on 0.0.0.0:30033, [::]:30033
2022-11-26 18:00:15.570501|INFO    |Query         |   |Using a query thread pool size of 2
2022-11-26 18:00:15.671846|INFO    |VirtualServerBase|1  |listening on 0.0.0.0:9987, [::]:9987
2022-11-26 18:00:15.672418|INFO    |Query         |   |listening for query on 0.0.0.0:10011, [::]:10011
2022-11-26 18:00:15.672715|INFO    |CIDRManager   |   |updated query_ip_allowlist ips: 127.0.0.1/32, ::1/128,
2022-11-26 18:00:17.190187|INFO    |              |   |myTeamSpeak identifier revocation list was downloaded successfully - all related features are activated

If it is the first time that you have deployed the Teamspeak container, you will also see the login information in the logs. This was quite simple; now let's see how we can run docker-compose for the Wiki.js container.

Running Docker-Compose

I have a docker-compose file for my Wiki.js container. In order to run this under TrueNAS, we first need to add the TrueCharts catalog. These are unofficial apps that you can install in TrueNAS. The way this works is that we install the docker-compose app, which starts a pod into which we mount the local folder where the docker-compose file resides. Inside that pod, docker-compose is executed, so we are actually running Docker inside Docker. From a security perspective this is not best practice, but since this is only a private home server I don't mind.

Add the TrueCharts Catalog

Go to Apps and click on "Add Catalog" in the upper right corner.

Then we need to put in:

Catalog Name: truecharts
Repository: https://github.com/truecharts/catalog

On my system it took quite some time (15 minutes) until the catalog was finally added. I assume this is due to the fact that I use spinning disks for my zpool. It should be a lot faster on SSDs.

Install docker-compose App

Now back to Apps to install the docker-compose app from the new catalog. You will be prompted with options to select; I entered the following:

Application Name: docker-compose-wikijs
Timezone: Europe/Berlin
Docker Compose File: /mnt/docker-compose.yml -> The path of the docker-compose file as seen inside the pod
Host Path: /mnt/zpool0/Backup/Docker/wiki.js -> The local path on the TrueNAS server where the docker-compose file resides
Mount Path: /mnt -> The directory inside the docker-compose pod where the Host Path gets mounted
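Before we save, for reference: the docker-compose.yml at that Host Path follows the official Wiki.js example and defines two services, db and wiki (which is why the containers later show up as mnt-db-1 and mnt-wiki-1; the compose project is named after the /mnt mount path). A sketch, written as a shell heredoc; the password and image tags are placeholders:

cat > /mnt/zpool0/Backup/Docker/wiki.js/docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: wiki
      POSTGRES_PASSWORD: wikijsrocks
      POSTGRES_USER: wikijs
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/postgresql/data
  wiki:
    image: ghcr.io/requarks/wiki:2
    depends_on:
      - db
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: wikijs
      DB_PASS: wikijsrocks
      DB_NAME: wiki
    restart: unless-stopped
    ports:
      - "8081:3000"
volumes:
  db-data:
EOF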

The rest we just leave as it is and click on "Save". This will now start a docker-compose-wikijs pod. Within this pod, docker-compose will be executed and the two containers from my docker-compose.yml file will be started. So whenever we want to interact with the Wiki.js containers, we first have to exec into the docker-compose-wikijs-0 pod.

Backup Wiki.js SQL DB

Now, with the container running under TrueNAS, we can back up the database on the old HC2. For this we connect to the HC2 via SSH and run the following command:

docker exec db pg_dump wiki -U wikijs -F c > wikibackup.dump

Let's use scp to copy the DB dump from the HC2 to the new TrueNAS host:

scp wikibackup.dump root@192.168.2.2:~/wikibackup.dump

Restore Wiki.js SQL DB backup

Restoring the backup is a bit tricky. Remember, the actual Docker containers are running inside the docker-compose-wikijs-0 pod. So we first need to transfer the dump file into the pod:

k3s kubectl cp wikibackup.dump ix-docker-compose-wikijs/docker-compose-wikijs-0:/tmp

Then we exec into the pod and cd to the /tmp directory where we stored the dump file:

k3s kubectl exec -it -n ix-docker-compose-wikijs docker-compose-wikijs-0 -- /bin/bash
cd /tmp

Check the running containers:

docker ps -a

First we stop the wikijs container:

docker stop mnt-wiki-1

In the next steps we will first delete (drop) the old Wiki.js database, create a new one and then restore our dump:

docker exec -it mnt-db-1 dropdb -U wikijs wiki
docker exec -it mnt-db-1 createdb -U wikijs wiki
cat /tmp/wikibackup.dump | docker exec -i mnt-db-1 pg_restore -U wikijs -d wiki

And finally start our wikijs container:

docker start mnt-wiki-1

All done. We can access Wiki.js on port 8081. In my case that is:

http://192.168.2.2:8081

Scheduled backup from TrueNAS to USB disk

Once I had completed the migration to TrueNAS, I wanted to explore ways to sync my data back to my external USB disk. Once you have plugged in the disk, you can run:

dmesg -T | grep Attached

The last entry will be your external disk.

Example

[Sun Dec  4 14:46:49 2022] sd 3:0:0:0: [sdd] Attached SCSI disk

I formatted the disk with NTFS and mounted it read/write.
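A sketch of how that looks on the shell, assuming the disk shows up as /dev/sdd and already has a single partition (mkfs.ntfs is part of the ntfs-3g package):

# create an NTFS file system on the first partition
mkfs.ntfs -f /dev/sdd1
# create the mount point and mount the disk read/write
mkdir -p /mnt/usb
mount -t ntfs-3g /dev/sdd1 /mnt/usb

To make sure it is automatically mounted every time the server reboots, I added it to my /etc/fstab: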

echo "UUID=$(blkid -s UUID -o value /dev/sdd1)  /mnt/usb      fuseblk    defaults,errors=remount-ro 0       1" >> /etc/fstab

We can verify this by first unmounting the disk:

umount /mnt/usb

And then run:

mount -a

This command will mount all entries from fstab. And voila, the disk was mounted just fine.

Example

root@truenas[/var/log/jobs]# mount | grep sdd
/dev/sdd1 on /mnt/usb type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)

This time I wanted to check out the replication options in the WebUI under Data Protection. TrueNAS offers several ways to protect your data: you can create ZFS snapshots, replicate those, or use rsync. In my case I went with rsync, because I want the data on my USB disk to be mountable (readable) under Windows too. I configured the rsync task as follows.

It is important to select SSH as the mode. The rsync task is actually meant to sync to another host, but we can simply point it at our own TrueNAS host. In my case I only had to create an SSH keypair:

ssh-keygen -t rsa -b 4096

Copy the public key to the host:

ssh-copy-id root@192.168.2.2

and it was ready to go.
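Under the hood, such a task essentially runs an rsync over SSH, roughly equivalent to something like this (a sketch; the exact flags TrueNAS uses may differ):

rsync -av -e ssh /mnt/zpool0/ root@192.168.2.2:/mnt/usb/backup/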

If you do get errors during the rsync task, you can check out the job logs in the following directory:

/var/log/jobs
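To follow the most recent job log, a small shell one-liner helps (this is just plain shell, not a TrueNAS feature):

# find the newest file in the job log directory and follow it
tail -f /var/log/jobs/$(ls -t /var/log/jobs | head -n 1)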

In my case the job kept failing, and the WebUI does not really tell you what the actual problem is. So I looked into the latest log file and saw that the job was failing at the SSH host key check, because I had changed the IP of my TrueNAS host after setting up the key. I simply deleted the known_hosts file:

rm -rf ~/.ssh/known_hosts

Then I connected once more via SSH, accepted the new host key, and all was fine:

ssh 192.168.2.2

After that the rsync task was running fine. Since there is no real progress indicator in the WebUI, I just went ahead and ran:

watch -n2 df -h /mnt/usb

to get a clue about how fast or slow the sync was performing. After 35 minutes it had transferred only 31GB; my sync was running very slowly, at only ~15MB/s. Digging further, I noticed that there were multiple rsync processes running.
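You can list them like this (the [r] in the pattern simply keeps the grep itself out of the output):

ps aux | grep '[r]sync'

To kill the rsync task I ran: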

systemctl restart middlewared

There was no way to cancel the task in the WebUI, maybe because I had started it manually? After this I first disabled the task in the WebUI and then ran a manual rsync to see if the performance would improve:

rsync -av /mnt/zpool0 /mnt/usb/backup/

In a second SSH session I ran top to see what was eating up my resources, and I noticed a couple of Python processes producing a very high load. I guess these are caused by the TrueCharts catalog that I added earlier to run my containers. This definitely needs investigation, since it has a huge impact on the overall performance, and it could also be the root cause of my very slow rsync job. There is also a lot of activity regarding this on the TrueNAS forum:

"https://www.truenas.com/community/threads/truecharts-high-cpu-utilization-when-cataloging.99300/page-2"

Unfortunately, there is no real solution yet. I will keep an eye on this, and for now I have simply removed the catalog. After removing it, my transfer speed went up to 90MB/s.

Overall, I have to say the migration was pretty smooth. In one of my next posts I will show you how I migrated my Grafana setup.

Sources

Transfer Wiki.js between servers
How to migrate your installation to a new server