The State of Synology
Synology and I first crossed paths around 2014 or 2015. Even without having laid eyes on its operating system, I was enamored with its sleek form factor. At the time, I didn't have space for a 42U server rack at home, so it seemed like a flawless solution for data storage. However, being fresh into the workforce, newly married, and saving for a house meant a Synology device, along with hard drives, was a distant dream. So my journey with this tech took a detour. I worked my way through tons of random hardware over the next few years, and I'll spare you the details of the several machines that came and went, along with a 42U server rack that I really didn't need (or have room for). The important thing to note here is that in the couple of years leading up to 2018, I went from running several servers and VMs to needing just one server with Docker on it. So it was time to finally pony up the dough and get myself a Synology.
I began with a DS918+, and I couldn't have been more thrilled. If I recall correctly, it was the summer of 2018 when I bought it. I had spent a great deal of time working on my friend's DS1815+, so I knew from the moment it arrived how I wanted to set it up. I filled it with 4x 2TB WD Reds (5400RPM CMR; this was before all the SMR shenanigans took place), and my journey began. It came loaded with some amazing and useful features: DDNS, VPN server, Docker, and Let's Encrypt, not to mention their stellar suite of apps like Synology Drive, Synology Office, DS Video, DS Photo (now Synology Photos), and many more. DSM's developers had truly considered all angles, and I was overjoyed by its ease of use. I was so delighted that I later upgraded to the DS1821+.
But then, our paths began to diverge.
Before upgrading to the DS1821+, tension was already mounting between DSM and me. I sought more from my Synology, but it felt like Synology was the one holding me back. At the time, I still needed more storage and more CPU, so the upgrade to the DS1821+ was necessary. However, regardless of which device I was using, DSM is DSM, and the difficulties were the same. The friction started with SSL certificates. If you wished to use a wildcard certificate, you could only do it with Synology's pre-loaded domain, not your custom one. The reasoning behind such a limitation remains elusive to me, and it forced me to find a workaround. Eventually, I landed on the SWAG Docker container from LinuxServer.io. This solution addressed my SSL certificate issue, but it also led me to abandon Synology's built-in reverse proxy, requiring me to pen a myriad of Nginx config files. The built-in reverse proxy lacked support for advanced configurations, but it had been fulfilling its role, and the SWAG container demanded more work than I'd hoped for. Thankfully, I later discovered NGINX Proxy Manager, though it brought its own set of complications that I won't get into here.
NGINX Proxy Manager (NPM) works like a dream when it comes to setting up a reverse proxy with an SSL certificate. Yet Synology reserves ports 80 and 443 for its own purposes. I could have chosen different ports and rerouted all port 80 and 443 requests on my router to those ports on the Synology, but I wasn't keen on it. After some fiddling, I shifted Synology's reserved ports from 80 and 443 to 81 and 444, respectively, and NPM started running smoothly. This needs to be redone after every Synology update, though, so using different ports might have been the better option.
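Since the change has to be reapplied after updates, it's worth scripting as a boot task. Here's a rough sketch of the approach, with the caveat that the nginx template paths are assumptions from my own DSM 7 install (verify them on your device before running anything as root), and the blunt `sed` substitutions could match more than you intend:

```shell
#!/bin/sh
# Sketch of a boot task: move DSM's built-in nginx off ports 80/443 to 81/444
# so NPM can bind the standard ports. Template paths are assumptions from my
# DSM 7 install; check yours before running this as root.

relocate_ports() {
  # Rewrite the default HTTP/HTTPS listen directives in one template file
  sed -i \
    -e 's/listen 80/listen 81/g' \
    -e 's/listen \[::\]:80/listen [::]:81/g' \
    -e 's/listen 443/listen 444/g' \
    -e 's/listen \[::\]:443/listen [::]:444/g' \
    "$1"
}

for tpl in /usr/syno/share/nginx/server.mustache \
           /usr/syno/share/nginx/DSM.mustache \
           /usr/syno/share/nginx/WWWService.mustache; do
  [ -f "$tpl" ] && relocate_ports "$tpl"
done

# Restart nginx so the regenerated config takes effect (DSM 7 service tool;
# the guard lets the script run harmlessly on machines without it)
command -v synosystemctl >/dev/null 2>&1 && synosystemctl restart nginx
```

Scheduled as a triggered "Boot-up" task running as root, this keeps the workaround in place across reboots, though an update can still replace the templates until the task runs again.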
Another small point of frustration is their LDAP server component. Quite simply, users created via the LDAP server are unable to SSH into the machine. This should be a permission that an administrator can manage; it shouldn't be at Synology's discretion who can connect to the machine and how. And if this isn't deliberate, then the LDAP server is a half-baked idea that was better off left out of their Package Center.
The most significant gripe I had was with Docker. I haven't used Synology's native Docker UI in years because it doesn't support the features I rely on. Instead of simply ignoring those features, it breaks: any modification made in the UI fails to preserve an unsupported configuration, disrupting my containers. To be precise, anything beyond a bridge or host network causes the UI to malfunction. When I used one container as a VPN gateway for another container, or when I used a Macvlan network to give a container its own local IP address, the UI fell over. The command-line interface (CLI) was my go-to for most tasks, but it would have been convenient to monitor my containers in the UI. The silver lining to Synology's poor execution of their Docker UI is that I discovered Portainer. I've been using Portainer for the past few years now, and it's been fantastic. It is immensely more powerful than Synology's Docker UI, and it has an ease of use that I would not normally expect from such a technical application. It even resolves another significant issue Synology imposed on Docker users: API inaccessibility. By restricting access to the Docker API, Synology diminished Docker's "power user" experience. However, Portainer has its own API, providing an easy workaround if you use it to manage your containers. You may have noticed that I sound like a Portainer fan. Well, I am, and we'll talk about it in more depth another day.
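For reference, the Macvlan setup that trips up the UI is nothing exotic; it's just a couple of standard Docker CLI commands. The subnet, gateway, parent interface, IP address, and the Pi-hole image here are all illustrative values, not anything specific to my setup:

```shell
# Create a Macvlan network attached to the host's NIC so containers can
# appear as first-class devices on the LAN (values below are examples --
# match your own subnet, gateway, and interface name)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_macvlan

# Run a container with its own IP on the LAN (address and image are
# illustrative; any container that benefits from a LAN identity works)
docker run -d --name pihole \
  --network lan_macvlan --ip 192.168.1.50 \
  pihole/pihole
```

Run from SSH this works fine on a Synology; it's only the native Docker UI that can't cope with the resulting configuration.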
Using a container as a VPN gateway for another container proved more challenging than anticipated. As predicted, this configuration rendered the Docker UI a virtual paperweight, but it also failed outright at first because the TUN kernel module wasn't loaded. I had to set up a scheduled task to load the TUN kernel module at boot. And, as is tradition when you want to do anything unique on a Synology NAS, it has to run at every boot, because the change doesn't persist across restarts.
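The scheduled task itself is only a few lines. A sketch of what mine looks like, with the caveat that the module path is what I found on my unit and may differ on yours:

```shell
#!/bin/sh
# Boot task (run as root via Task Scheduler): make the TUN device available
# so one container can act as a VPN gateway for others.

# Create the device node if DSM hasn't already
if [ ! -c /dev/net/tun ]; then
  mkdir -p /dev/net
  mknod /dev/net/tun c 10 200
  chmod 600 /dev/net/tun
fi

# Load the tun kernel module if it isn't loaded yet. The path is an
# assumption from my device; locate yours with:
#   find /lib/modules -name 'tun.ko'
lsmod | grep -q '^tun ' || insmod /lib/modules/tun.ko
```

In DSM's Task Scheduler this goes in as a triggered task with the "Boot-up" event, running as root, which is the only way I found to make it survive restarts.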
Hopefully, I've painted a clear picture. DSM 7.2 was released recently, featuring a new Docker UI called "Container Manager". However, I'll never know whether it works for me, since I use containers as network devices and rely on Macvlan networks, and I don't want to risk breaking my configuration just to see if Synology has improved their Docker experience.
While I understand that the things I'm trying to achieve aren't supported, and that I'm pushing the device's limits, what baffles me is why Synology would purposely impede my progress. A warning when you SSH into your Synology is acceptable, so why can't I just get a warning about exposing my Docker API, or that Macvlan networks are unsupported? I'm not asking Synology to adjust their platform to cater to these extended use cases, but I do think they should take a step back. Even macOS doesn't obstruct me to this extent when I want to do things it dislikes.
In conclusion, the DS1821+ will be my last Synology. As they continue to tighten the screws and veer toward vendor lock-in, the high price and limited feature set are beginning to outweigh the benefits. Many of the features I wanted most in DSM feel half-baked, and none were robust enough to keep me from resorting to third-party applications. The thought of upgrading to a costlier, more powerful device only to face these same issues, and then being compelled to buy their expensive proprietary hard drives, is not one I can stomach. Synology seems to be entering a weird phase where they're pricing themselves out of home labs while simultaneously nerfing their features to the point where they wouldn't work in an enterprise environment. I have already migrated all of my containers off the Synology and onto a bare-metal machine that I built. I'm still using the Synology for storage, and that won't change for a while, but I expect that a combination of Ubuntu Server, ZFS, and Docker (managed by Portainer, of course!) will not only vastly outperform the Synology in raw CPU performance but, thanks to DSM's limitations, will somehow also be the easier machine to manage.
After all, I would rather spend my time optimizing my system and progressing forward than constantly wrestling with this digital helicopter parent that keeps telling me "no"... wouldn't you?