Tag: synology

  • Home Lab – No More iSCSI – Backup Plans

    This post is part of a short series on migrating my home hypervisor off of iSCSI.

    It is worth noting (and quite ironic) that I went through a fire drill last week when I crashed my RKE clusters. That event gave me a fresh look at which data is actually important to me.

    How much redundancy do I need?

    I have been relying primarily on the redundancy of the Synology for a bit too long. The volume can survive a single disk failure and the Synology has been very stable, but that does not mean I should leave things as they are.

    There are many layers of redundancy, and for a home lab, it is about making decisions as to how much you are willing to pay and what you are willing to lose.

    No Copy, Onsite Copy, Offsite Copy

    I prefer not to spend a ton of time thinking about all of this, so I created three “buckets” for data priority:

    • No Backup: Synology redundancy is sufficient. If I lose it, I lose it.
    • Onsite Copy: Create another copy of the data somewhere at home. For this, I am going to attach a USB enclosure with a 2TB disk to my Synology and set up USB copy tasks in Diskstation Manager (DSM).
    • Offsite Copy: Ship the data offsite for safety. I have been using Backblaze B2 buckets and the DSM’s Cloud Sync for personal documents for years, but the time has come to scale up a bit.

    It is worth noting that some things may be bucketed into both Onsite and Offsite, depending on how critically I need the data. With the inventory I took over the last few weeks, I had some decisions to make.

    • Domain Controllers -> Onsite copy for sure. I am not yet sure if I want to add an Offsite copy, though: there isn’t enough on the domain that it couldn’t be rebuilt quickly, and there are really only a handful of machines on it. It just makes managing Windows Servers much easier.
    • Kubernetes NFS Data -> I use nfs-subdir-external-provisioner to provide persistent storage for my Kubernetes clusters. I will certainly do Onsite copies of this data, but for the most important workloads (such as this blog), I will also set up an Offsite transfer.
    • SQL Server Data -> The SQL Server data is being stored on an iSCSI LUN, but I configured regular backups to go to a file share on the Synology. From there, Onsite backups should be sufficient.
    • Personal Stuff -> I have a lot of personal data (photos, financial data, etc.) stored on the Synology. That data is already encrypted and sent to Backblaze, but I may add another layer of redundancy and do an Onsite copy of them as well.

    Solutioning

    Honestly, I thought this would be harder, but Synology’s DSM and available packages really made it easy.

    1. VM Backups with Active Backup for Business: I installed Active Backup for Business, set up a connection to my Hyper-V server, and picked the machines I wanted to back up… It really was that simple. I still need to test a recovery, though, on a test VM.
    2. Onsite Copies with USB Copy: I plugged an external HD into the Synology, which was immediately recognized, and a file share was created for it. I installed the USB Copy package and started configuring tasks. Basically, I can set up copy tasks to move data from the Synology to the USB drive as desired, with various settings such as incremental or versioned backups, triggers, and file filters.
    3. SQL Backups: I had to refresh my memory on scheduling backups in SQL Server (a rough sketch of the idea is below this list). Once I had that done, I just made sure to back them up to a share on the Synology. From there, USB Copy took care of the rest.
    4. Offsite: As I mentioned, I have had Cloud Sync running to Backblaze B2 buckets for a while. All I did was expand my copying. Cloud Sync offers some of the same flexibility as USB Copy, but having well-structured file shares for your data makes it easier to select and push data as you want it.
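
    For the SQL backup piece, the general shape of a full backup written to a Synology share can be sketched with the SqlServer PowerShell module’s Backup-SqlDatabase cmdlet. The instance, database, and share names here are placeholders, and the actual scheduling would be handled by SQL Server Agent or Task Scheduler:

    # Minimal sketch: full backup of one database to a Synology file share.
    # "SQL01", "BlogDb", and the UNC path are placeholders, not my real values.
    Import-Module SqlServer

    Backup-SqlDatabase -ServerInstance "SQL01" `
        -Database "BlogDb" `
        -BackupFile "\\synology\sqlbackups\BlogDb_$(Get-Date -Format yyyyMMdd).bak" `
        -CompressionOption On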

    Results and What’s next

    My home lab refresh took me about two weeks, though the actual work was a few evenings spread across that span. What I am left with is a much more performant server. While I still store data on the Synology via NFS and iSCSI, it is only smaller pieces that are less reliant on fast access. The VM disks live on an SSD RAID array on the server, which gives me added stability and less thrashing of the Synology and its SSD cache. Nowhere is this more evident than in my average daily SSD temperature, which has dropped 12°F over the last two weeks.

    What’s next? I will be taking a look at alternatives to Rancher Kubernetes Engine. I am hoping to find something a bit more stable and secure to manage.

  • Home Lab – No More iSCSI: Transfer, Shutdown, and Rebuild

    This post is part of a short series on migrating my home hypervisor off of iSCSI.

    • Home Lab – No More iSCSI: Prep and Planning
    • Home Lab – No More iSCSI: Transfer, Shutdown, and Rebuild (this post)
    • Home Lab – No More iSCSI: Backup plans (coming soon)

    Observations – Migrating Servers

    The focus of my hobby time over the past few days has been moving production assets to the temporary server. Most of it is fairly vanilla, but I have a few observations worth noting.

    • I forgot how easy it is to replicate and fail over VMs with Hyper-V. Sure, I could have tried a live migration, but creating a replica, shutting down the machine, and failing over was painless (a rough sketch of the cmdlets involved follows this list).
    • Do not forget to provision an external virtual switch on your Hyper-V servers. Yes, it sounds stupid, but I dove right into setting up the temporary server as a replica server, and upon trying to fail over, I realized that the machine on the new server did not have a network connection.
    • I moved my Minio instance to the Synology: I originally had my Minio server running on an Ubuntu VM on my hypervisor, but decided moving the storage application closer to the storage medium was generally a good idea.
    • For my Kubernetes nodes, it was easier to provision new nodes on the temp server than it was to do a live migration or planned failover. I followed my normal process for provisioning new nodes and decommissioning old ones, and voilà, my production cluster is on the temporary server. I will simply reverse the process for the transfer back.
    • I am getting noticeably better performance on the temporary server, which has far less compute and RAM, but the VMs are on local disks. While the Synology has been rock solid, I think I have been throwing too much at it, and it can slow down from time to time.
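
    For the curious, the replicate-and-failover flow boils down to a handful of Hyper-V cmdlets. This is only a sketch: the VM and server names are placeholders, and it assumes the temporary server has already been configured to accept replication.

    # On the original hypervisor: replicate the VM to the temporary server.
    Enable-VMReplication -VMName "DC01" -ReplicaServerName "temp-hv" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "DC01"

    # When the initial copy is done: shut the VM down and do a planned failover.
    Stop-VM -VMName "DC01"
    Start-VMFailover -VMName "DC01" -Prepare                    # run on the primary
    Start-VMFailover -VMName "DC01" -ComputerName "temp-hv"     # run against the replica
    Complete-VMFailover -VMName "DC01" -ComputerName "temp-hv"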

    Let me be clear: My network storage is by no means bad, and it will be utilized. But storing the primary vhdx files for my VMs on the hypervisor provides much better performance.

    Shut It Down!

    After successfully moving my production assets over to the temporary server, it was time to shut the original hypervisor down. I shut down the VMs that remained on it and attempted to copy them to a network drive on the Synology. That was a giant mistake.

    Those VM files already live on the Synology as part of an iSCSI volume. By trying to pull those files off of the iSCSI drive and copy them back to the Synology, I was basically doing a huge file copy (like, 600+ GB huge) without the systems really knowing it was a copy. As you can imagine, the performance was terrible.

    I found a 600GB SAS drive that I was able to plug into the old hypervisor, and I used that as a temporary location for the copy. Even with that change, the copy took a while (I think about 3 hours).

    Upgrade and Install

    I mounted my new SSDs (Samsung EVO 1TB) in some drive trays and plugged them into the server. A quick boot into the Smart Storage Administrator let me set up a new drive array. While I thought about using RAID 0 and giving myself 2 TB of space, I went with the safe option and used RAID 1.

    Having configured the temporary server with Windows Hyper-V Server 2019, the process of doing it again was, well, pretty standard. I booted to the USB stick I created earlier for Hyper-V Server 2019 and went through the paces. My domain controller was still live (thanks, temporary server!), so I was able to join the machine to the domain and then perform all of the management via the Server Manager tool on my laptop.

    Moving back in

    I have the server back up with a nice new 1TB drive for my VMs. That’s a far cry from the 4 TB of storage I had allocated on the SAN target on the Synology, so I have to be more careful with my storage.

    Now, if I set a Hyper-V disk to, say, 100 GB, Hyper-V does not actually provision a 100 GB file up front: the vhdx file grows over time. But that does not mean I should just mindlessly provision disk space on my VMs.
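
    That behavior comes from dynamically expanding disks, which is what New-VHD gives you unless you ask for a fixed disk. A quick illustration (path and size are made up):

    # Dynamically expanding: the vhdx starts tiny and grows toward 100 GB as the
    # guest writes data. The path here is just for illustration.
    New-VHD -Path "C:\VMs\k8s-node01.vhdx" -SizeBytes 100GB -Dynamic

    # A fixed disk, by contrast, would claim the full 100 GB immediately:
    # New-VHD -Path "C:\VMs\k8s-node01.vhdx" -SizeBytes 100GB -Fixed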

    Looking at my usage, 50 GB is more than enough for my Kubernetes node disks. All persistent storage for those workloads is handled by an NFS provisioner, which configures shares on the Synology. As for the domain controllers, I am able to run with minimal storage because, well, it is a tiny domain.

    The problem children are Minio and my SQL Server Databases. Minio I covered above, moving it to the Synology directly. SQL Server, however, is a different animal.

    Why be you, when you can be new!

    I already had my production SQL instance running on another server. Rather than move it around and then mess with storage, I felt the safer solution was to provision a new SQL Server instance and migrate my databases. I only have 4 databases on that server, so moving databases is not a monumental task.
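
    The move itself can be as simple as a backup and restore per database. A rough sketch with the SqlServer PowerShell module, using placeholder instance names, database, and path:

    Import-Module SqlServer

    # Back up the database from the old instance to a share on the Synology...
    Backup-SqlDatabase -ServerInstance "OLDSQL" -Database "BlogDb" `
        -BackupFile "\\synology\sqlbackups\BlogDb_migrate.bak"

    # ...then restore it onto the new instance.
    Restore-SqlDatabase -ServerInstance "NEWSQL" -Database "BlogDb" `
        -BackupFile "\\synology\sqlbackups\BlogDb_migrate.bak" -ReplaceDatabase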

    A new server affords me two things:

    1. Latest and greatest versions of Windows and SQL Server.
    2. Minimal storage on the hypervisor disk itself. I provisioned only about 80 GB for the main virtual disk. This worked fine, except that I ran into a storage compatibility issue that needed a small workaround.

    SMB 3.0, but only certain ones

    My original intent was to create a virtual disk on a network share on the Synology, and mount that disk to the new SQL Server VM. That way, to the SQL Server, the storage is local, but the SQL data would be on the Synology.

    Hyper-V did not like this. I was able to create a vhdx file on a share just fine, but when I tried to add it to a VM using Add-VMHardDiskDrive, I got the following error:

    Remote SMB share does not support resiliency.
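
    For context, this is roughly the sequence that hit the error; the share path, VM name, and size are placeholders:

    # Creating the vhdx on the Synology SMB share works fine...
    New-VHD -Path "\\synology\vhdx\sqldata.vhdx" -SizeBytes 300GB -Dynamic

    # ...but attaching it to the VM is where the resiliency error above appears.
    Add-VMHardDiskDrive -VMName "SQL02" -Path "\\synology\vhdx\sqldata.vhdx"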

    A quick Google search turned up this Spiceworks question, where the only answer suggests that the Synology SMB 3.0 implementation is Linux-based, whereas Hyper-V expects features of the Windows-based implementation that the Linux one does not provide.

    While I am usually not one to take one answer and call it fact, I also didn’t want to spend too much time getting into the nitty gritty. I knew it was a possibility that this wasn’t going to work, and, in the interest of time, I went back to my old pal iSCSI. I provisioned a small iSCSI LUN (300 GB) and mounted it directly in the virtual machine. So now my SQL Server has a data drive that uses the Synology for storage.
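
    Mounting the LUN inside the guest is straightforward with the iSCSI cmdlets. A minimal sketch, assuming the Synology answers at a placeholder address and the MSiSCSI service is running in the VM:

    # Point the initiator at the Synology and connect to the target persistently.
    New-IscsiTargetPortal -TargetPortalAddress "10.0.0.23"
    $target = Get-IscsiTarget
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

    # Initialize, partition, and format the new disk for SQL Server data.
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter S -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData"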

    And we’re back!

    Moves like this provide an opportunity for consolidation, updates, and improvements, and I seized some of those opportunities:

    • I provisioned new Active Directory Domain controllers on updated operating systems, switched over, and deleted the old one.
    • I moved Minio to my Synology, and moved Hashicorp Vault to my Kubernetes cluster (using Minio as a storage backend). This removed 2 virtual machines from the hypervisor.
    • I provisioned a new SQL Server and migrated my production databases to it.
    • Compared to the rat’s nest of network configuration I had, the networking on the hypervisor is much simpler:
      • 1 standard NIC with a static IP so that I can get in and out of the hypervisor itself.
      • 1 teamed NIC with a static IP attached to the Hyper-V Virtual Switch (a rough sketch of that setup follows this list).
    • For the moment, I did not bring back my “non-production” cluster. It was only running test/stage environments of some of my home projects. For the time being, I will most likely move these workloads to my internal cluster.
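
    For reference, one way to build that teamed, switch-attached NIC on Hyper-V Server 2019 is Switch Embedded Teaming; the adapter names and IP below are placeholders rather than my actual values:

    # Create an external switch backed by two teamed ports (SET), and keep a
    # management vNIC on it with a static IP.
    New-VMSwitch -Name "External" -NetAdapterName "NIC2","NIC3" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $true

    New-NetIPAddress -InterfaceAlias "vEthernet (External)" `
        -IPAddress 10.0.0.20 -PrefixLength 24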

    I was able to shut down the temporary server, meaning, at least in my mind, I am back to where I was. However, now that I have things on the hypervisor itself, my next step is to ensure I am appropriately backing things up. I will finish this series with a post on my backup configuration.

  • Installing Minio on a Synology Diskstation with Nginx SSL

    In an effort to get rid of a virtual machine on my hypervisor, I wanted to move my Minio instance to my Synology. Keeping the storage interface close to the storage container helps with latency and is, well, one less thing I have to worry about in my home lab.

    There are a few guides out there for installing Minio on a Synology. Jaroensak Yodkantha walks you through the full process of setting up the Synology and Minio using a docker command line. The folks over at BackupAssist show you how to configure Minio through the Diskstation Manager web portal. I used the BackupAssist article to get myself started, but found myself tweaking the setup because I want to have SSL communication available through my Nginx reverse proxy.

    The Basics

    Prep Work

    I went into the Shared Folder section of the DSM control panel and created a new shared folder called minio. The settings on this share are pretty much up to you, but I did this so that all of my Minio data was in a known location.

    Within the minio folder, I created a data folder and a blank text file called minio. Inside the minio file, I set up my Minio configuration:

    # MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
    # This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
    # Omit to use the default values 'minioadmin:minioadmin'.
    # MinIO recommends setting non-default values as a best practice, regardless of environment
    
    MINIO_ROOT_USER=myadmin
    MINIO_ROOT_PASSWORD=myadminpassword
    
    # MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
    
    MINIO_VOLUMES="/mnt/data"
    
    # MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server
    # MinIO assumes your network control plane can correctly resolve this hostname to the local machine
    
    # Uncomment the following line and replace the value with the correct hostname for the local machine.
    
    MINIO_SERVER_URL="https://s3.mattsdatacenter.net"
    MINIO_BROWSER_REDIRECT_URL="https://storage.mattsdatacenter.net"

    It is worth noting the URLs: I want to put this system behind my Nginx reverse proxy and let it do SSL termination, and in order to do that, I found it easiest to use two domains: one for the API and one for the Console. I will get into more details on that later.

    Also, as always, change your admin username and password!

    Setup the Container

    Following the BackupAssist article, I installed the Docker package on to my Synology and opened it up. From the Registry menu, I searched for minio and found the minio/minio image:

    Click on the row to highlight it, and click on the Download button. You will be prompted for the label to download; I chose latest. Once the image is downloaded (you can check the Image tab for progress), go to the Container tab and click Create. This will open the Create Wizard and get you started.

    • On the Image screen, select the minio/minio:latest image.
    • On the Network screen, select the default bridge network. If you have a custom network configuration, you may have some work to do here.
    • On the General Settings screen, you can name the container whatever you like. I enabled the auto-restart option to keep it running. On this screen, click on the Advanced Settings button.
      • In the Environment tab, change MINIO_CONFIG_ENV_FILE to /etc/config.env
      • In the Execution Command tab, change the execution command to minio server --console-address :9090
      • Click Save to close Advanced Settings
    • On the Port Settings screen, add the following mappings:
      • Local Port 39000 -> Container Port 9000 – Type TCP
      • Local Port 39090 -> Container Port 9090 – Type TCP
    • On the Volume Settings Screen, add the following mappings:
      • Click Add File, select the minio file created above, and set the mount path to /etc/config.env
      • Click Add Folder, select the data folder created above, and set the mount path to /mnt/data

    At that point, you can view the Summary and then create the container. Once the container starts, you can access your Minio instance at http://<synology_ip_or_hostname>:39090 and log in with the credentials saved in your config file.

    What Just Happened?

    The above steps should have worked to create a Docker container running Minio on your Synology. Minio has two separate ports: one for the API, and one for the Console. Reviewing Minio’s documentation, passing the --console-address parameter in the container’s execution command is now required, and that sets the container port for the console. In our case, we set it to 9090. The API port defaults to 9000.

    However, I wanted to run on non-standard ports, so I mapped ports 39090 and 39000 to ports 9090 and 9000, respectively. That means that traffic coming in on 39090 and 39000 gets routed to my Minio container on ports 9090 and 9000, respectively.

    Securing traffic with Nginx

    I like the ability to have SSL communication whenever possible, even if it is just within my home network. Most systems today default to expecting SSL, and sometimes it can be hard to find that switch to let them work with insecure connections.

    I was hoping to get the console and the API behind the same domain, but with SSL, that just isn’t in the cards. So, I chose s3.mattsdatacenter.net as the domain for the API, and storage.mattsdatacenter.net as the domain for the Console. No, those aren’t the real domain names.

    With that, I added the following sites to my Nginx configuration:

    storage.mattsdatacenter.net
      map $http_upgrade $connection_upgrade {
          default Upgrade;
          ''      close;
      }
    
      server {
          server_name storage.mattsdatacenter.net;
          client_max_body_size 0;
          ignore_invalid_headers off;
          location / {
              proxy_pass http://10.0.0.23:39090;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-proto $scheme;
              proxy_set_header X-Forwarded-port $server_port;
              proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
    
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
    
              proxy_http_version 1.1;
              proxy_read_timeout 900s;
              proxy_buffering off;
          }
    
        listen 443 ssl; # managed by Certbot
        allow 10.0.0.0/24;
        deny all;
    
        ssl_certificate /etc/letsencrypt/live/mattsdatacenter.net/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/mattsdatacenter.net/privkey.pem; # managed by Certbot
    }
    s3.mattsdatacenter.net
      map $http_upgrade $connection_upgrade {
          default Upgrade;
          ''      close;
      }
    
      server {
          server_name s3.mattsdatacenter.net;
          client_max_body_size 0;
          ignore_invalid_headers off;
          location / {
              proxy_pass http://10.0.0.23:39000;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-proto $scheme;
              proxy_set_header X-Forwarded-port $server_port;
              proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
    
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
    
              proxy_http_version 1.1;
              proxy_read_timeout 900s;
              proxy_buffering off;
          }
    
        listen 443 ssl; # managed by Certbot
        allow 10.0.0.0/24;
        deny all;
    
        ssl_certificate /etc/letsencrypt/live/mattsdatacenter.net/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/mattsdatacenter.net/privkey.pem; # managed by Certbot
    }

    This configuration allows me to access the API and Console via their own domains, with SSL terminated on the proxy. Configuring Minio is pretty easy: set MINIO_BROWSER_REDIRECT_URL to the URL of your console (in my case, the domain that proxies to port 39090), and MINIO_SERVER_URL to the URL of your API (the domain that proxies to port 39000).

    This configuration allows me to address Minio for S3 in two ways:

    1. Use https://s3.mattsdatacenter.net for secure connectivity through the reverse proxy.
    2. Use http://<synology_ip_or_hostname>:39000 for insecure connectivity directly to the instance.

    I have not had the opportunity to test the performance difference between option 1 and option 2, but it is nice to have both available. For now, I will most likely lean towards the SSL path until I notice degradation in connection quality or speed.

    And, with that, my Minio instance is now running on my Diskstation, which means fewer VMs to manage and back up on my hypervisor.

  • Home Lab – No More iSCSI, Prep and Planning

    This post is part of a short series on migrating my home hypervisor off of iSCSI.

    • Home Lab – No More iSCSI: Prep and Planning (this post)
    • Home Lab – No More iSCSI: Shutdown and Provisioning (coming soon)
    • Home Lab – No More iSCSI: Backup plans (coming soon)

    I realized today that my home lab setup, by technology standards, is old. Sure, my overall setup has gotten some incremental upgrades, including an SSD cache for the Synology, a new Unifi Security Gateway, and some other new accessories. The base Hyper-V server, however, has remained untouched, outside of the requisite updates.

    Why no upgrades? Well, first, it is my home lab. I am a software engineer by trade, and the lab is meant for me to experiment not with operating systems or network configurations, but with application development and deployment procedures, tools, and techniques. And for that, it has worked extremely well over the last five years.

    That said, my initial setup had some flaws, and I am seeing some stability issues that I would like to correct now, before I wake up one morning with nothing working. With that in mind, I have come up with a plan.

    Setup a Temporary Server

    I am quite sure you’re thinking to yourself “What do you mean, temporary server?” Sure, I could shut everything down, copy it off the server onto the Synology, and then re-install the OS. And while this is a home lab and supposedly “throw away,” there are some things running that I consider production. For example:

    1. Unifi Controller – I do not yet have the luxury of running a Unifi Dream Machine Pro, but it is on my wish list. In the meantime, I run an instance of the Unifi controller in my “production” cluster.
    2. Home Assistant – While I am still rocking an ISY994i as an Insteon interface, I moved most of my home automation to a Home Assistant instance in the cluster.
    3. Node Red – I have been using Node-Red with the Home Assistant palette to script my automations.
    4. Windows Domain Controller – I am still rocking a Windows Domain at home. It is the easiest way to manage the hypervisor, as I am using the “headless” version of Windows Hyper-V Server 2019.
    5. Mattgerega.com – Yup, this site runs in my cluster.

    Thankfully, my colleague Justin happened to have an old server lying around that he has not powered on in a while, and has graciously allowed me to borrow it so that I can transfer my production assets over and keep things going.

    We’re gonna change the way we run…

    My initial setup put the bulk of my storage on the Synology via iSCSI, so much so that I had to put an SSD cache in the Synology just to keep up. At the beginning, that made sense. I was running mostly Windows VMs, and my vital data was stored on the VM itself. I did not have a suitable backup plan, so having all that data on the Synology meant I had at least some drive redundancy.

    Times have changed. My primary mechanism for running applications is via Kubernetes clusters. Those nodes typically contain no data at all, as I use an NFS provisioner and storage class to create persistent storage volumes via NFS on the Synology. And while I still have a few VMs with data on them that will need to be backed up, I really want to get away from iSCSI.

    The server I have, an old HP Proliant DL380 Gen8, has 8 2.5″ drive bays. My original impression was that I needed to buy SAS drives for it, but Justin said he has had luck running SATA SSDs in his.

    Requirements

    Even with a home lab move, it is always good to have some clear requirements.

    1. Upgrade my Hyper-V Server to 2019.
    2. No more iSCSI disks on the server: Rely on NFS and proper backup procedures.
    3. Fix my networking: I had originally teamed 4 of the 6 NIC ports on the server together. While I may still do that, I need to clean up that implementation, as I have learned a lot in the last few years.
    4. Keep it simple.

    Could I explore VMware or Proxmox? I could, but, frankly, I want to learn more about Kubernetes and how I can use it in application architecture to speed delivery and reliability. I do not really care what the virtualization technology is, as long as I can run Kubernetes. Additionally, I have a LOT of automation around building Hyper-V machines, and I do not want to rebuild it.

    Since this is my home lab, I do not have a lot of time to burn on it, hence the KISS approach. Switching virtualization stacks means more time converting images. Going from Hyper-V to Hyper-V means, for production VMs, I can set up replication and just move them to the temp server and back again.

    Prior Proper Planning

    With my requirements set, I created a plan:

    • Configure the temporary server and get production servers moved. This includes consolidating “production” databases into a single DB server, which is a matter of moving one or two DBs.
    • Shut down all other VMs and copy them over to a fileshare on the Synology.
    • Fresh installation of Windows Server 2019 Hyper-V.
    • Add two 1TB SSDs to the hypervisor in a RAID 1 array.
    • Replicate the VMs from the temporary server to the new hypervisor.
    • Copy the rest of the VMs to the new server and start them up.
    • Create some backup procedures for data stored on the hypervisor (i.e., if it is on a VM’s drive, it needs to be put on the Synology somewhere).
    • Delete my iSCSI LUN from the Synology.

    So, what’s done?

    I am, quite literally, still on step one. I got the temporary server up and running with replication, and I am starting to move production images. Once my temporary production environment is running, I will get started on the new server. I will post some highlights of that process in the days to come.

  • Getting Synology SNMP data into Prometheus

    With my new cameras installed, I have been spending a lot more time in the Diskstation Manager (DSM). I always forget how much actually goes on within the Synology, and I am reminded of that every time I open the Resource Monitor.

    At some point, I started to wonder whether or not I could get this data into my metrics stack. A quick Google search brought me to the SNMP Exporter and the work Jinwoong Kim has done to generate a configuration file based on the Synology MIBs.

    It took a bit of trial and error to get the configuration into the chart, mostly because I forgot how to make a multiline string in a Go template. I used the prometheus-snmp-exporter chart as a base, and built out a chart in my GitOps repo to have this deployed to my internal monitoring cluster.

    After grabbing one of the community Grafana dashboards, this is what I got:

    Security… Not just yet

    Apparently, SNMP has a few different versions. v1 and v2 are insecure, utilizing a community string for simple authentication. v3 adds stronger authentication and encryption, but that makes configuration more involved.

    Now, my original intent was to enable SNMPv3 on the Synology; however, I quickly realized that meant putting the auth values for SNMPv3 in my public configuration repo, as I haven’t figured out how to include that configuration as a secret (instead of a configmap). So, just to get things running, I settled on enabling v1/v2 with the community string. These SNMP ports are blocked from external traffic, so I’m not too concerned, but my to-do list now includes a migration to SNMPv3 for the Synology.

    When you have a hammer…

    … everything looks like a nail. The SNMP Exporter I am currently running only has configuration for Synology, but the default if_mib module that is packaged with the SNMP Exporter can grab data from a number of products which support it. So I find myself looking for “nails” as it were, or objects from which I can gather SNMP data. For the time being, I am content with the Synology data, but don’t be surprised if my network switch and server find themselves on the end of SNMP collection. I would say that my Unifi devices warrant it, but the Unifi Poller I have running takes care of gathering that data.

  • Camera 1, Camera 2…

    First: I hate using acronyms without definitions:

    • NAS – Network Attached Storage. Think “network hard drive”
    • SAN – Storage Area Network – Think “a network of hard drives”. Backblaze explains the differences nicely.
    • iSCSI Target – iSCSI is a way to mount a volume on a NAS or SAN to a server in a way that the server thinks the volume is local.

    When I purchased my Synology DS1517+ in February 2018, I was immediately impressed with the capabilities of the NAS. While the original purpose was to have redundant storage for photos and an iSCSI target for my home lab server, the Synology has quickly expanded its role in my home network.

    One day, as I was browsing the different DiskStation apps, Surveillance Station caught my eye. Prior to this, I had never really thought about getting a video surveillance system. Why?

    • I never liked the idea of one of the closed network systems, and the expense of that dedicated system made it tough to swallow.
    • I really did not like the idea of my video being “in the cloud.” Perhaps that is more paranoia talking, but when I am recording everything that happens around my house, I want a smaller threat surface area.

    So what changed my mind? After shelling out the coin for my Synology and the associated drives, I felt the need to get my money’s worth out of the system. Plus, the addition of a pool in the back yard makes me want to have some surveillance on it for liability purposes.

    Years in the making

    Let’s just say I had no urgency with this project. After purchasing the NAS, a bevy of personal and professional events came up, including, but not limited to, a divorce, a global pandemic, wedding planning (within a global pandemic), and home remodeling.

    When those events started to subside in late 2021, I started shopping different Synology-compatible cameras. As you can see, the list is extensive and not the easiest shopping list. So I defaulted to my personal technical guru (thanks Justin Lauck), who basically runs his own small business at home in addition to his day job, for his recommendations. He turned me on to these Reolink Outdoor Cameras. I pulled the trigger on those bullet cameras (see what I did there) in December of 2021.

    I had no desire to climb a ladder outside in the winter in Pittsburgh, so I spent some time in the winter preparing the wire runs inside. I already had an ethernet run to the second floor for one of my wireless access points, but decided it would be easiest to replace that run with two new lines of fresh Cat6. One went to the AP, the other to a small POE switch I put in the attic.

    The spring and summer brought some more major life events (just a high school graduation and a kid starting college… nothing major). The one day I tried to start, I realized that my ladder was not tall enough to get where I wanted to go. That, coupled with me absolutely not wanting to be 30′ up in the air, led me to delay a bit.

    Face your fears!

    Faced with potentially one of the last good weather weeks of the year, yesterday was as good a time as any to get a ladder and get to work. I rented a 32′ extension ladder from my local Sunbelt rentals and got to work. For whatever reason, I started with the highest point. The process was simple, albeit with a lot of “up and down” the ladder.

    1. Drill a small hole in the soffit, and feed my fish stick into the attic.
    2. Get up into the attic and find the fish stick.
    3. Tape the ethernet cable to the end of the stick.
    4. Get back up on the ladder and fish the cable outside.
    5. Install the camera.
    6. Check the Reolink app on my phone (30′ up) to ensure the camera angle is as I want it to be.

    Multiply that process by four cameras (one on each corner of the house), and I’m done! Well, done with the hardware install.

    Setting up Surveillance Station

    After the cameras were installed and running, I started adding them to Surveillance Station. The NAS comes with a license for two cameras, meaning I could only add two of my four cameras initially. I thought I could just purchase a digital code for a 4-pack, but realized that all the purchase options ship a physical card, meaning I have to wait for someone to mail me one. Since I had some loyalty points at work, I ordered some Amazon gift cards which I’ll use towards my 4-camera license pack. So, for now, I only have two cameras in Surveillance Station.

    There are so many options for setup and configuration that for me to try to cover them all here would not do it justice. What I will say is that initial configuration was a breeze, and if you take the time to get one camera configured to your liking, you can easily copy settings from that camera to other cameras.

    As for settings, here’s just a sampling of what you can do:

    • Video Stream format: If your camera is compatible with Surveillance Station, you can assign streams from your camera to profiles in Surveillance Station. My Reolinks support a low-quality stream (640×480) and a high-quality stream (2560×1440), and I can map those to different Surveillance Station profiles to change the quality of recording. For example, you can set up Surveillance Station to record at low quality when no motion is detected, and high quality when motion is detected.
    • Recording Schedule: You can customize the recording schedule based on time of day.
    • On Screen Display Settings – You can control overlays (like date/time and camera name) either through the camera’s own settings or through Surveillance Station. The latter is nice in that it allows you to easily copy settings across cameras.
    • Event Detection – Like On Screen Display, you can use the camera’s built-in event detection algorithm, or use Surveillance Station’s algorithm. This one I may test more: I have to assume that Surveillance Station would use more CPU on the Synology when this is set to use the Surveillance Station algorithm, as opposed to letting the camera hardware detect motion. For now, I am using the camera’s built-in algorithms.

    As I said, there are lots of additional features, including notifications, time lapse recording, and privacy screening. All in all, Surveillance Station turns your Synology into a fully functioning network video recorder.

    Going mobile

    It’s worth noting that the DS Cam app allows you to access your Surveillance Station video outside of your home, as well as register for push notifications to be sent to your phone for various events. The app itself is pretty easy to use, and since it uses the built-in Diskstation authentication, there is no need to duplicate user accounts across the systems.

    Future Expansion

    Once I get the necessary license for the other two cameras, I plan on letting the current setup ride to see what I get. However, I do have a plan for two more cameras:

    1. Inside my garage facing the garage door. This would allow me to catch anyone coming in via the garage, which, right now, I only get as a side view.
    2. On my shed facing the back of my home. This would give me a full view of the back of the house. This one, though, comes with a power issue that may end up getting solved with a small off-grid solar solution.

    Additionally, I am going to research how this can tie in with my home automation platform. It might be nice to have lights kick on when motion is detected outside.

    For now, though, I will be happy knowing that I am more closely monitoring what goes on outside my home.