Tag: raspberry pi

  • Updated Site Monitoring

    What seemed like forever ago, I put together a small project for simple site monitoring. My md-to-conf work enhanced my Python skills, and I thought it would be a good time to update the monitoring project.

    Housekeeping!

    First things first: I transferred the repository from my personal GitHub account to the spydersoft-consulting organization. Why? Separation of concerns, mostly. Since I fork open source repositories into my personal account, I do not want the open source projects I am publishing to be mixed in with those forks.

    After that, I went through the process of converting my source to a package, with GitHub Actions to build and publish to PyPI.org. I also added testing, formatting, and linting, copying settings and actions from the md_to_conf project.

    Oh, SonarQube

    Adding linting with SonarQube surfaced a LOT of new warnings and errors, everything from long lines to bad variable names. Since my build process does not succeed if those types of issues are found, I went through the process of fixing all of those warnings.

    The variable naming ones were a little more difficult, as some of my classes map directly to the configuration file serialization. That meant I had to change my configuration files as well as the code, and I went through a few iterations because I missed some.
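
    As a rough illustration of why the renames rippled into the configuration files, imagine a settings class hydrated straight from the JSON config. The class and field names below are hypothetical, not the actual pi-monitor schema, but the idea is the same: rename the field, and the key in the config file has to change with it.

    # Hypothetical sketch of a settings class hydrated from the JSON config.
    # The class and field names are illustrative, not the real pi-monitor schema.
    import json
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class SiteSettings:
        # Formerly something like "siteUrl"; renamed to satisfy the
        # snake_case naming rule, which meant the JSON key changed too.
        site_url: str
        timeout_seconds: int = 30


    def load_settings(path: str) -> List[SiteSettings]:
        # Read the config file and map each entry onto SiteSettings.
        with open(path, "r", encoding="utf-8") as config_file:
            data = json.load(config_file)
        return [SiteSettings(**entry) for entry in data["sites"]]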

    I also had to add a few tests, just so that the tests and coverage scripts get run. Could I have omitted the tests entirely? Sure. But a few tests to read some sample configuration files never hurt anyone.
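
    The tests themselves are nothing fancy. A minimal sketch of the kind of test I mean, with a hypothetical sample file and keys, looks roughly like this:

    import json


    def test_sample_config_loads():
        # Hypothetical test: read a sample configuration file checked in
        # alongside the tests and verify the renamed keys are present.
        with open("tests/sample.config.json", encoding="utf-8") as config_file:
            config = json.load(config_file)
        assert "sites" in config
        assert all("site_url" in site for site in config["sites"])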

    Complete!

    I got everything renamed and building pretty quickly, and added my PyPI.org API token to the repository for the actions. I then provisioned a new analysis project in SonarCloud and merged everything into main. Creating a new GitHub release triggered a publish to PyPI.org.

    Setting up the Raspberry Pi

    The last step was to get rid of the code on the Raspberry Pi, and use pip to install the package. This was relatively easy, with a few caveats.

    1. Use pip3 install instead of pip – I forgot the old Pi has both Python 2 and 3 installed.
    2. Fix the config files – I had to change my configuration file to reflect the variable name changes.
    3. Change the cron job – This one needs a little more explanation.

    For the last one, I had to point the cron job specifically to /usr/local/bin/pi-monitor, since that’s where pip installed it. My new cron job looks like this:

    SHELL=/bin/bash
    
    */5 * * * * pi cd /home/pi && /usr/local/bin/pi-monitor -c monitor.config.json 2>&1 | /usr/bin/logger -t PIMONITOR

    That runs the application and logs everything to syslog with the PIMONITOR tag.

    Did this take longer than I expected? Yeah, a little. Is it nice to have another open source project in my portfolio? Absolutely. Check it out if you are interested!

  • ISY and the magic network gnomes

    For nearly 2 years, I struggled mightily with communication issues between my ISY 994i and some of my Docker containers and servers. So much so, in fact, that I had a fairly long-running post in the Universal Devices forums dedicated to the topic.

    I figure it is worth a bit of a rehash here, if only to raise the issue in the hopes that some of my more network-experienced contacts can suggest a fix.

    The Beginning

    The initial post was essentially about my ASP.NET Core API (.NET Core 2.2 at the time) not being able to communicate with the ISY’s REST API. You can read through the initial post for details, but, basically, it would hit the ISY once, then time out on subsequent requests.

    It would seem that some time between my original post and the administrator’s reply, I set the container’s networking to host and the problem went away.

    In retrospect, I had not been heavily using that API anyway, so it may have just been hidden a bit better by the host network. In any case, I ignored it for a year.

    The Return

    About twenty (that’s right, 20) months later, I started moving my stuff to Kubernetes, and the issue reared its ugly head. I spent a lot of time trying to get some debug information from the ISY, which only confused me more.

    As I dug more into when it was happening, it occurred to me that I could not reliably communicate with the ISY from any of the VMs on my HP ProLiant server. Even more puzzling, I could not do a port 80 retrieval from the server itself to the ISY. Oddly, though, I was able to communicate with other hardware devices on the network (such as my MiLight Gateway) from the server and its VMs. Additionally, the ISY responded to pings, so it was reachable.
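
    To make the symptom concrete, here is a rough Python sketch of the check I was effectively doing by hand: a plain TCP connection to port 80 on the ISY (the address below is a placeholder) would hang from the server and its VMs, even though pings to the same address succeeded.

    # Rough sketch of the reachability test; the ISY address is a placeholder.
    # A TCP connection to port 80 either succeeds quickly or hangs until the
    # timeout, even though the same host happily answers pings.
    import socket

    ISY_HOST = "192.168.1.50"

    try:
        with socket.create_connection((ISY_HOST, 80), timeout=5):
            print("Port 80 reachable")
    except (socket.timeout, OSError) as error:
        print(f"Port 80 unreachable: {error}")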

    Time for a new proxy

    Now, one of the VMs on my server was an Ubuntu machine serving as an NGINX reverse proxy. For various reasons, I wanted to move that from a virtual machine to a physical box. This, it seemed, would be a good time to see whether a new proxy led to different results.

    I had an old Raspberry Pi 3B+ lying around, and that seemed like the perfect candidate for a standalone proxy. So I quickly imaged an SD card with Ubuntu 20, copied my NGINX configuration files from the VM to the Pi, and re-routed my firewall traffic to the new proxy.

    Not only did that work, it also solved the ISY connectivity issue. Routing traffic through the Pi, I am able to communicate with the ISY reliably from my server, all of my VMs, and other PCs on the network.

    But, why?

    Well, that is the million-dollar question, and, frankly, I have no clue. Perhaps it has to do with the NIC teaming on the server, or some other oddity in its network configuration. But I burned way too many hours on it to want to dig any deeper.

    You may be asking, why a hardware proxy? I liked the reliability and smaller footprint of a dedicated Raspberry Pi proxy, external to the server and any VMs. It made the networking diagram much simpler, as traffic now flows neatly from my gateway to the proxy and then to the target machine. It also allows me to control traffic to the server in a more granular fashion, rather than having ALL traffic pointed to a VM on the server, and then routed via proxy from there.

  • Making use of my office television with Magic Mirror

    I have a wall-mounted television in my office that, 99% of the time, sits idle. Sadly, the fully loaded RetroPie attached to it doesn’t get much Super Mario Bros action during the workday. But that idle Raspberry Pi had me thinking of ways to utilize that extra screen in my office. Since, well, 4 monitors is not enough.

    Magic Mirror

    At first, I contemplated writing my own application in .NET 5. But I really do not have the kind of time it would take to get something like that moving, and it seems counterproductive. I wanted something quick and easy, with no necessary input interface (it is a television, after all), and capable of displaying feed data quickly. That is when I stumbled on MagicMirror².

    MagicMirror is a Node-based app that uses Electron to display HTML on various platforms, including the Raspberry Pi. It is popular, well-documented, and modular. It has default modules for weather and a clock, as well as a third-party module for package tracking.

    Server Status… Yes please.

    There are several modules for displaying status, but nothing I saw let me display the status from my statuspage.io page. And since statuspage.io has a public API, unique to each page, that doesn’t require an API key, it felt like a good first module to develop for myself.
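
    For reference, every statuspage.io page exposes a public status endpoint under its own domain, so pulling the overall status takes very little code. A minimal sketch in Python (the page URL below is a placeholder, not my actual page) looks like this:

    # Minimal sketch of pulling the overall status from a statuspage.io page.
    # The page URL is a placeholder; each page has its own public domain.
    import json
    import urllib.request

    STATUS_URL = "https://example.statuspage.io/api/v2/status.json"

    with urllib.request.urlopen(STATUS_URL, timeout=10) as response:
        payload = json.load(response)

    # The payload contains a "status" object with an "indicator" and a
    # human-readable "description" (for example, "All Systems Operational").
    print(payload["status"]["description"])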

    I spent a lot of time understanding the module system in MagicMirror, but, in the end, I have a pretty simple module that displays my statuspage.io status on the mirror.

    What is next?

    Well, there are two things I would really like to try:

    1. Work on integrating a floor plan of my house, complete with status from my home automation.
    2. Re-work MagicMirror as a React app.

    For the first one, there are some existing third-party components that I will test. The second one, well, that seems a slightly taller task. That might have to be a winter project.