Author: Matt

  • Hackintosh – Windows 11 Edition

    About a year ago, I upgraded my home laptop to Windows 11. Recently, I swapped out my old system drive (which was a spinner) for a new SSD, so I had to go through the upgrade process again. I ran into a few different issues this time around that are worth putting to paper.

    Windows 10 Install – Easy

    After installing the new drive, I followed the instructions to create Windows 10 bootable media. Booted from the USB, installed Windows 10. Nothing remarkable, just enough to get into the machine. After a few rounds of Windows updates, I felt like I was ready to go.

    Windows 11 Upgrade – Not so Easy

    My home laptop is running a CPU that isn’t compatible with Windows 11. That doesn’t mean I can’t run it; it just means I have to hack Windows a bit.

    In the past, I followed this guide to set the appropriate registry entry and get things installed. This time around should be no different, right?

    Wrong. I made the change, but the installer continued to crash. A little Googling took me to this post, which led to this article about resetting Windows Update in Windows 10. After downloading and running the batch file from the instructions, I was able to install Windows 11 again.

    Done!

    After a bit of waiting, I have a Windows 11 machine running. Now time to rebuild to my liking… Thank goodness for Chocolatey.

  • What’s in a home lab?

    A colleague asked today about my home lab configuration, and I came to the realization that I have never published a good inventory of the different software and hardware that I run as part of my home lab / home automation setup. While I have documented bits and pieces, I never pushed a full update. I will do my best to hit the highlights without boring everyone.

    Hardware

    I have a small cabinet in my basement mechanical room which contains the majority of my hardware, with some other devices sprinkled around.

    This is all a good mix of new and used stuff: eBay was a big help. Most of it was procured over several years, including a number of partial updates to the NAS disks.

    • NAS – Synology Diskstation 1517+. This is the 5-bay model. I added the M2D18 expansion card, and I currently have 5 x 4TB WD Red Drives and 2 x 1GB WD SSDs for cache. Total storage in my configuration is 14TB.
    • Server – HP ProLiant DL380p Gen8. Two Xeon E5-2660 processors, 288 GB of RAM, and two separate RAID arrays. The system array is 136GB, while the storage array is 1TB.
    • Network
      • HP ProCurve Switch 2810-24G – A 24 port GB switch that serves most of my switching needs.
      • Unifi Security Gateway – Handles all of my incoming/outgoing traffic through the modem and provides most of my high-level network capabilities.
      • Unifi Access Points – Three in total, 2 are the UAP-AC-LR models, the other is the UAP-AC-M outdoor antenna.
      • Motorola Modem – I did not need the features of the Comcast/Xfinity modem, nor did I want to lease it, so I bought a compatible modem.
    • Miscellaneous Items
      • BananaPi M5 – Runs Nginx as a reverse proxy into my network.
      • RaspberryPi 4B+ – Runs Home Assistant. This was a recent move, documented pretty heavily in a series of posts that starts here.
      • RaspberryPi Model B – That’s right, an O.G. Pi that runs my monitoring scripts to check for system status and reports to statuspage.io.
      • RaspberryPi 4B+ – Mounted behind the television in my office, this one runs a copy of MagicMirror to give me some important information at a glance.
      • RaspberryPi 3B+ – Currently dormant.

    Software

    This one is lengthy, so I broke it down into what I hope are logical and manageable categories.

    The server is running Windows Hyper-V Server 2019. Everything else, unless noted, is running on a VM on that server.

    Server VMs

    • Domain Controllers – Two Windows domain controllers (primary and secondary).
    • SQL Servers – Two SQL servers (non-production and production). It’s a home lab, so the express editions suffice.

    Kubernetes

    My activities around Kubernetes are probably the most well-documented of the bunch, but, to be complete: Three RKE2 Kubernetes clusters. Two three-node clusters and one four-node cluster to run internal, non-production, and production workloads. The nodes are Ubuntu 22.04 images with RKE2 installed.

    Management and Monitoring Tools

    For some management and observability into this system, I have a few different software suites running.

    • Unifi Controller – This makes management of the USG and Access points much easier. It is currently running in the production cluster using the jacobalberty image.
    • ArgoCD – Argo is my current GitOps operator and is used to make sure what I want deployed on my clusters is out there.
    • LGTM Stack – I have instances of Loki, Grafana, Tempo, and Mimir running in my internal cluster, acting as the central target for my logs, traces, and metrics.
    • Grafana Agent – For my VMs and other hardware that supports it, I installed Grafana Agent and configured them to report metrics and logs to Mimir/Loki.
    • HashiCorp Vault – I am running an instance of HashiCorp Vault in my clusters for secret management, using the External Secrets operator to sync those secrets into Kubernetes.
    • MinIO – To provide a local storage instance with S3-compatible APIs, I’m running MinIO in a Docker container directly on the Synology.

    Cluster Tools

    Using Application Sets and the Cluster generator, I configured a number of “cluster tools” which allow me to install different tools to clusters using labels and annotations on the Argo cluster Secret resource.

    This allows me to install multiple tools using the same configuration, which improves consistency. The following are configured for each cluster.

    • kube-prometheus – I use Bitnami’s kube-prometheus Helm chart to install an instance of Prometheus on each cluster. They are configured to remote-write to Mimir.
    • promtail – I use the promtail Helm chart to install an instance of Promtail on each cluster. They are configured to ship logs to Loki.
    • External Secrets – The External Secrets operator helps bootstrap connections to a variety of external vaults and creates Kubernetes Secret resources from the ExternalSecret / ClusterExternalSecret custom resources.
    • nfs-subdir-external-provisioner – For PersistentVolumes, I use the nfs-subdir-external-provisioner and configure it to point to dedicated NFS shares on the Synology NAS. Each cluster has its own folder, making it easy to back up through the various NAS tools.
    • cert-manager – While I currently have cert-manager installed as a cluster tool, if I remember correctly, that was for my testing of Linkerd, which I’ve since removed. Right now, my SSL traffic is offloaded at the reverse proxy, which has multiple benefits, not the least of which is that I can automate my certificate renewals in one place. Still, cert-manager is available, even though no issuers are currently configured.

    Development Tools

    It is a lab, after all.

    • ProGet – I am running the free version of ProGet for private NuGet and container image feeds. As I move to open source my projects, I may migrate to GitHub artifact storage, but for now, everything is stored locally.
    • SonarQube Community – I am running an instance of SonarQube Community for quality control. However, as with ProGet, I have begun moving some of my open source projects to SonarCloud.io, so this instance may fall away.

    Custom Code

    I have a few projects, mostly small APIs that allow me to automate some of my tasks. My largest “project” is my instance of Identity Server, which I use primarily to lock down my other APIs.
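    For a sense of what “locking down” looks like in practice, here is a minimal sketch of how one of those APIs might trust the Identity Server instance. It assumes an ASP.NET Core minimal API, and the authority URL is just a placeholder; the real wiring varies by project.

    // Minimal sketch: protect an ASP.NET Core API with JWT bearer tokens
    // issued by the lab's Identity Server. The authority URL is hypothetical.
    using Microsoft.AspNetCore.Authentication.JwtBearer;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Authority = "https://identity.example.home"; // hypothetical URL
            options.TokenValidationParameters.ValidateAudience = false;
        });
    builder.Services.AddAuthorization();

    var app = builder.Build();
    app.UseAuthentication();
    app.UseAuthorization();

    // Any endpoint marked RequireAuthorization needs a valid token from the identity provider
    app.MapGet("/status", () => "ok").RequireAuthorization();

    app.Run();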

    And of course…

    WordPress. This site runs in my production cluster, using the Bitnami chart, which includes the database.

    And there you go…

    So that is what makes up my home lab these days. As with most good labs, things are constantly changing, but hopefully this snapshot presents a high-level picture of my lab.

  • Tech Tip – Options Pattern in ASP.NET Core

    I have looked this up at least twice this year. Maybe if I write about it, it will stick with me. If it doesn’t, well, at least I can look here.

    Options Pattern

    The Options pattern is a set of interfaces that allow you to read configuration into classes in your ASP.NET Core application. It lets you define options classes that are strongly typed, with default values and attributes for validation. It also removes most of the “magic strings” that come along with reading configuration settings. I will do you all a favor and not regurgitate the documentation, but rather leave a link so you can read all about the pattern.

    A Small Sample

    Let’s assume I have a small class called HostSettings to store my options:

     public class HostSettings
     {
         public const string SectionName = "HostSettings";
         public string Host { get; set; } = string.Empty;
         public int Port { get; set; } = 5000;
     }

    And my appsettings.json file looks like this:

    {
      "HostSettings": {
        "Host": "http://0.0.0.0",
        "Port": 5000
      },
      // More settings here
    }

    Using Dependency Injection

    For whatever reason, I always seem to remember how to configure options using the dependency injection container. Assuming the above, registering the options looks something like this:

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.Configure<HostSettings>(builder.Configuration.GetSection(HostSettings.SectionName));

    From here, to get HostSettings into your class, add an IOptions<HostSettings> parameter to your class’s constructor and access the options through its Value property.

    public class MyService
    {
       private readonly HostSettings _settings;
    
       public MyService(IOptions<HostSettings> options)
       {
          _settings = options.Value;
       }
    }
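
    Since the options class can also carry validation attributes, it is worth a quick sketch of what that looks like. This is a rough sketch, assuming the Microsoft.Extensions.Options.DataAnnotations package is referenced and .NET 6 or later:

    using System.ComponentModel.DataAnnotations;

    public class HostSettings
    {
        public const string SectionName = "HostSettings";

        [Required]
        public string Host { get; set; } = string.Empty;

        [Range(1, 65535)]
        public int Port { get; set; } = 5000;
    }

    // In Program.cs: bind the section, validate the attributes, and fail fast at startup
    builder.Services
        .AddOptions<HostSettings>()
        .Bind(builder.Configuration.GetSection(HostSettings.SectionName))
        .ValidateDataAnnotations()
        .ValidateOnStart();

    With ValidateOnStart, a bad Port value stops the application at boot instead of surfacing the first time the options are read.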

    Options without Dependency Injection

    What I always, always forget about is how to get options without using the DI pattern. Every time I look it up, I have that “oh, that’s right” moment.

    var hostSettings = new HostSettings();
    builder.Configuration.GetSection(HostSettings.SectionName).Bind(hostSettings);

    Yup. That’s it. Seems silly that I forget that, but I do. Pretty much every time I need to use it.
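
    Relatedly, the configuration binder can also create and bind the instance in a single call, which does the same thing in one line:

    // Equivalent one-liner using ConfigurationBinder.Get<T>();
    // Get<T> returns null if the section is missing, hence the fallback
    var hostSettings = builder.Configuration
        .GetSection(HostSettings.SectionName)
        .Get<HostSettings>() ?? new HostSettings();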

    A Note on SectionName

    You may notice the SectionName constant that I add to the class that holds the settings. This allows me to keep the name/location of the settings in the appsettings.json file within the class itself.

    Since I only have a few classes which house these options, I load them manually. It would not be a stretch, however, to create a simple interface and use reflection to load options classes dynamically. It could even be encapsulated into a small package for distribution across applications… Perhaps an idea for an open source package.
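
    To make that idea a bit more concrete, here is a rough sketch of what the reflection approach could look like. The IOptionsSection marker interface is hypothetical, and this version simply registers the bound instances as singletons rather than wiring them up through IOptions<T>:

    using System.Linq;
    using System.Reflection;

    // Hypothetical marker interface, implemented by each settings class
    public interface IOptionsSection { }

    // In Program.cs: find every marker class, read its SectionName constant,
    // bind it from configuration, and register the instance with the container
    foreach (var type in Assembly.GetExecutingAssembly().GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && typeof(IOptionsSection).IsAssignableFrom(t)))
    {
        var sectionName = (string?)type
            .GetField("SectionName", BindingFlags.Public | BindingFlags.Static)
            ?.GetValue(null) ?? type.Name;

        var settings = Activator.CreateInstance(type)!;
        builder.Configuration.GetSection(sectionName).Bind(settings);
        builder.Services.AddSingleton(type, settings);
    }

    The trade-off is that consumers inject the settings class directly instead of IOptions<HostSettings>, but for a handful of options classes that may be perfectly acceptable.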

  • SonarCloud has become my Frank’s Red Hot…

    … I put that $h!t on everything!

    A lot has been made in recent weeks about open source and its effects on all that we do in software. And while we all debate the ethics of HashiCorp’s decision to move to a “more closed” licensing model and question the subsequent fork of their open source code, we should remember that there are companies who offer their cloud solutions free for open source projects.

    But first, GitHub

    GitHub has long been the mecca for open source developers, and even under Microsoft’s umbrella, that does not look to be slowing down. Things like CI/CD through GitHub Actions and package storage are free for public repositories. So, without paying a dime, you can store your open source code, get automatic security and version updates, build your code, and store build artifacts, all in GitHub. All of this is built on the back of a great ecosystem for pull request reviews and checks. For my open source projects, it provides great visibility into my code and puts MOST of what I want in one place.

    And then SonarQube/Cloud

    SonarSource’s SonarQube offering is a great way to get static analysis of your code. While the community edition is missing features that require an enterprise license, their cloud offering provides free analysis of open source projects.

    With that in mind, I have started to add my open source projects to SonarCloud.io. Why? Well, first, it does give me some insight into where my code could be better, which keeps me honest. Second, on the off chance that anyone wants to contribute to my projects, the Sonar analysis will help me quickly determine the quality of the incoming code before I accept the PR.

    Configuring the SonarCloud integration with GitHub even provides a sonarcloud bot that reports on the quality gate for pull requests. What does that mean? It means I get a great picture of the quality of the incoming code right on the pull request.

    What Next?

    I have been spending a great deal of time on the static code analysis side of the house, and I have been reasonably impressed with SonarQube. I have a few more public projects that I will be adding to SonarCloud, but at work, it is more about identifying the value that can come from this type of scanning.

    So, what is that value, you may ask? Enhancing and automating your quality gates is always beneficial, as it streamlines your developer workflow. It also sets expectations: engineers know that bad/smelly code will be caught well before a pull request is merged.

    If NOTHING else, SonarQube allows you to track your test coverage and ensure it does not trend backwards. If we did nothing else, we should at least ensure that we continue to cover the new code we write, even if those before us did not.

  • Taking my MagicMirror modules to Typescript

    It came as a bit of a shock that I have been running MagicMirror in my home office for almost two years now. I even wrote two modules, one to display Prometheus alerts and one to show Status Page status.

    Over the past few years I have become more and more comfortable with Typescript, so I wanted to see if I could convert my modules.

    Finding an example

    As is the case with most development, the first step was to see if someone else had done it. As it turns out, a few folks have done it.

    I stumbled across Michael Scharl’s post on dev.to, which covered his Typescript MagicMirror module. In the same search, I ran across a forum post by Jalibu that focused a little more on the nitty-gritty, including his contribution of the magicmirror-module types to DefinitelyTyped.

    Migrating to Typescript

    Ultimately, the goal was to generate the necessary module files for MagicMirror through transpilation using Rollup (see below), but first I needed to move my code and convert it to Typescript. I created a src folder, moved my module file and node_helper into there, and changed the extension to .ts.

    From there, I split things up into a more logical structure, using Typescript features and ESNext-based module imports. Since everything would be transpiled into Javascript anyway, I could take advantage of the module options in Typescript to clean up my code.

    My modules already had a good amount of development packages around linting and formatting, so I updated all of those and added packages necessary for Typescript linting.

    A Note on Typing

    Originally, following Michael Scharl’s sample code, I had essentially copied the module-types.ts file from the MagicMirror repo and renamed it ModuleTypes.d.ts in my own code. I did not particularly like that method, as it required me to have extra code in my module, and I would have to update it as the MagicMirror types changed.

    Jalibu‘s addition of the @types/magicmirror-module package simplified things greatly. I installed the package and imported what I needed.

    import * as Log from "logger";
    import * as NodeHelper from "node_helper";

    The package includes a Module namespace that makes registering your module easy:

    Module.register<Config>("MMM-PrometheusAlerts", {
      // Module implementation
    });

    A few tweaks to the tsconfig.json file, and the tsc command was running!

    Using Rollup

    The way that MagicMirror is set up, the modules generally need the following:

    • Core Module File, named after the module (<modulename>.js)
    • Node Helper (node_helper.js) that represents a Node.js backend task. It is optional, but I always seem to have one.
    • CSS file, if needed, containing any custom styling for the HTML generated in the core module file.

    Michael Scharl’s post detailed his use of Rollup to create these files; however, as the post is a few years old, it required a few updates. Most of the changes involved installing the scoped Rollup packages (@rollup), but I also removed the banner plugin.

    I configured my Rollup in a ‘one to one’ fashion, mapping my core module file (src/MMM-PrometheusAlerts.ts) to its output file (MMM-PrometheusAlerts.js) and my node helper (src/node_helper.ts) to its output file (node_helper.js). Rollup would use the Typescript transpiler to generate the necessary Javascript files, bringing in any of the necessary imports.

    Taking a cue from Jalibu, I used the umd output format for node_helper, since it will be running on the backend, but iife for the core module, since it will be included in the browser.

    Miscellaneous Updates

    As I was looking at code that had not been touched in almost two years, I took the opportunity to update libraries. I also switched over to Jest for testing, as I am certainly more familiar with it, and I need the ability to mock to complete some of my tests. I also figured out how to implement a Sass compiler as part of the Rollup build, so that I can generate my module CSS as well.

    To make things easier on anyone who might use this module, I added a postinstall script that performs the build task. This generates the necessary Javascript files for MagicMirror using Rollup.

    One down, one to go

    I converted MMM-PrometheusAlerts, but I need to convert MMM-StatusPageIo. Sadly, the latter may require some additional changes, since StatusPage added paging to their APIs and I am not yet in full compliance…. I’ve never had enough incidents that I needed to page. But it has been on my task list for a bit now, and moving to Typescript might give me the excuse I need to drop back in.

  • Tech Tip – You should probably lock that up…

    I have been running into some odd issues with ArgoCD not updating some of my charts, despite the Git repository having an updated chart version. As it turns out, my configuration and my lack of Chart.lock files seem to have been contributing to this inconsistency.

    My GitOps Setup

    I have a few repositories that I use as source repositories for Argo. They contain a mix of my own resource definition files, which are raw manifest files, and external Helm charts. The external Helm charts are wrapped in an umbrella chart so that I can add supporting resources (like secrets). My Grafana chart is a great example of this.

    Prior to this, I was not including the Chart.lock file in the repository. This made it easier to update the version in the Chart.yaml file without having to run helm dependency update to refresh the lock file. I have been running this setup for at least a year, and I never really noticed much of a problem until recently. There were a few times where things would not update, but nothing systemic.

    And then it got worse

    More recently, however, I noticed that the updates weren’t taking. I saw the issue with both the Loki and Grafana charts: The version was updated, but Argo was looking at the old version.

    I tried hard refreshes on the Applications in Argo, but nothing seemed to clear that cache. I poked around in the logs and noticed that Argo runs helm dependency build, not helm dependency update. That got me thinking “What’s the difference?”

    As it turns out, build operates from the Chart.lock file if it exists; otherwise it acts like update. update uses the version constraints in Chart.yaml to pull the latest matching versions.

    Since I was not committing my Chart.lock file, it stands to reason that somewhere in Argo there is a cached copy of a Chart.lock file that was generated by helm dependency build. Even though my Chart.yaml was updated, Argo was using the old lock file.

    Testing my hypothesis

    I committed a lock file 😂! Seriously, I ran helm dependency update locally to generate a new lock file for my Loki installation and committed it to the repository. And, even though that’s the only file that changed, like magic, Loki determined it needed an update.

    So I need to lock it up. But why? Well, the lock file exists to ensure that subsequent builds use the exact versions you specify, similar to npm and yarn. And just like npm and yarn, Helm requires an explicit command to update dependencies and regenerate the lock file.

    By not committing my lock file, the possibility exists that I could get a different version than I intended or, even worse, get a spoofed version of my package. The lock file maintains a level of supply chain security.

    Now what?

    Step 1 is to commit the missing lock files.

    At both work and home I have PowerShell scripts and pipelines that look for potential updates to external packages and create pull requests to get those updates applied. So step 2 is to alter those scripts to run helm dependency update when the Chart.yaml is updated, which will update the Chart.lock and alleviate the issue.

    I am also going to dig into ArgoCD a little bit to see where these generated Chart.lock values could be cached. In testing, the only way around it was to delete the entire ApplicationSet, so I’m thinking that the ApplicationSet controller may be hiding some data.

  • Rollback saved my blog!

    As I was upgrading WordPress from 6.2.2 to 6.3.0, I ran into a spot of trouble. Thankfully, ArgoCD rollback was there to save me.

    It’s a minor upgrade…

    I use the Bitnami WordPress chart as the template source for Argo to deploy my blog to one of my Kubernetes clusters. Usually, an upgrade is literally 1, 2, 3:

    1. Get the latest chart version for the WordPress Bitnami chart. I have a PowerShell script for that.
    2. Commit the change to my ops repo.
    3. Go into ArgoCD and hit Sync.

    That last step caused some problems this time. Everything seemed to synchronize, but the WordPress pod stalled at the database connection step. I tried restarting the pod, but nothing changed.

    Now, the old pod was still running. So, rather than mess with it, I used Argo’s rollback functionality to roll the WordPress application back to its previous commit.

    What happened?

    I’m not sure. You can upgrade WordPress from the admin panel, but that comes at a potential cost: if you upgrade the database as part of the WordPress upgrade and then “lose” the pod, you lose the application upgrade but not the database upgrade, and you are left in a weird state.

    So, first, I took a backup. Then, I started poking around in trying to run an upgrade. That’s when I ran into this error:

    Unknown command "FLUSHDB"

    I use the WordPress Redis Object Cache to get that little “spring” in my step. It seemed to be failing on the FLUSHDB command. At that point, I was stuck in a state where the application code was upgraded but the database was not. So I restarted the deployment and got back to 6.2.2 for both application code and database.

    Disabling the Redis Cache

    I tried to disable the Redis plugin, and got the same FLUSHDB error. As it turns out, the default Bitnami Redis chart disables these commands, but it would seem that the WordPress plugin still wants them.

    So, I enabled the commands in my Redis instance (a quick change in the values files) and then disabled the Redis Cache plugin. After that, I was able to upgrade to WordPress 6.3 through the UI.

    From THERE, I clicked Sync in ArgoCD, which brought my application pods up to 6.3 to match my database. Then I re-enabled the Redis Plugin.

    Some research ahead

    I am going to check with the maintainers of the Redis Object Cache plugin. If they are relying on commands that the Bitnami Redis chart disables by default, that mismatch most likely caused the issues in my WordPress upgrade.

    For now, however, I can sleep under the warm blanket of Argo rollbacks!

  • Publishing Markdown to Confluence

    For what seems like a long time now, I have been using RittmanMead’s md_to_conf project to automatically publish documentation from Markdown files into Confluence Cloud. I am on something of a documentation kick, and what started as a quick fix ultimately turned into a new project.

    It all started with the word “legacy”…

    In publishing our Markdown files, I noticed that the pages were all using the legacy editor in Confluence Cloud. I wanted to move our pages to the updated editor, and my gut reaction was “well, maybe it is because the md_to_conf project was using the v1 APIs.”

    The RittmanMead project is great, but it has not changed in about a year. Now, granted, once things work, I wouldn’t expect much change.

    So I forked the project and started changing some API calls. The issue is, well, I just did not know when to stop. My object-oriented tendencies took over, and I ended up way past the point of no return:

    • Split converter and client code into separate Python classes to simplify the main module.
    • Converted the entire project to a Python module and built a wheel for simplified installation and execution.
    • Added Flake8/Black for linting.
    • Added a GitHub workflow for building and publishing.
    • Added analysis steps to analyze the code in SonarCloud.io.

    It is worth noting that, at the end of the day, the editor value was already supported in RittmanMead/md_to_conf: You just have to set the version argument to 2 when running. I found that out about an hour or so into my journey, but, by that time, I was committed.

    Making a break for it

    At this point, a few things had happened:

    1. My code had diverged greatly from the RittmanMead project.
    2. Most likely, the functionality had drifted from the purpose for which the original project was written.
    3. I broke support for Confluence Server.
    4. I have some plans for additional features for the module, including the ability to pull labels from the Markdown.

    With that in mind, I had GitHub “break” the fork connection between my repository and the RittmanMead repository.

    Let me be VERY clear: the README will ALWAYS reflect the source of this repository. This project would not be possible without the contributors to the RittmanMead script, and whatever happens in my repository builds on their fine work. But I have designs on a more formal package and process, as well as my own functional roadmap, so a split makes the most sense.

    Introducing md-to-conf

    With that in mind, I give you md-to-conf (PyPI / GitHub)! I will be adding issues for feature enhancements and working on them as I can, although my first order of business will most likely be some basic testing to make sure I don’t break anything as I go.

    If you have a desire to contribute, please see the contribution guideline and have at it!

  • You really should get that documented…

    One of the most important responsibilities of a software engineer/architect is to document what they have done. While it is great that they solved the problem, that solution will become a problem for someone else if it has not been well documented. Truthfully, I have caused my own problems when revisiting old, poorly documented code.

    Documentation Generation

    Most languages today have tools that will extract code comments and turn them into formatted content. I used mkdocs to generate documentation for my simple Python monitoring tool.

    Why generate from the code? The simple reason is, if the documentation lives within the code, then it is more likely to be updated. Requiring a developer to jump into another repository or project takes time and can be a forgotten step in development. The more automatic your documentation is, the more likely it is to get updated.

    Infrastructure as Code (IaC) is no different. Lately I have been spending some time in Terraform, so I sought out a solution for documenting Terraform modules. terraform-docs to the rescue!

    Documenting Terraform with terraform-docs

    The Getting Started guides for terraform-docs are pretty straightforward and allow you to generate content for a variety of different output targets. All I really wanted was a simple README.md file for my projects and any sub-modules, so I left the defaults pretty much as-is.

    I typically structure my repositories with a main terraform folder, and sub-modules under that if needed. Without any additional configuration, this command worked like a charm:

    terraform-docs markdown document --output-file README.md .\terraform\ --recursive

    It generated a README.md file for my main module and all sub-modules. I played a little with configuration, mostly to set up default values in the configuration YAML file so that others could run a simpler command:

    terraform-docs --output-file README.md .\terraform\ --recursive

    Next, I will configure a pre-commit hook in my repositories to run this command so the documents are always up to date.

  • Generative AI comes to Google

    About a month ago, I signed up for Google’s generative AI offerings: SGE (generative AI in Search) and Code Tips. I was accepted to the labs a few weeks ago, and, overall, it’s been a pretty positive experience.

    SGE – Yet Another Acronym

    I don’t really have any idea what SGE means, but it is essentially the generative AI component of search. The implementation is fairly intrusive: your search is fed into the generative AI, which processes it and returns answers that show at the top of the page. Longer searches require you to click a button to submit the phrase to generative AI, but short searches are always submitted.

    Why is that intrusive? Well, when the results come back, they can take up nearly all of the screen real estate, pushing the actual search results to the bottom. In most cases, I just want the search results, not the generative AI’s opinion on the matter. I would prefer submitting to generative AI explicitly: type your search and hit a “Submit to SGE” button or something like that, which would run the search and also send the phrase to SGE, while a standard search would remain unchanged.

    As to accuracy: It’s about what I’ve seen from other tools, but nothing that really gets me overly excited. Truthfully, Google’s search algorithm tends to return results that are more meaningful than the SGE results.

    Code Tips

    Where SGE falls short, Code Tips succeeds with flying colors. These are the types of searches that I perform on a daily basis, and Code Tips does a pretty good job with them.

    For example, this search:

    powershell create object array

    returns the following suggestion from Code Tips:

    $array = @()
    $array += [pscustomobject]@{
      name = "John Doe"
      age = 30
    }
    $array += [pscustomobject]@{
      name = "Jane Doe"
      age = 25
    }

    Every Code Tip comes with a warning: use code with caution with a link to a legal page around responsible use and citations. In other words, Google is playing “CYA”.

    Code Tips works great for direct questions, but some indirect questions, like c# syntax for asynchronous lambda function, return nothing. However, with that same search string, SGE returns a pretty good synopsis of how to write an asynchronous lambda function.

    Overall Impressions

    Code Tips… a solid B+. I would expect it to get better, but, unlike GitHub Copilot, it’s not built into an IDE, which means I’m still defaulting to “ask the oracle” for answers.

    SGE, I can’t go above a C+. It’s a little intrusive for my taste, and right now, the answers it provides aren’t nearly as helpful as the search results that I now need to scroll to see.