Tag: api

  • D-N-S Ja!

    With all this talk of home lab cluster provisioning, you might be wondering if I am actually doing any software development at home. As a matter of fact, I am. Just because it is in support of my home lab provisioning does not mean it is not software development!

    Keeping the Lab Tidy

    One of the things that has bothered me about my home lab is DNS management. As I provision and remove Linux VMs, having appropriate DNS records for them makes them easy to find. It also makes for a tidier environment, since I have a list of my machines and their IPs in one place. I have a small PowerShell module that uses the DnsServer module in Windows; what I wanted was an API that would let me manage my DNS.

    Now, taking a cue from my Hyper-V wrapper, I created a small API that uses the DnsServer module to manage DNS entries. It was fairly easy, and it works quite well on my own machine, which has the DnsServer module available because I have the Remote Server Administration Tools (RSAT) installed.
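
    To give a sense of what the API wraps, here is a minimal sketch of the kind of DnsServer calls involved. The wrapper function names, zone name, and server name below are hypothetical placeholders rather than code from my repository; Add-DnsServerResourceRecordA and Remove-DnsServerResourceRecord are the actual DnsServer module cmdlets.

      # Hypothetical wrapper functions around the DnsServer module.
      # Zone and server names are placeholders; -ComputerName targets a remote DNS server.
      function Add-LabDnsRecord {
          param(
              [Parameter(Mandatory)] [string] $Name,
              [Parameter(Mandatory)] [string] $IPv4Address,
              [string] $ZoneName  = 'lab.example.com',
              [string] $DnsServer = 'dc01'
          )
          # -CreatePtr also creates the matching reverse-lookup (PTR) record
          Add-DnsServerResourceRecordA -ComputerName $DnsServer -ZoneName $ZoneName `
              -Name $Name -IPv4Address $IPv4Address -CreatePtr
      }

      function Remove-LabDnsRecord {
          param(
              [Parameter(Mandatory)] [string] $Name,
              [string] $ZoneName  = 'lab.example.com',
              [string] $DnsServer = 'dc01'
          )
          Remove-DnsServerResourceRecord -ComputerName $DnsServer -ZoneName $ZoneName `
              -RRType 'A' -Name $Name -Force
      }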

    Location, Location, Location

    When I started looking at where I could host this service, I realized that I could not host it on my hypervisor as I did with the Hyper-V service. My server is running Hyper-V Server 2019, a stripped-down edition of Windows Server meant solely to act as a hypervisor, which means I am unable to install the DNS Server role on it. Admittedly, I did not try installing RSAT on it, but I tend to believe that would not work.

    Since the DnsServer module is installed by default on my domain controller, I decided to host the DNS API on that server. I created an appropriate service account and installed the API as a Windows service. Just like the Hyper-V API, the Windows DNS API is available on GitHub.

    Return to API Management

    At this point, I have APIs hosted on a few different machines, plus the APIs hosted in my home lab clusters. This has forced me to revisit installing an API Management solution at home. Sure, no one else uses my lab, but that is not the point. Right now, I have a “service discovery” problem: where are my APIs, how do I call them, what is their authentication mechanism, and so on. This is part of what API Management can solve: it gives me a single place to locate and call my APIs. Over the next few weeks I may delve back into Gravitee.io in an effort to re-establish a proper API Management service.

    Going Public, Going Github

    While it may seem like I am “burying the headline,” I am going to start making an effort to go public with more of my code. Why? Well, I have a number of different repositories that might be of use to some folks, even as reference. Plus, well, it keeps me honest: Going public with my code means I have to be good about my own security practices. Look for posts on migration updates as I get to them.

    Going public will most likely mean going GitHub. Yes, I have some public repositories out in Bitbucket, but GitHub provides a bit more community and visibility for my work. I am sure I will still keep some repositories in Bitbucket, but for the projects that I want public feedback on, I will shift to GitHub.

    Pop Culture Reference Section

    The title is a callout to Pitch Perfect 2. You are welcome.

  • To define or document? Methods for generating API Specifications

    There is an inherent “chicken and egg” problem in API development. Do we define a specification before creating an API implementation (specification-first), or do we implement an API and generate a specification from that implementation (code-first)?

    Determining how to define and develop your APIs affects future consumption and alternative implementations, so it is important to evaluate the purpose of your API and identify the most effective method for defining it.

    API (or Code) First

    In API-first (or code-first) development, developers set up a code project and begin coding endpoints and models immediately. Typically, developers treat the specification as generated documentation of what they have already built. This method means the API definition is fluid as implementation occurs: oh, you forgot a property on an object or a whole path? No problem, just add the necessary code, and automated generation will take care of updating your API specification. When you know that the API you are working on is going to be the only implementation of that interface, this approach makes the most sense, as it requires less upfront work from the development team and it is easier to change the API specification.

    Specification First

    On the other hand, specification-first development entails defining the API specification first. Developers define the paths, actions, responses, and object models before writing any code at all. From there, both client and server code can be generated. This requires more effort: developers must define all the necessary changes prior to the actual implementation on either the client or server side. That upfront effort produces a truly reusable specification, since it is not generated from a single implementation of the API. This method is most useful when developing specifications for APIs that will have multiple implementations.
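
    For concreteness, here is a small fragment of what a specification-first definition might look like in OpenAPI 3 (YAML). The path and schema below are hypothetical examples, not taken from any particular project:

      openapi: 3.0.3
      info:
        title: DNS Records API          # hypothetical example API
        version: 1.0.0
      paths:
        /records/{name}:
          get:
            summary: Look up an A record by host name
            parameters:
              - name: name
                in: path
                required: true
                schema:
                  type: string
            responses:
              '200':
                description: The matching record
                content:
                  application/json:
                    schema:
                      $ref: '#/components/schemas/DnsRecord'
      components:
        schemas:
          DnsRecord:
            type: object
            properties:
              name:
                type: string
              ipAddress:
                type: string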

    What should you use?

    Whichever you want. I’m really not here to tout the benefits of either one. In my experience, the choice depends primarily on answering the following question: Will there be only one implementation of this API? If the answer is yes, then code-first would be my choice, simply because it does not require a definition process up front. If, however, you anticipate more than one implementation of a given API, it is wise to start with the specification first. Changes to the specification should be more deliberate, as they will affect a number of implementations.

    Tools to help

    No matter your selection, there are tools to aid you in both cases.

    For specification-first development, the OpenAPI Generator is a great tool for generating consumer libraries and implementations. Once you create the API specification document, the OpenAPI Generator can generate a wide array of clients, servers, and other schemas. I have used the generator to create Axios-based TypeScript clients for user interfaces as well as model classes for server-side development. I have only ever used the OpenAPI Generator in a manual generation process: when the developer changes the specification, they must also regenerate the client and server code. This, however, is not a bad thing: specification changes typically must be very deliberate and take into account all existing implementations, so keeping the process manual forces that.
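
    As a rough illustration, assuming the specification lives in a hypothetical dns-api.yaml and the npm openapi-generator-cli wrapper is installed, generating a TypeScript/Axios client and ASP.NET Core server stubs looks something like this:

      # Generate a TypeScript (Axios) client from the specification
      openapi-generator-cli generate -i dns-api.yaml -g typescript-axios -o ./generated/ts-client

      # Generate ASP.NET Core server stubs from the same specification
      openapi-generator-cli generate -i dns-api.yaml -g aspnetcore -o ./generated/server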

    In my API-first projects, I typically use the NSwag toolchain both to generate the specification from the API code and to generate any clients that I may need. NSwag’s toolchain integrates nicely with .NET 5 projects and can be configured to generate specifications and additional libraries at build time, making it easy to deploy these changes automatically.
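
    For reference, the NSwag toolchain is typically driven by an nswag.json configuration document; a minimal command-line invocation, assuming the NSwag CLI is installed (for example via the npm nswag package) and a .NET 5 target, would be roughly:

      # Run the NSwag configuration document; it generates the OpenAPI spec
      # and any clients defined in nswag.json for the .NET 5 runtime
      nswag run nswag.json /runtime:Net50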

    It is worth noting that both NSwag and the OpenAPI Generator can be configured to support either method; my examples above come simply from my own experience with each.