Category: Architecture

  • Can Yellowstone teach us about IT infrastructure management?

    It seems almost too fitting that, at a time when the popularity of gritty television like Yellowstone and 1883 is climbing, I write to encourage you to stop taking on new pets and to start running a cattle ranch.

    Pet versus Cattle – The IT Version

    The “pet versus cattle” analogy is often used to describe different methodologies for IT management. You treat a pet by giving it a name like Zeus or Apollo. You give it a nice home. You nurse it back to health when it’s sick. Cattle, on the other hand, get an ear tag with a number and are sent to roam the field until they are needed.

    Carrying this analogy into IT, I have seen my share of pet servers. We build them up, nurse them back to health after a virus, upgrade them when needed, and do all the things a good pet owner would do. And when they go down, people notice. “Cattle” servers, on the other hand, are quickly provisioned and just as quickly replaced, often without any maintenance or downtime.

    Infrastructure as Code

    In its most basic form, infrastructure as code is the idea that an infrastructure can be defined in a file (preferably source controlled). Using various tools, we can take that definition and create the necessary infrastructure components automatically.

    Why do we care if we can provision infrastructure from code? Treating our servers as cattle requires a much better management structure than “have Joe stand this up for us.” Sure, Joe is more than capable of creating and configuring all of the necessary components, but if we want to do it again, it requires more of Joe’s time.

    With the ability to define an infrastructure one time and deploy it many times, we gain the capacity to worry more about what is running on the infrastructure than the infrastructure itself.
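
    As a minimal sketch of what such a definition can look like (the resource names, region, and variable are illustrative placeholders, assuming the Terraform azurerm provider), a single Azure resource group might be defined as:

```hcl
# Minimal Terraform definition: one provider and one Azure resource group.
# All names and the location below are illustrative placeholders.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "environment" {
  type    = string
  default = "dev"
}

resource "azurerm_resource_group" "main" {
  name     = "rg-myproject-${var.environment}"
  location = "eastus"
}
```

    Running terraform apply against this file creates the resource group; overriding the environment variable deploys the same definition again for another environment, which is exactly the define-once, deploy-many capacity described above.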

    Putting ideas to action

    Changing my mindset from “pet” to “cattle” with virtual machines on my home servers has been tremendously valuable for me. As I mentioned in my post about Packer.io, you become much more risk tolerant when you know a new VM is 20 unattended minutes away.

    I have started to encourage our infrastructure team to accept nothing less than infrastructure as code for new projects. In order to lead by example, I have been working through creating Azure resources for one of our new projects using Terraform.

    Terraform: Star Trek or Nerd Tech?

    You may be asking: “Aren’t they both nerd tech?” And, well, you are correct. But when you have a group of people who grew up on science fiction and are responsible for keeping computers running, well, things get mixed up.

    Terraform, the HashiCorp product, is one of a number of tools that allow infrastructure engineers to automatically provision the environments that host their products, using various providers. I have been using its Azure Resource Manager provider, although there are more than a few available.

    While I cannot share the project code, here is what I was able to accomplish with Terraform:

    • Create and configure an Azure Kubernetes Service (AKS) instance
    • Create and configure an Azure SQL Server with multiple databases
    • Attach these instances to an existing virtual network subnet
    • Create and configure an Azure Key Vault
    • Create and configure a public IP address from an existing prefix
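
    While the project code itself cannot be shared, a hypothetical sketch of the first and third bullets (every name, size, and count below is a placeholder, not a value from the actual project) might look like:

```hcl
# Hypothetical sketch: an AKS cluster attached to an existing subnet.
# Looks up the subnet by name, then places the cluster's node pool in it.
data "azurerm_subnet" "existing" {
  name                 = "snet-apps"
  virtual_network_name = "vnet-shared"
  resource_group_name  = "rg-network"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-myproject"
  location            = "eastus"
  resource_group_name = "rg-myproject"
  dns_prefix          = "myproject"

  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = data.azurerm_subnet.existing.id
  }

  identity {
    type = "SystemAssigned"
  }
}
```

    The data block is what attaches new resources to existing infrastructure: Terraform reads the subnet’s ID at plan time rather than requiring it to be managed in the same project.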

    Through more than a few rounds of destroy and recreate, the project is in a state where it is ready to be deployed in multiple environments.

    Run a ranch

    So, perhaps, Yellowstone can teach us about IT infrastructure management. A cold and calculated approach to infrastructure will lead to a homogeneous environment that is easy to manage, scale, and replace.

    For what it’s worth, 1883 is simply a live-action version of Oregon Trail…

  • Inducing Panic in a Software Architect

    There are many, many ways to induce panic in people, and by no means will I be cluing you in to all the different ways you can succeed in sending me into a tailspin. However, if there is one thing that anyone can do that immediately has me at a loss for words and looking for the exit, it is to ask this one question: “What do you do for a living?”

    It seems a fairly straightforward question, and one that should have a relatively straightforward answer. If I said “I am a pilot,” then it can be assumed that I fly airplanes in one form or another. It might lead to a conversation about the different types of planes I fly, whether I carry cargo or people, etc. However, answering “I am a software architect” usually leads to one of two responses:

    1. “Oh…”, followed by a blank stare from the asker and an immediate topic change.
    2. “What’s that?”, followed by a blank stare from me as I try to encapsulate my various functions in a way that does not involve twelve PowerPoint slides and a scheduled break.

    In social situations, or, at the very least, the social situations in which I am involved, being asked what I do is usually an immediate buzzkill. While I am sure people are interested in what I do, there is no generic answer. And the specific answers are typically so dull and boring that people lose interest quickly.

    Every so often, though, I run across someone in the field, and the conversation turns more technical. Questions around favorite technology stacks, innovative work in CI/CD pipelines, or some sexy new machine learning algorithms are discussed. But for most, describing the types of IT architects out there is confusing because, well, even we have trouble with it.

    IT Architect Types

    Red Hat has a great post on the different types of IT architects. They outline different endpoints of the software spectrum and how different roles can be assigned based on these endpoints. From those endpoints they illustrate the different roles an architect can play, color-coordinated to the different orientations along the software spectrum.

    However, only the largest of companies can afford to confine their architects to a single circle in this diagram, and many of us wear one or more “role hats” as we progress through our daily work.

    My architecture work to this point has been primarily developer-oriented. While I have experimented in some operations-oriented areas, my knowledge and understanding lie primarily in the application realm. Prior to my transfer to an architecture role, I was an engineering manager. That previous role exposed me to much more of the business side of things, and my typical frustrations today are more about what we as architects can and should be doing to support the business.

    So what do I do?

    In all honesty, I used to just say “software developer” or “software engineer.” Those titles are usually more generally understood, and I can be very generic about it. But as I work to progress in my own career, the need for me to be able to articulate my current position (and desired future positions) becomes critical.

    Today, I try to phrase my answers around being a leader in delivering software that helps customers do their job better. It is usually not as technical, and therefore not as boring, but does drive home the responsibilities of my position as a leader.

    How that answer translates to a cookout, well, that always remains to be seen.

  • Designing for the public cloud without breaking the bank

    The ubiquity and convenience of cloud computing resources are a boon to companies who want to deliver digital services quickly. Companies can focus on delivering content to their consumers rather than investing in core competencies such as infrastructure provisioning, application hosting, and service maintenance. However, for those companies who have already made significant investment into these core competencies, is the cloud really better?

    The Push to (Public) Cloud

    The maturation of the Internet and the introduction of public cloud services have reduced the barrier to entry for companies who want to deliver in the digital space. What used to take a dedicated team of systems engineers and a full server farm buildout can now be accomplished by a few well-trained folks and a cloud account. This allows companies, particularly those in their early growth phases, to focus more on delivering content rather than growing core technological competencies.

    Many companies with heavy investment in core competencies and private clouds are deciding to move their products to the cloud. This move can offer established companies a chance to “act as a startup” by refining their focus and delivering content faster. However, when those companies look at their cloud bill, there can be an element of sticker shock. [1]

    Why Move

    For a company with a heavy investment in its own private cloud infrastructure, why move at all?

    Inadequate Skillsets for Innovative Technology

    While companies may have significant investments in IT resources, they may not have the capability or desire to train these resources to maintain and manage the latest technologies. For example, managing a production-level Kubernetes cluster requires skills that may not be present in the company today.

    Managed Services

    This is related to the first point, but, in many cases, it is much easier to let the company that created a service host it for you, rather than trying to host it on your own. Things like Elastic or Kafka can be hosted internally, but letting the experts run your instances as a managed service can save you money in the long run.

    Experimentation

    Cloud accounts can provide excellent sandbox environments for senior engineers to prove their work quickly, focusing more on functionality than provisioning. Be careful, though: this can lead to skipping the steps between proof of concept and production.

    Building for the future

    As architects/senior engineers, how can we reconcile this? While we want to iterate quickly and focus on delivering customer value, we must also be mindful of our systems’ design.

    Portability

    The folks at Andreessen Horowitz used the term “repatriation” to describe a move back to private clouds from public clouds. [2] As you can imagine, this may be very easy or very difficult, depending on your architecture: repatriating a virtual machine is much easier than, say, repatriating an API Management service from the public cloud to a private offering.

    As engineers design solutions, it is important to think about the portability of the solution you are creating. Can the solution you are building be hosted anywhere (public or private cloud), or is it tailor-made to a specific public cloud? The latter may allow for quick ramp-up, but will lock you into a single cloud with little you can do to improve your spend.

    Cost

    When designing a system for the cloud, you must think a lot harder about the pricing of the individual components. Every time you add another service, storage account, VM, or cluster on your design diagram, you are adding to the cost of your system.

    It is important to remember that some of these costs are consumption based, which means you will need to estimate your usage to get a true idea of the cost. I can think of nothing worse than a huge cloud bill on a new system because I forgot to estimate consumption charges.
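
    As a rough sketch of that kind of estimate (every rate below is a made-up placeholder, not any provider’s actual price sheet), a back-of-the-envelope calculator makes the consumption terms explicit:

```python
# Back-of-the-envelope estimator for consumption-based pricing.
# The rates are hypothetical placeholders; substitute your provider's
# real price sheet before trusting the numbers.

def estimate_monthly_cost(requests_millions: float,
                          gb_stored: float,
                          gb_egress: float) -> float:
    PRICE_PER_MILLION_REQUESTS = 0.40  # hypothetical $ per 1M requests
    PRICE_PER_GB_STORED = 0.02         # hypothetical $ per GB-month
    PRICE_PER_GB_EGRESS = 0.09         # hypothetical $ per GB transferred
    return (requests_millions * PRICE_PER_MILLION_REQUESTS
            + gb_stored * PRICE_PER_GB_STORED
            + gb_egress * PRICE_PER_GB_EGRESS)

# Example month: 500M requests, 2 TB stored, 800 GB egress.
print(round(estimate_monthly_cost(500, 2048, 800), 2))
```

    Even a crude model like this forces the usage question (“how many requests? how much egress?”) to be asked at design time rather than on the first invoice.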

    SOLID Principles

    SOLID design principles are typically applied to object-oriented programming; however, some of those same principles should be applied to larger cloud architectures. In particular, Interface Segregation and Liskov Substitution (or, more generally, design by contract) facilitate simpler changes to services by abstracting the implementation from the service definition. Components that are easier to replace are easier to repatriate into private clouds when profit margins start to shrink.
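
    As a small, hypothetical illustration of design by contract in this spirit (the names here are invented for the example), application code can depend on a narrow storage contract rather than a specific cloud SDK, so the backing implementation can be swapped during a repatriation without touching callers:

```python
# Design-by-contract sketch: callers depend on the BlobStore contract,
# not on any one cloud's SDK, so implementations are interchangeable.
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """The narrow contract the application depends on."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryBlobStore(BlobStore):
    """Stand-in for an on-premises (or test) implementation; a
    cloud-backed class satisfying the same contract could replace it."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application logic only sees the contract; any conforming store works.
    store.put(f"reports/{name}", body)


store = InMemoryBlobStore()
archive_report(store, "q1.pdf", b"report bytes")
print(store.get("reports/q1.pdf"))
```

    The same idea scales up to whole services: define the contract (API schema, queue semantics) independently of the vendor, and the public-cloud implementation becomes one substitutable option among several.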

    What is the Future?

    Do we need the public cloud? Most likely, yes. Initially, it will shorten your time to market by removing some of the blockers and allowing your engineers to focus on content rather than core competencies. However, architects and systems engineers need to be aware of the trade-offs in going to the cloud. While private clouds represent a large initial investment and monthly maintenance, public cloud costs, particularly those on consumption-based pricing, can quickly spiral out of control if not carefully watched.

    If your company already has a good IT infrastructure and the knowledge in place to run its own private cloud, or if your cloud costs are slowly eating into your profit margins, it is probably a good time to consider repatriating services onto private infrastructure.

    As for the future: a mix of public cloud services and private infrastructure services (hybrid cloud) is most likely in my future; however, I believe the trend will differ depending on the maturity and technical competency of your staff.

    References

    1. “Cloud ‘sticker shock’ explored: We’re spending too much”
    2. “The Cost of Cloud, a Trillion Dollar Paradox”