The Home Lab (Part 1)
As a technologist, and specifically someone who works in the IT Operations/DevOps/SRE space, one of my big passions is technology infrastructure. Over the years, I've cobbled together meagre home lab environments with old desktops, laptops, and workstations scavenged from friends, employers, or eBay.
However, as my life has become more and more focused on IT automation, that environment has grown and evolved.
Version 1 of what I would consider a true "Home Lab", with actual rackmount servers, was built from a bunch of SuperMicro 1U servers I got from UnixSurplus and used to stand up a basic Apache CloudStack environment. That, in turn, necessitated moving from simple consumer-grade Netgear switches and my ISP-provided router to some Mikrotik switches and a router. This worked for a while, helped me get my job done, and let me make contributions to open source projects.
Version 1.5 evolved away from CloudStack (as I was no longer interacting with it regularly at work due to changing jobs) and moved into a more static vSphere environment, built on the same servers and licensed through the VMUG Advantage Program (and some less legitimate means as well). This gave me the foundation to run my first Active Directory environment (having spent basically my entire career in UNIX and Linux environments) and to start becoming more comfortable in more heterogeneous environments.
This worked pretty well, but it also used a LOT of electricity, and the apartment I lived in didn't have the best of wiring. During the New Jersey summers, if I turned on my window-unit air conditioner while having more than one or two of these servers online, I would trip the breaker for the power circuit on that half of the apartment. Be comfortable or run servers? I ended up primarily focusing on comfort and would let my homelab sit idle for about half of the year.
Then, as my work at IRL fired up, I bought one ASRock Rack X470D4U server to replace the 5 SuperMicros, which were just getting older and older with their Opteron processors, in an effort to set up a box with more oomph and lower power draw.
This worked okay with vSphere, but I was getting tired of paying for the VMUG license and dealing with the shrinking window of hardware supported by VMware, so as I bought my first house and started building a more serious homelab, I bought two more of these ASRock Rack servers and moved from vSphere to Proxmox.
Having spent a bunch of time in the KVM world while working with CloudStack, I found this a nice change! It was a bit kludgy to be constantly patching and tweaking Proxmox so it wouldn't pop up license nags, would have a dark theme, and would offer a single, highly available endpoint for API calls or UI access. Proxmox also gave me my first real exposure to Ceph.
However, one Proxmox update went sideways, requiring me to learn more than I ever wanted to about salvaging data from a degraded Ceph cluster, and Proxmox's poor VM migration process just left me wanting something better.
Since I've called out the other hardware and many of the vendors in here, it's also worth mentioning that while I was waffling over which hypervisor to use and migrating, I picked up three HP DL360 G9 servers from TechMikeNY and a QNAP 12-bay NAS to provide some off-cluster storage and additional compute nodes to keep VMs running DNS, DHCP, and other services as I moved things around.
Enter oVirt. I'd briefly played with oVirt several years prior, but found it obnoxious to use because I was running it on hardware without the IPMI it wants. Now I was running on actual servers with IPMI (albeit a bit janky, because ASRock Rack isn't a major vendor), and I started to become happy. The lack of a Terraform provider sucked, but I could make do.
Then, Red Hat announced that they were shelving the Red Hat Enterprise Virtualization (RHEV) product that oVirt is the upstream of. Fuck. While the oVirt project has come forth and stated that they're not ending development, I just don't see Red Hat/IBM continuing to spend paid engineering hours on a project that brings them no revenue.
So, what's replacing RHEV? OpenShift Virtualization. Okay, this was actually one of the reasons I was jumping around hypervisors anyway: trying to build and test Kubernetes environments in not-a-cloud is astoundingly difficult. So, I figured migrating from oVirt to KubeVirt under OKD (the open source upstream of OpenShift) would make for an interesting weekend project.
Little did I know what was ahead of me.
(Spoiler alert: it ends up working, and the next post will go over how I got it working.)