Rebuilding my homelab: Suffering as a service

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • cloudseeder

    One-click install internet appliances that operate on your terms. Transform your home computer into a sovereign and secure cloud.

  • This was a tremendous write-up. I appreciate the detail, including your ingressd setup. I agree, though, that this is a pain. It's for this reason that we made Cloud Seeder [1], so you can have hands-free setup of your homelab, and IPv6rs for painless ingress [2].

    [1] https://github.com/ipv6rslimited/cloudseeder

    [2] https://ipv6.rs

  • ansible-lint

    Discontinued. Best practices checker for Ansible [Moved to: https://github.com/ansible-community/ansible-lint] (by ansible)

  • 6. Probably something else

    [0]: https://github.com/ansible/ansible-lint

  • fedora-coreos-tracker

    Issue tracker for Fedora CoreOS

  • I'm no blogger but here's a quick writeup.

    # Setup

    Setup was a process; there was no clicking through a setup UI for this one. I had to set up a web server on a second machine to serve the Ignition config to the primary machine.

    It was a very manual process despite CoreOS's promise of automation. There were many issues like https://github.com/coreos/fedora-coreos-tracker/issues/155 where the things I wanted to configure were just not configurable. I had some well-rehearsed post-setup steps to rename the default user from "core" to my name, set keyboard layout, move system dirs to a set of manually-created btrfs subvolumes, etc.
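
    For the curious, the dance looked roughly like this (the IP, filenames, and target disk are just placeholders, and I'm going from memory):

        # on the second machine: render the Butane config to Ignition and serve it over HTTP
        butane --pretty --strict config.bu > config.ign
        python3 -m http.server 8000

        # on the primary machine, booted from the CoreOS live installer:
        sudo coreos-installer install /dev/sda \
            --ignition-url http://192.168.1.10:8000/config.ign --insecure-ignition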

    # Usage

    The desktop and GUI worked flawlessly. All I had to do was install i3 and lightdm via rpm-ostree. Zero issues, including light 2D gaming like Terraria.
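
    For reference, layering the desktop bits was roughly this (from memory):

        sudo rpm-ostree install i3 lightdm
        sudo systemctl reboot                        # layered packages apply on the next boot
        # after the reboot:
        sudo systemctl set-default graphical.target
        sudo systemctl enable lightdm.service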

    Audio was a pain. My speakers were fine, and my mic worked out of the box in ALSA, but PipeWire didn't detect it for some reason, so I had to write some PipeWire config to add it by hand. Also, I had to learn what ALSA and PipeWire are...
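
    The workaround ended up being something along these lines, dropped into ~/.config/pipewire/pipewire.conf.d/ (I'm paraphrasing from memory; the device path and names are whatever `arecord -l` reports for your card):

        # 99-usb-mic.conf: expose an ALSA capture device that PipeWire didn't pick up
        context.objects = [
            { factory = adapter
              args = {
                  factory.name     = api.alsa.pcm.source
                  node.name        = "usb-mic"
                  node.description = "USB Microphone"
                  media.class      = "Audio/Source"
                  api.alsa.path    = "hw:1,0"
              }
            }
        ]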

    I ran just about everything, including GUI apps, in distrobox/Arch containers. This was very nice: Arch breaks itself during updates somewhat often, and when that happens I can just blow the container away, reinstall from pkglist.txt, and be back in 5 minutes. I get the benefits of Arch (super fast updates) without the downsides (upgrade brittleness). I plan on keeping distrobox even once I leave.
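
    The nuke-and-repave is a couple of commands (container name is arbitrary; pkglist.txt is whatever you exported earlier with `pacman -Qqe`):

        distrobox rm --force arch
        distrobox create --name arch --image docker.io/library/archlinux:latest
        distrobox enter arch -- sudo pacman -Syu --needed - < pkglist.txt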

    # Updates

    I disabled Zincati (the unattended update service) and instead I ran `rpm-ostree upgrade` before my weekly reboots.
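
    Concretely, that was just:

        sudo systemctl disable --now zincati.service
        # then, before the weekly reboot:
        sudo rpm-ostree upgrade
        sudo systemctl reboot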

    This is the reason I'm leaving. It was supposed to be the smoothest part of CoreOS, but those upgrades failed several times in the past year. To CoreOS's credit my system was never unbootable, but when the upgrades failed I had to do surgery with the unfamiliar rpm-ostree and its lower-level ostree to get the system updating again. As of now it's broken yet again and I'm falling behind on updates. I could solve this; I've done it before! But I've had enough. I'm shuffling files to my NAS right now and preparing to hop distros.
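
    For the curious, the "surgery" was usually some combination of the following; your failure mode may differ:

        rpm-ostree status              # inspect booted vs. pending deployments
        sudo rpm-ostree cleanup -p     # throw away a stuck pending deployment
        sudo rpm-ostree rollback       # or boot back into the previous deployment
        sudo rpm-ostree upgrade        # and try again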

  • home-ops

    Wife approved HomeOps driven by Kubernetes and GitOps using Flux

  • This is an incredibly popular take, and this sort of anti-k8s post gets rapidly upvoted almost every time.

    The systemd hate has cooled a bit, but it too functions as a sizable attractor for disdain & accusation hurling. Let's look at one of my favorite excerpts from the article, on systemd:

    > Fleet was glorious. It was what made me decide to actually learn how to use systemd in earnest. Before I had just been a "bloat bad so systemd bad" pleb, but once I really dug into the inner workings I ended up really liking it. Everything being composable units that let you build up to what you want instead of having to be an expert in all the ways shell script messes with you is just such a better place to operate from. Not to mention being able to restart multiple units with the same command, define ulimits, and easily create "oneshot" jobs. If you're a "systemd hater", please actually give it a chance before you decry it as "complicated bad lol". Shit's complicated because life is complicated.
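
    To make the quote concrete, a "oneshot" unit with a ulimit is only about this much (a made-up example; the script path is hypothetical):

        # /etc/systemd/system/backup-home.service
        [Unit]
        Description=Nightly backup of /home to the NAS

        [Service]
        Type=oneshot
        LimitNOFILE=65536
        ExecStart=/usr/local/bin/backup-home.sh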

    Shit's complicated because life is complicated. In both cases, having encompassing ways to compose systems has created a stable base, from starting point all the way up to expert/advanced use, that has allowed huge communities to bloom. Rather than every person being out there on their own, the same tools work well for all users, and the same tools are practiced with the same conventions.

    An overarching system is what makes commonality possible. You could walk up to my computer and run 'systemctl cat' on any service and quickly see how it was set up (especially on my computers, which make heavy use of environment variables where possible); before, every distro, and to a sizable degree every single program, was launched & configured differently, which required plucking through init scripts to see how, or whether, they had been modified. But everything has a well-defined shape and form in systemd. A huge variety of capabilities for controlling launch characteristics, process isolation, ulimits, user/group privileges, and special tmp directories is all provided out of the box, in a way that means there's one man page to go to, and that's instantly visible, so we don't have to go spelunking.
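
    That transparency is two commands away on any systemd box (pick any unit you like; sshd is just an example):

        systemctl cat sshd.service                     # the unit file plus any drop-in overrides
        systemctl show sshd.service -p User -p LimitNOFILE -p Environment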

    The Cloud Native paradigm that Kubernetes practices is a similar work of revelation, offering similarly batteries-included capabilities. Is it confusing having pods, replicasets, and services? Yes, perhaps at first. But it's unparalleled that one just POSTs the resources one wants to an API server and lets the system start & keep them running; this autonomic behavior is incredibly freeing, leaving control loops to do what humans have had to shepherd & maintain themselves for decades; a paradigm break that turns human intent directly into consistently running managed systems.
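
    The whole "declare it and let the control loops keep it running" flow really is this small (the image and names are only examples):

        # deploy.yaml: a minimal desired state
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: whoami
        spec:
          replicas: 2
          selector:
            matchLabels: { app: whoami }
          template:
            metadata:
              labels: { app: whoami }
            spec:
              containers:
                - name: whoami
                  image: traefik/whoami:latest
                  ports:
                    - containerPort: 80

    `kubectl apply -f deploy.yaml` POSTs that to the API server, and from then on the deployment controller keeps two replicas running without any shepherding.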

    The many abstractions/resource types are warranted; they are separate composable pieces that allow so much. Need to serve on a second port? Easy. Why are there so many different types? Because computers are complex, and because this is a model of what really is. Maybe we can reshuffle to get different views, but most of that complexity will need to stay around, perhaps in refactored shapes.
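
    The "second port" case really is just one more list entry in the Service (again, names are examples):

        apiVersion: v1
        kind: Service
        metadata:
          name: whoami
        spec:
          selector:
            app: whoami
          ports:
            - name: http
              port: 80
              targetPort: 80
            - name: metrics          # the second port: one more entry, nothing else changes
              port: 9100
              targetPort: 9100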

    And like systemd, Kubernetes, with its desired-state management and operators, creates a highly visible, highly explorable system; any practitioner can walk up to any cluster, start gleaning tons of information from it, and easily see how it runs.
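
    Getting oriented on a cluster you've never seen is the same handful of commands everywhere (placeholders in angle brackets):

        kubectl get nodes,deployments,pods --all-namespaces
        kubectl describe pod <some-pod> -n <namespace>
        kubectl get events --sort-by=.metadata.creationTimestamp
        kubectl logs <some-pod> -n <namespace>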

    It's a wrong-headed view to think that simpler is better. We should start with essential complexity & figure out simultaneously a) how to leverage it and b) how to cut direct paths through our complex, capable systems. We gain more by permitting and enabling than by pruning. We gain more by being capable of working at both big and small scales than by winnowing down or down-scoping our use cases.

    The proof is in the pudding. Today there are hundreds of guides one can go through in an hour to set up & get started running some services on k3s. Today there are colossal communities of homelab operators sharing helm charts & resources (ex: https://github.com/onedr0p/home-ops), the likes of which vastly outclass where we have stood before.

    Being afraid of & shying away from complexity is a natural response, but I want people to show that they see the many underlying simplicities & conceptions we have gotten from kube that do make things vastly simpler than the wild-west, untamed world we came from, where there weren't unified patterns of API servers & operators handling different resources, all alike & consistent. To conquer complexity you must understand it, and I think very few of those with a true view of Kubernetes complexity have the sense that there are massive opportunities for better, for simpler. To me, the mission, the goal, the plan should be to better manage & better package Kubernetes, to better onboard & help humans through it, to try to walk people into what these abstractions are for & shine lights on how they all mirror real things computers need to be doing.
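
    As a case in point, the core of one of those hour-long k3s guides boils down to something like this (the chart is only an example; any Helm chart works the same way):

        # single-node k3s
        curl -sfL https://get.k3s.io | sh -
        sudo k3s kubectl get nodes

        # install something from a community chart
        helm repo add bitnami https://charts.bitnami.com/bitnami
        helm install web bitnami/nginx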

    (Technical note: Kubernetes typically runs zero VMs; it runs containers. The notable exceptions are snap-in OCI runtimes like Firecracker and Kata, which do host pods as VMs. Kube relies on containers, which are far more optimizable; works like PuzzleFS and ComposeFS CSIs can snap in to allow vastly more memory- and storage-efficient filesystems, to boot. So many wonderful pluggable/snappable layers; CNI for networking too.)
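
    For instance, putting a pod onto one of those VM-backed runtimes is just a named RuntimeClass plus one field on the pod (assuming the runtime is already installed and wired into the container runtime; names are illustrative):

        apiVersion: node.k8s.io/v1
        kind: RuntimeClass
        metadata:
          name: kata
        handler: kata            # must match a handler configured in containerd/CRI-O
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: isolated-pod
        spec:
          runtimeClassName: kata
          containers:
            - name: app
              image: traefik/whoami:latest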

  • cluster-template

    A template for deploying a Talos Kubernetes cluster including Flux for GitOps

  • For populating a homelab Kubernetes cluster, onedr0p has a very nice Flux template: https://github.com/onedr0p/cluster-template


Related posts

  • How do you manage your deployments?

    4 projects | /r/selfhosted | 16 Dec 2022
  • Almost 1yr in the making, finally got my Kubernetes DevOps/IaC/CD set up going, fully self-hosted cloud equivalent. GLEE!!! (AMA?)

    9 projects | /r/selfhosted | 9 Aug 2022
  • I must announce the immediate end of service of SSLPing

    5 projects | news.ycombinator.com | 11 Apr 2022
  • Keeping track of the latest releases of Applications on Kubernetes

    2 projects | /r/kubernetes | 8 Apr 2022
  • Is it better to have multiple small home servers?

    3 projects | /r/selfhosted | 18 Dec 2021