Project Anemos updates

I’ve been pretty quiet about my project Anemos lately, but there has been steady work behind the scenes. In fact, I have now reached two milestones at once, so it seems like a good time to write about it again.

What is this again?

In the likely case that you are not familiar with Anemos: it is a set of tools for efficiently managing servers according to the Infrastructure as Code paradigm. Its goals are:

  • to work in any environment (cloud/bare metal/VM…)
  • to not require any additional infrastructure
  • to do all the work on the target machine (which usually has high bandwidth)

This is probably best explained with some history. I manage a few servers, but I often do not make changes to them for prolonged periods of time. Hence, I tend to forget stuff. If, for example, something breaks after a while, I may not remember the last changes I made to its configuration. The best recipe against this, in my opinion, is Infrastructure as Code: you have a git log, a source of truth, and a certain reproducibility that helps tremendously in such situations.

To this end, I created makeimg. I started managing some of my servers with it several years ago, when it could only create Arch Linux images. This was a great improvement, but it came with some caveats. First, it was very inefficient in terms of bandwidth: I would build a disk image on my laptop (i.e. download all the packages), only to then upload the entire disk image to the server to be deployed. Disk images compress reasonably well, but uploading one was still something you wouldn’t do unless you were someplace with decent connectivity. The second caveat was that I had to rely on machine-specific mechanisms to deploy the disk image. More specifically, for the two VM hosts that I initially used it for, I had access to the hypervisor. But I was unable to extend this practice to my two cloud VMs, where I had no such access. Those were for some time managed by a set of shell scripts that made calls to the cloud provider’s proprietary API to re-install FreeBSD (the best OS the provider had on offer) and then set up services as desired via SSH.

The unhappiness with that situation is what gave birth to Anemos: I wanted to apply the same tools and practices everywhere, and at the same time solve the caveats mentioned above. Well, lo and behold, there is progress…

Milestone 1 - dog food all around

I am proud to announce that I am now managing all four of my private servers with Anemos! You can find the source code for all of them in the “Demo projects” section of the Anemos documentation. There are no packages for Anemos yet, so some custom “installing” is performed, but other than that the usage is pretty much as intended. And it solved many issues I had before:

  • It works on my cloud VMs, without access to a hypervisor
  • It allows me to install Alpine on my cloud VMs, even though the provider does not support it
  • I no longer need to upload disk images anywhere, making the whole process much faster and less bandwidth-intensive

You can get a glimpse of what it looks like by watching one of the demo videos!

Milestone 2 - first hardware inventory prototype

While working without any additional infrastructure was an explicit goal, not working with additional infrastructure was not a goal 😀. Or, put differently, I intend for Anemos to be able to integrate into larger infrastructure setups. One aspect that becomes important and useful as you shift to larger setups is a hardware inventory. Depending on the feature set, these are often also called asset management systems. At my previous job, we used Collins (plus some custom tooling). I really liked the visual presentation that Collins provided; it was powerful, extensible, and had an API. However, the only CLI tools available (at that time?) were pretty crappy. It was also written in Scala, effectively discouraging me from contributing (I will never go near sbt again). It also had a bunch of other quirks, e.g. with regard to operations.

My ambition for Anemos is not only to provide integration points for use with existing asset management systems, but also to provide its very own, hopefully improving some things in the process.

As such, I am also pleased to announce the existence of Daphnis. It is a - quite experimental - adventure into the domain of asset management systems. Its appearance is heavily based on Collins. Its functionality is very limited at the moment, but it’s enough to start playing with it.

In fact, I have set up a demo instance (no uptime guarantees). Some of my servers update their information there with every Anemos deploy, so unless it’s broken you should be able to see some data in there. Unfortunately, my servers are very simple VMs, so the data is much less interesting than it would be for real hardware, but it does give a rough idea.
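To make this a bit more concrete, here is a minimal sketch in Python of what such an update could look like. It is purely illustrative: the endpoint URL, the HTTP method, and the exact set of facts reported are assumptions on my part, not the actual Daphnis API.

    import json
    import os
    import platform
    import urllib.request

    # Hypothetical endpoint - the real Daphnis API may look entirely different.
    DAPHNIS_URL = "https://daphnis.example.org/api/assets/report"

    def collect_facts() -> dict:
        """Gather a few basic facts, roughly what a deploy might report."""
        with open("/proc/meminfo") as f:
            mem_kib = int(f.readline().split()[1])  # first line: "MemTotal: <n> kB"
        return {
            "hostname": platform.node(),
            "kernel": platform.release(),
            "arch": platform.machine(),
            "cpus": os.cpu_count(),
            "memory_mib": mem_kib // 1024,
        }

    def report(facts: dict) -> None:
        req = urllib.request.Request(
            DAPHNIS_URL,
            data=json.dumps(facts).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",  # assumption: idempotent updates keyed by hostname
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()

    if __name__ == "__main__":
        report(collect_facts())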

What’s next?

Daphnis is certainly not very useful in its current form, but it will serve as a playground for various ideas. The Anemos components are certainly useful enough that I’m considering tagging some pre-release versions or such. Several areas still need some work, though, and I have plenty of feature ideas.

I think one piece that’s really missing right now is an Anemos “testing” initramfs - a version of the real one that you can just boot into and poke around. You can sort of do this yourself by providing an appropriate payload, but doing this should ideally be as simple as running a command. However, there are some challenges to this: on certain systems, the initramfs is booted by overwriting an existing initramfs - not exactly what you want if you just plan on poking around rather than installing a new system.
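For illustration, a do-it-yourself “poke around” payload could be as small as the following Python sketch - under the (entirely my own) assumption that a payload is simply an executable the initramfs runs on the target:

    #!/usr/bin/env python3
    # Hypothetical "poke around" payload: instead of installing anything,
    # hand control to an interactive shell so the machine can be inspected
    # and rebooted manually afterwards.
    import os

    os.execv("/bin/sh", ["/bin/sh", "-i"])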

Other than that, supporting more target operating systems in makeimg, as well as providing more makeimg examples, would probably be beneficial.

Of course, some testing in more diverse environments is also needed. So if the entire concept sounds interesting to you, why not reach out to the mailing list? As long as you have some means of recovery for your server(s) - vendor-supplied remote re-imaging, PXE boot, hypervisor or physical access - there is not much to lose, right? 😅

As always, feel free to direct any comments or questions to my public inbox.