Basic Structure and Common Terms
The following definitions and clarifications will make understanding the rest of this document much easier.
Firstly, for my own pedantry: all discussed systems fall into two categories, Windows and Linux. Other platforms do appear in the services being used, such as macOS and FreeBSD, and each has some behaviours unique to it. However, the majority of these differences are sufficiently abstracted away from the user that both macOS and FreeBSD can be classified under the "Linux" moniker. When a difference between any of these platforms is significant to a user and/or administrator, it will be noted.
`snowglobe`
: The primary server where services are hosted.

`black-ice`
: Offsite storage backup.

- ZFS: The Zettabyte File System; this is the backbone of the storage management system used by `snowglobe`. It manages drive redundancy, so-called "snapshots", and efficient data replication to `black-ice`.
- Docker: Containerisation platform used to isolate services and software running on `snowglobe`.
- Docker Compose: YAML-based system for managing multiple Docker containers.
- NFS: Network File System; similar to Samba, but only compatible with Linux devices.
- Samba/Windows Network Share Drive: Network storage; compatible with both Linux and Windows devices. Documentation online may also refer to this as SMB or CIFS; SMB (Server Message Block) is the underlying communication protocol for Samba, and CIFS (Common Internet File System) was an attempted rename by Microsoft.
[this section subject to expansion upon request]
Hardware Description and Management
Front-End Management
As both an end user and an administrator, you will most often interact with services through a web interface. Directing traffic to the correct service can be done in several ways; a common, easy-to-implement method is allocating a unique port number to each service. While this approach is kept for services that do not speak HTTP(S), as those protocols tend to have sensible default ports, remembering many ports and their allocations is difficult for the layman as well as the administrator.
An alternative to this structure is a so-called reverse-proxy service, which accepts requests for many endpoints on a single port and forwards them to the appropriate service based on header information. In previous iterations of the server, this was done through Nginx Proxy Manager. While it exposes a useful GUI for managing proxies and endpoints, the underlying configuration mechanism was somewhat obfuscated, making backup and recovery situations difficult to impossible. Additionally, development on the project is performed primarily by a single individual, making updates slow and the future of the project somewhat uncertain.
The current reverse-proxy service in use by the server is called Caddy. All reverse-proxy information is saved in a single configuration file, called a Caddyfile. Additionally, in 2020 the project was acquired by Ardan Labs (a web API and Software-as-a-Service company), and the original developer was hired to continue his work. This means that the software will continue to be developed for some time, and has clear financial backing to allow for its development. The base project does not contain a web GUI, however there are several projects that build a GUI on top of Caddy, linked in this issue on the Caddy GitHub page, last updated in 2024.
Web Endpoint Management
As discussed above, all reverse-proxy information is saved in a single configuration file, called a Caddyfile. Full documentation on the Caddyfile syntax can be found here; however, adding and removing entries is relatively simple. Simply add or remove the relevant section, using the example syntax already present in the file, then run the following command to update the configuration loaded by Caddy without restarting the container:
sudo docker compose exec -w /etc/caddy caddy caddy reload
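For illustration, a new entry typically looks like the following sketch; the hostname and upstream container/port here are placeholders rather than values taken from the real Caddyfile:

```
# Hypothetical entry: route a new subdomain to a container listening on port 8080
newservice.blizzard.systems {
	reverse_proxy newservice-container:8080
}
```

After saving the change, the reload command above applies it without restarting the Caddy container.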
Currently Bound Domains
All URLs listed below allow HTTPS communication, and it is recommended to use `https://` when possible.
Caddy handles certificate generation and renewal in the background by default, without intervention by the user.
| URL | Audience | Service name | Service description |
|---|---|---|---|
| audiobook.blizzard.systems | User-facing with admin controls | Audiobookshelf | Audiobook, ebook, and podcast bookshelf |
| cockpit.blizzard.systems | Admin-facing | Cockpit | Server management interface |
| subtitles.blizzard.systems | Admin-facing | Bazarr | Subtitle management for movies and TV |
| ci.blizzard.systems | User-facing | WoodpeckerCI | Continuous Integration and Deployment for Forgejo |
| home.blizzard.systems | User-facing | Flame | Landing page and index of publicly accessible services |
| cookbook.blizzard.systems | User-facing | Baker's Menagerie | Self-hosted and Menagerie-developed cookbook |
| git.blizzard.systems | User-facing with admin controls | Forgejo | Git code forge |
| jellyfin.blizzard.systems | User-facing with admin controls | Jellyfin | Media streaming platform |
| feed.blizzard.systems | User-facing with admin controls | Miniflux | RSS feed reader and archiver |
| qbt.blizzard.systems | Admin-facing | qBittorrent | Download client |
| indexers.blizzard.systems | Admin-facing | Prowlarr | Indexer management for the *arr services |
| movies.blizzard.systems | User-facing | Radarr | Movie search |
| tv.blizzard.systems | User-facing | Sonarr | TV search |
| tools.blizzard.systems | User-facing | ITTools | Software development helper tools |
Back-End Management
`snowglobe` Update Scheme

`snowglobe` runs the Debian Linux distribution.
This distro follows the point release design model, which means that the distribution provides packages and guarantees their usability, with a roughly stable API, for the lifetime of the release.
Based on previous release history, major Debian releases are cut every 2 years and supported for 3 years (or 5 years counting the period in which only security patches are provided).
This makes it an ideal distribution for low-maintenance servers such as `snowglobe`.
Individual packages, where possible, are updated live in the background with the help of the `unattended-upgrades` package.
Using the preset defaults, this checks for package upgrades every 12 hours, and installs eligible packages once a day.
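For reference, the relevant defaults live in `/etc/apt/apt.conf.d/20auto-upgrades`; the following is a minimal sketch of the stock contents rather than a record of `snowglobe`'s exact configuration:

```
// /etc/apt/apt.conf.d/20auto-upgrades
// "1" enables the daily run; the actual timing is driven by the
// apt-daily and apt-daily-upgrade systemd timers.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```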
However, some packages, such as the kernel, ZFS, and Docker, require a reboot before the new version is actually in use.
ZFS and Docker are both services that integrate very tightly with the kernel, and as such are loaded early in the boot process.
Unloading these modules to load the new versions is a manually intensive process, and is thus not recommended by this document.
Similarly, the kernel does support live patching; however, Debian does not have a supported model for using this functionality.
There are other distributions with this support, however most are Enterprise Linux distributions such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise (SLE), which require a subscription (in return for additional security and support guarantees).
To my knowledge, the only non-enterprise Linux distribution that supports kernel live patching is Ubuntu, through its subscription-based Ubuntu Pro service.
Debian's packages are updated in the following situations:

- Security updates provided by the Debian Security Team, courtesy of the `stable-security` package repository
- Maintainer-provided updates with weak stability guarantees, courtesy of the `stable-backports` package repository
- Maintainer-provided updates with strong stability guarantees, courtesy of point releases in the default `stable` package repository
- Maintainer-proposed updates which will eventually be added to the aforementioned point releases, courtesy of the `stable-updates` package repository
In the Debian 12 install of `snowglobe`, all of these package repositories have been enabled, as all are enabled by default.
However, as `stable-updates` is not necessary, it will be disabled.
Additionally, `stable-backports` was particularly useful for installing a supported version of the kernel, and its related ZFS package.
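For reference, the corresponding APT sources on a Debian 12 (bookworm) system look roughly like the sketch below; the exact mirrors and components on `snowglobe` may differ, so treat this as illustrative rather than a copy of the real file:

```
# /etc/apt/sources.list (illustrative)
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free-firmware
# bookworm-updates corresponds to stable-updates; removing this line disables it
deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
deb http://deb.debian.org/debian bookworm-backports main contrib non-free-firmware
```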
`snowglobe` contains an Intel Arc A380 GPU, useful for its highly efficient transcoding engine and AV1 support.
However, as this hardware is relatively new, it requires a Linux kernel version of at least 6.2, while Debian 12 uses kernel version 6.1.
This was resolved by installing a `stable-backports` kernel.
With the release of trixie, a backports kernel is no longer necessary.
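The backports install itself is a single APT transaction; the following is a sketch of what that looks like on Debian 12, assuming the `bookworm-backports` repository shown above is enabled (the exact package selection here is illustrative):

```
# Pull the newer kernel and the matching ZFS packages from backports
sudo apt update
sudo apt install -t bookworm-backports linux-image-amd64 zfs-dkms zfsutils-linux
```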
Point releases are cut roughly every 2 months. Packages in these point releases may need to be manually upgraded. Additionally, Docker is distributed from its own repository, and as such updates are pushed at a rate determined by Docker Inc. However, these updates are not strictly necessary, and can be applied along with the aforementioned point releases. Tailscale (a mesh VPN used for emergency out-of-network management activities) is distributed in a similar manner, and can be updated on the same schedule.
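Applying a point release (and, with it, any pending Docker and Tailscale repository updates) boils down to a standard manual upgrade; a sketch, with the reboot only needed when the kernel, ZFS, or Docker were among the upgraded packages:

```
# Refresh all configured repositories, apply pending upgrades, then reboot if required
sudo apt update
sudo apt full-upgrade
sudo reboot
```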
Software Dependencies for Behaviour Replication
Below is a list of software that is directly used on the server, along with the reasons for its use.
Note that the chart below does not include `lib*` packages, for readability purposes.
The `apt` package manager should install all required libraries based on the packages listed below, but note that some libraries may need to be manually installed.
| Package Name | Use case |
|---|---|
| `amd64-microcode` | Processor-level firmware updates |
| `apcupsd` | APC Uninterruptible Power Supply daemon; monitors power usage and performs safe shutdown |
| `apt-listchanges` | Notifies of major changes in packages when updating |
| `apt-listbugs` | Notifies when packages have potential bugs |
| `btop` | TUI resource monitor |
| `btrfs-progs` | Management utilities for BTRFS (the server's root file system) |
| `cockpit` | Web management console |
| `docker-ce` | Docker Community Edition |
| `docker-compose-plugin` | Support for Docker Compose |
| `firmware-linux-free` | Self-explanatory |
| `firmware-linux-nonfree` | Self-explanatory |
| `firmware-misc-nonfree` | Self-explanatory |
| `git` | Self-explanatory |
| `handbrake` | Manual transcoding requests; used with an X11-forwarding SSH session for the GUI |
| `intel-gpu-tools` | Self-explanatory |
| `intel-media-va-driver-non-free` | Self-explanatory |
| `lynx` | Terminal web browser |
| `man-db` | Database of manpages |
| `manpages` | Manual pages for installed packages |
| `mesa-vdpau-drivers` | Additional generic video card drivers |
| `mesa-vulkan-drivers` | Additional generic video card drivers |
| `ncdu` | TUI disk usage analyzer |
| `nmap` | Network mapping utilities; useful for port probing and determining whether a given hostname is reachable |
| `nvme-cli` | NVMe drive firmware interface; similar to smartmontools |
| `rsync` | File synchronisation and transfer utility |
| `tailscale` | Mesh VPN for remote access |
| `tmux` | Terminal multiplexer; useful for starting commands that continue running after the SSH session detaches or closes |
| `tree` | Terminal visualiser of directory trees |
| `zfs-dkms` | ZFS Dynamic Kernel Module Support |
| `zfsutils-linux` | Utilities for managing ZFS file systems |
Docker Compose Overview
Docker, as you may be aware, is a platform used for containerisation of apps, allowing for separation between programs, a reduced attack surface for a given system, and easier dependency management.
Docker has a plugin called Docker Compose, which allows the systems administrator to write container structure and configuration information into a standardised YAML file, which can optionally be loaded on boot, depending on the administrator's configuration.
This is used to describe the vast majority of the services hosted on `snowglobe` today.
These files are in the git repository that this wiki is attached to.
Below are some commonly used directives and structures, and their meanings.
For more information on Docker Compose, see the official documentation.
Note that there should be only one instance of each directive in each section.
For example, in the section below, a real `docker-compose.yml` should have only one `build` directive in the `service_name` section.
Multiple instances are shown here to give examples of possible syntaxes.
```
# Required base directive for defining containers
services:
  # This can be anything; container names are declared further down
  service_name:
    # Oh hey, the container name!
    container_name:
    # A container build may be written simply as a build directive, and a URL to get the Dockerfile from...
    build: https://git.blizzard.systems/github-mirrors/audiobookshelf.git
    # ...or with a more complete description, pointing to a given Dockerfile
    build:
      context: https://git.blizzard.systems/github-mirrors/immich.git
      dockerfile: server/Dockerfile
    # The build argument can also include a tag for a git repo, such as a version tag
    build: https://git.blizzard.systems/github-mirrors/audiobookshelf.git#v2.27.0
    # Build directive can be used in conjunction with, or replaced by, an image directive...
    image: yacht7/openvpn-client
    # ...which can also include version tags...
    image: linuxserver/qbittorrent:latest
    # ...and where to get the docker container from
    image: lscr.io/linuxserver/prowlarr:latest
    # These are environment variables; usually used for configuration.
    environment:
      # Here are some common examples; all of these must be supported by the given container.
      # Check your container's documentation for details.
      # Used to set the timezone of the container
      - TZ=America/Denver
      # Used to set the user and group ID the container is running as; useful for resolving various possible permissions issues
      - PUID=1001
      - PGID=1001
    # These are used to define real directories, and their corresponding mount points in a given container
    volumes:
      # This can be relative...
      - ./sonar/config:/config
      # ... or absolute...
      - /mnt/glacier/TV:/tv
      # ... and include custom permissions (Read-only used here)
      - /etc/localtime:/etc/localtime:ro
    # When to start and restart containers
    # In case of a container crash, depending on what this is set to, it may attempt to restart the container
    restart: unless-stopped
    # Wait for the listed container(s) to start before starting this container
    depends_on:
      # Note that this value corresponds to the service name, rather than the container name
      - service_name
```
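For day-to-day management, these files are driven with a handful of commands; a brief sketch, run from the directory containing the relevant `docker-compose.yml`:

```
# Create (or recreate) and start every container defined in the file, in the background
sudo docker compose up -d
# Fetch newer images and rebuild locally built containers, then apply the changes
sudo docker compose pull
sudo docker compose build
sudo docker compose up -d
# Follow the logs of a single service (the service name, not the container name)
sudo docker compose logs -f service_name
```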
Storage Management
SATA, SAS, and NVMe: Hardware Storage Management
Physical storage drives are separated into several categories. Firstly, they can be separated into Solid State Drives (SSDs) and Hard Disk Drives (HDDs; also colloquially referred to as "spinning rust"). SSDs are significantly faster than HDDs in both read and write speeds, often by orders of magnitude. HDDs, by contrast, have a lower cost per terabyte than SSDs.
Secondly, they can be separated by physical form factor. Hard drives typically come in 2.5" and 3.5" standard widths. While 2.5" drives are available, they are not recommended, as they are not as data-dense as 3.5" drives. 2.5" HDDs were originally used primarily in laptops, and have since been superseded by low-cost SSDs. These do still have use cases, however for long-term storage 3.5" hard drives are anecdotally better for stability and reliability. Consumer SSDs, by contrast, come in 2.5" and M.2 form factors. Enterprise SSDs also come in 3.5" varieties, however they are cost-prohibitive for consumer and prosumer use cases. 2.5" SSDs are generally slower than M.2 SSDs, but as will be discussed in a moment, this is not always the case. M.2 SSDs have 3 major physical interfaces: B-key, M-key, and B+M-key. M.2 SSDs also come in several dimensions; 2230 (22mm x 30mm), 2242 (22mm x 42mm), 2280 (22mm x 80mm), and 22110 (22mm x 110mm) are the most common, with 2280 being by far the most prevalent. M.2 SSDs are almost always mounted directly onto the motherboard of a given computer.
Thirdly, they can be categorised by the communication standard with which they talk to the computer. Hard drives come in either SATA or SAS communication standards. SATA is the consumer-spec communication standard, while SAS is the server-grade standard. SAS drives can communicate faster than SATA drives, but are more expensive. Consumer SSDs come in SATA and NVMe. SAS SSDs exist, although they are very hard to find. All 2.5" SSDs use the SATA communication standard. M.2 SSDs can come in either SATA or NVMe. Some sources claim that the keying of an M.2 SSD determines the communication standard; however, the keys overlap in their supported communication standards. The best way to determine the communication standard of a given drive is to look at either its datasheet or its printed label. The only way to consistently determine the supported communication standards of a motherboard's M.2 sockets is to consult the manual.
`snowglobe` uses SATA HDDs, SATA 2.5" SSDs, SATA M.2 SSDs, and NVMe M.2 SSDs.
As the motherboard has a limited number of SATA ports, an add-in Host Bus Adapter (HBA) card is used.
This card supports both SATA and SAS drives; however, only SATA cables are used.
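To see which bus each installed drive is actually using, a quick check from a shell is usually enough; the column selection below is just a convenient subset:

```
# List block devices with their transport (sata, nvme, usb, ...), size, and model
lsblk -o NAME,TRAN,SIZE,MODEL
```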
Aside: RAID and HBA cards
HBA cards are commonly confused with RAID cards, but they are very different. HBA cards expose individual devices transparently to the host (the computer they are attached to), while RAID cards expose all of their attached drives as a single disk.
RAID stands for "Redundant Array of Inexpensive Disks", and is designed to spread information across multiple storage devices such that if one device fails, the information is still accessible. RAID comes in three varieties: hardware, software, and firmware.
Hardware RAID cards are somewhat volatile and prone to data loss; further discussions on RAID controllers can be found on the Level1Techs forum, on reddit, and many other locations. Parity calculations are done on a specialised IC on the card, meaning that if the RAID card itself fails, the data on the array is at risk and may be unrecoverable, even with a replacement card. Firmware RAID is done through the motherboard BIOS and presents many of the same problems as hardware RAID; it is functionally the same thing, but without the add-in card. Software RAID removes this single point of failure, at the cost of restricting which operating systems can access the drives. There are many software RAID solutions; the one used by this document is ZFS, as it has some of the clearest documentation, the longest lifetime, and wide support across Linux distributions.
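To make the ZFS option concrete, below is a minimal sketch of creating and checking a redundant pool, then replicating a snapshot offsite. The pool name `glacier` is borrowed from the mount point seen in the Compose examples, and the device paths, dataset names, RAID-Z level, and remote pool name are placeholders rather than a record of `snowglobe`'s actual layout:

```
# Create a pool named "glacier" with double-parity redundancy across four disks
sudo zpool create glacier raidz2 \
    /dev/disk/by-id/ata-EXAMPLE-DISK-1 /dev/disk/by-id/ata-EXAMPLE-DISK-2 \
    /dev/disk/by-id/ata-EXAMPLE-DISK-3 /dev/disk/by-id/ata-EXAMPLE-DISK-4
# Check pool health, redundancy layout, and any ongoing scrub or resilver
zpool status glacier
# Take a point-in-time snapshot, then replicate it to the offsite backup host
sudo zfs snapshot glacier/TV@2024-01-01
sudo zfs send glacier/TV@2024-01-01 | ssh black-ice zfs receive backup/TV
```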