Basic Structure and Common Terms
The following definitions and clarifications will make understanding the rest of this document much easier.
Firstly, for my own pedantry: all discussed systems will fall into two categories, Windows and Linux. Other platforms are present in the services being used, such as macOS and FreeBSD, and each has some behaviours unique to it. However, the majority of these differences are sufficiently abstracted away from the user that both macOS and FreeBSD can be classified under the "Linux" moniker. Where differences between any of these platforms are significant to a user and/or administrator, they will be noted.
- snowglobe: The primary server where services are hosted.
- black-ice: Offsite storage backup.
- ZFS: Zettabyte File System; this is the backbone of the storage management system used by snowglobe. It manages drive redundancy, so-called "snapshots", and efficient data replication to black-ice.
- Docker: Containerisation platform used to isolate services and software running on snowglobe.
- Docker Compose: YAML-based system for managing multiple Docker containers.
- NFS: Network File System; similar to Samba, but only compatible with Linux devices.
- Samba/Windows Network Share Drive: Network storage; compatible with both Linux and Windows devices. Documentation online may also refer to this as SMB or CIFS: SMB (Server Message Block) is the underlying communication protocol for Samba, while CIFS (Common Internet File System) was an attempted rename by Microsoft.
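As an aside, the practical difference between the two share types is mostly in how a client mounts them. The hypothetical /etc/fstab entries below sketch this for a Linux client; the server name, export paths, mount points, and credentials file are assumptions for illustration only:

```
# NFS export from the server (Linux/Unix clients only)
snowglobe:/mnt/glacier  /mnt/glacier  nfs   defaults                               0  0
# The same storage over Samba/CIFS (Windows clients would see \\snowglobe\glacier)
//snowglobe/glacier     /mnt/glacier  cifs  credentials=/etc/samba/creds,uid=1001  0  0
```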
[this section subject to expansion upon request]
Hardware Description and Management
Front-End Management
The interface you will most often use to interact with services, both as an end user and as an administrator, is a web interface. Delegating communication between services can be done in several ways; a common, easy-to-implement method is allocating a port number to each unique service. While this is done for non-TCP/UDP communication protocols, as they tend to have sensible defaults, remembering many ports and their allocations is difficult for the layman and the administrator alike.
An alternative to this structure is a so-called reverse-proxy service, which accepts messages from many endpoints and redirects them to the appropriate services depending on header information. The current reverse-proxy service in use by the server is called Caddy. All reverse-proxy information is saved in a single configuration file, called a Caddyfile. Additionally, in 2020 the project was purchased by Ardan Labs (a web API and Software-as-a-Service company), and the original developer was hired to continue his work. This means that the software will continue to be developed for some time, and has clear financial backing to allow for its development. The base project does not contain a web GUI; however, there are several projects that build a GUI on top of Caddy, linked in this issue on the Caddy GitHub page, last updated in 2024. Several of these have been tried on the server, with mixed results. The intent with the server is to only use long-term viable solutions, so a GUI is not installed or available at this time.
Web Endpoint Management
As discussed above, all reverse-proxy information is saved in a single configuration file, called a Caddyfile. Full documentation on the Caddyfile syntax can be found here, however adding and removing values is relatively simple. Simply add or remove the relevant section, using the example syntax already present in the file, then run the following command to update the configuration loaded by Caddy without restarting the container:
```shell
sudo docker compose exec -w /etc/caddy caddy caddy reload
```
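For illustration, a Caddyfile entry routing one subdomain to a backend service generally looks like the sketch below. The upstream container name and port here are assumptions; copy the syntax already present in the real Caddyfile rather than this example:

```
movies.blizzard.systems {
	# Forward matching requests to the backend container by name and port
	reverse_proxy radarr:7878
}
```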
Remote Management of Server
Server motherboards often come with an onboard Baseboard Management Controller, or BMC. This allows the user to remotely access not only the computer, but motherboard-level functionality, including but not limited to:
- VNC into the Server itself
- Remote power on, power off, and reset
- Physical fault logging (i.e. partial power supply failure or RAM failure)
- Thermal protection and intervention: if a component on the motherboard has a very high temperature for a long period of time, the BMC will power off the computer and log the error internally.
The BMC often has a dedicated Ethernet port on the motherboard, labeled with either "MGMT" or the brand-name variant. This should never be directly connected to the full internet, due to a variety of common and dangerous vulnerabilities. However, it can be (and in our case, is) connected to the local intranet without issues.
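As a sketch of what BMC access can look like from the command line, assuming the BMC speaks standard IPMI over LAN, the ipmitool utility can query power state and the hardware fault log. The address and credentials below are placeholders, not the real values:

```shell
# Query the chassis power state remotely (IPMI over LAN)
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power status
# List the BMC's System Event Log (the physical fault logging mentioned above)
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme sel list
```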
Currently Bound Domains
All URLs listed below allow HTTPS communication, and it is recommended to use https:// when possible.
Caddy by default handles certificate generation and updating in the background, without intervention by the user.
| URL | Audience | Service name | Service description |
|---|---|---|---|
| movies.blizzard.systems | User-facing | Radarr | Movie search |
| tv.blizzard.systems | User-facing | Sonarr | TV search |
| subtitles.blizzard.systems | Admin-facing | Bazarr | Subtitle management of movies and TV |
| indexers.blizzard.systems | Admin-facing | Prowlarr | *arr indexers |
| home.blizzard.systems | User-facing | Flame | Landing page and index of publicly accessible services |
| cookbook.blizzard.systems | User-facing | Baker's Menagerie | Self-hosted and Menagerie-developed cookbook |
| tools.blizzard.systems | User-facing | ITTools | Software Dev helper tools |
| ci.blizzard.systems | User-facing | WoodpeckerCI | Continuous Integration and Deployment for Forgejo |
| git.blizzard.systems | User-facing with admin controls | Forgejo | Git code forge |
| audiobook.blizzard.systems | User-facing with admin controls | Audiobookshelf | Audiobook, ebook, and podcast bookshelf |
| jellyfin.blizzard.systems | User-facing with admin controls | Jellyfin | Media streaming platform |
| feed.blizzard.systems | User-facing with admin controls | Miniflux | RSS feed reader and archiver |
| qbt.blizzard.systems | Admin-facing | QBittorrent | Download client |
| cockpit.blizzard.systems | Admin-facing | Cockpit | Server management interface |
Back-End Management
snowglobe
Update Scheme
snowglobe runs the Debian Linux distribution.
This distro follows the point-release design model, which means that the distribution provides packages and guarantees their usability with a roughly stable API for the lifetime of the release.
Based on previous release information, major Debian releases are cut every 2 years and supported for 3 years (or 5 years counting the security-patches-only period).
As a potential downside to Debian, while the kernel does support live patching, Debian does not have a supported model for using this functionality.
There are other distributions with this support, however most are Enterprise Linux distributions such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise (SLE), which require a subscription (in return for additional security and support guarantees).
To my knowledge, the only non-enterprise Linux distribution that supports kernel live patching is Ubuntu, through its own subscription-based Ubuntu Pro service.
Even without live patching, Debian's stability and long support windows make it an ideal distribution for low-maintenance servers such as snowglobe.
Individual packages, where possible, are updated live in the background with the help of the unattended-upgrades package.
Using the preset defaults, this installs previously downloaded packages once a day, then checks for and downloads new package upgrades after installing.
This leads to an intentional 24-hour delay between downloading packages and installing them.
However, some packages, such as the kernel, ZFS, and Docker, require a reboot before the new version is actually used.
ZFS and Docker are both services that integrate very tightly with the kernel, and as such are loaded early in the boot process.
Unloading these modules to load the new versions is a manually intensive process, and is thus not recommended by this document.
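The daily check-and-install cadence described above is controlled by two apt settings. On a default Debian install with unattended-upgrades enabled, they live in /etc/apt/apt.conf.d/20auto-upgrades and look like this:

```
// Refresh package lists daily
APT::Periodic::Update-Package-Lists "1";
// Run unattended-upgrades daily
APT::Periodic::Unattended-Upgrade "1";
```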
Point releases are released every 2 months. Packages in these point releases may need to be manually upgraded. Additionally, Docker is distributed in a unique repository, and as such updates are pushed at a rate determined by Docker Inc. However, these updates are not strictly necessary, and can be run along with the aforementioned point releases. Tailscale (a mesh VPN network for emergency out-of-network management activities) is distributed in a similar manner, and can be updated on the same schedule.
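When a point release lands, the manual upgrade mentioned above is a standard apt run; a sketch of the usual sequence (requires root):

```shell
# Refresh package lists from all configured repositories (Debian, Docker, Tailscale)
sudo apt update
# Apply all pending upgrades, including packages from the point release
sudo apt full-upgrade
# Reboot afterwards only if the kernel, ZFS, or Docker received updates
sudo reboot
```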
Docker Compose Overview
Docker, as you may be aware, is a platform used for containerisation of apps, allowing for separation between programs, reduction of attack surfaces of a given system, and easier dependency management.
Docker has a plugin called Docker Compose, which allows the systems administrator to write container structure and configuration information into a standardised YAML file, which can (optionally, per the administrator's configuration) be loaded on boot.
This is used to describe the vast majority of the services hosted on snowglobe today.
These files are in the git repository that this wiki is attached to.
Below are some commonly used directives and structures, and their meanings.
For more information on Docker Compose, see the official documentation.
Note that there should be only one instance of each directive in each section.
For example, in the section below, a true docker-compose.yml should only have one build directive in the container_service_name section.
Multiple instances here are used to give examples of possible syntaxes.
```yaml
# Required base directive for defining containers
services:
  # This can be anything; container names are declared further down
  service_name:
    # Oh hey, the container name!
    container_name:
    # A container build may be written simply as a build directive, and a URL to get the Dockerfile from...
    build: https://git.blizzard.systems/github-mirrors/audiobookshelf.git
    # ...or with a more complete description, pointing to a given Dockerfile
    build:
      context: https://git.blizzard.systems/github-mirrors/immich.git
      dockerfile: server/Dockerfile
    # The build argument can also include a tag for a git repo, such as a version tag
    build: https://git.blizzard.systems/github-mirrors/audiobookshelf.git#v2.27.0
    # The build directive can be used in conjunction with, or replaced by, an image directive...
    image: yacht7/openvpn-client
    # ...which can also include version tags...
    image: linuxserver/qbittorrent:latest
    # ...and the registry to pull the container image from
    image: lscr.io/linuxserver/prowlarr:latest
    # These are environment variables; usually used for configuration.
    environment:
      # Here are some common examples; all of these must be supported by the given container.
      # Check your container's documentation for details.
      # Used to set the timezone of the container
      - TZ=America/Denver
      # Used to set the user and group ID the container runs as; useful for resolving various permissions issues
      - PUID=1001
      - PGID=1001
    # These define real directories and their corresponding mount points inside the container
    volumes:
      # This can be relative...
      - ./sonar/config:/config
      # ... or absolute...
      - /mnt/glacier/TV:/tv
      # ... and include custom permissions (read-only used here)
      - /etc/localtime:/etc/localtime:ro
    # When to start and restart containers
    # In case of a container crash, depending on what this is set to, Docker may attempt to restart the container
    restart: unless-stopped
    # Wait for the listed container(s) to start before starting this container
    depends_on:
      # Note that this value corresponds to the service name, rather than the container name
      - service_name
```
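A compose file like the one sketched above is driven with a handful of standard Docker Compose subcommands, run from the directory containing the docker-compose.yml:

```shell
# Validate the file and print the final merged configuration
sudo docker compose config
# Create and start every defined service in the background
sudo docker compose up -d
# Pull newer images, then recreate only the containers that changed
sudo docker compose pull && sudo docker compose up -d
# Follow the logs of one service (service name, not container name)
sudo docker compose logs -f service_name
```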
Remote network-wide access
For some specific use cases, access to services or devices on the local network from offsite is necessary. For these, we use the Tailscale mesh VPN, which allows a user to tunnel all traffic through one device on the local network. The following is a brief procedure for installing and setting up Tailscale:

- Install the Tailscale app. There are mobile apps available, and more information can be found at the Tailscale install page.
- Log into Tailscale with your GitHub account.
- When prompted, select the GitHub Organisation as the "tailnet" to log into. In the case of new devices, accessing the admin web console is necessary to allow enrollment of new devices.
- Connect and configure the exit node:
  - If on Windows: the Tailscale application lives in a system tray icon; right-click the icon, click Connect, then set the exit node to "snowglobe3", and make sure that "Allow LAN Access" is enabled. You may also need to open PowerShell and run the command `tailscale set --accept-routes`.
  - If on Linux: run the command `sudo tailscale up --accept-routes --exit-node=snowglobe3`.
  - If on Android: tap the toggle icon in the top left of the screen, or tap the "Connect" button. Then make sure that the "Exit node" is set to "snowglobe3", and the exit node is enabled.
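Once connected, two standard Tailscale CLI commands are useful for verifying the setup from the device you just enrolled:

```shell
# Confirm the tunnel is up and list reachable peers, which should include snowglobe3
tailscale status
# Print this device's Tailscale IPv4 address
tailscale ip -4
```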