---
title: static_sites_rework log[001]
description: getting automated deployments back in the dollhive
---

&static sites rework (14MAY2025) ^a+n

so like this site,, its a bunch of .doll files, compiled down to .html.

it has other sites too, like noe.sh, which is a small blob of .html, .js, and .css.

and also 3d.noe.sh, which is a nightmare transpiled app, but its still outputting .html, .js, and .css.

it used to be pretty into cloudflare pages and vercel, both automated deployment platforms that solve the same problems we're looking to solve.

but, obviously these are corpo trash. what are they doing that we can't do?

that's right. [em:nothing]

&the original architecture

so we've had a bunch of static sites already being hosted. we'll focus on blood.pet specifically.

as a user, the static-sites VM is what serves blood.pet.

[invoke(mermaid)(-x 2 -y 2 -p 0)::
flowchart LR
user --> ingress-proxy --> static-sites --> nix-store
]

as a doll, deployment was a 3-step process:

[invoke(mermaid)(-x 2 -y 2 -p 0)::
flowchart LR
commit --> fu[flake update] --> ds[deploy static-sites]
]

this also meant if its wife wanted to update its own site, wife would need to bother this doll, and hope the adhd doesn't kick in.

the node itself looks like this at this point:

[invoke(mermaid)(-x 2 -y 2 -p 0)::
flowchart TD
bp["https://blood.pet"] -->|static-sites| nginx
nginx --> nix-store
deploy -->|nixos-rebuild| nix-store
]

&the corporate carcinization of sin

ok so if it was at work, this is all unacceptable.

we'd be using a CI/CD system to upload assets to S3, and the S3 bucket would maybe have CloudFront or S3 Website mode in front...

cloudflare pages is all of this; they have a CI-like tool that uploads to their S3 store, and their CDN handles the fun.
[em:it's literally the same]

ok so fuck the AWS parts of that, lets replicate each bit.

- for the bucket, we have [link(https://min.io):minio]. it acts like S3. perfect.
- for CI/CD, we have a lot of options. we boiled it down to these two:
  - [link(https://forgejo.org/docs/next/user/actions/):forgejo actions]
    - the "actions" ecosystem is okay but we wanted to cut down on runtime dependencies.
  - [link(https://woodpecker-ci.org/):woodpecker CI]
    - we chose woodpecker. it is similar to "actions" but offers more direct control; we like this.
- we already have nginx to be our "front of house", so.. yay

&the platform-y parts

we deployed minio with a shockingly simple

[codeblock::
services.minio = {
  enable = true;
  rootCredentialsFile = /secrets/yay; # it uses sops-nix here
};
]
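
(for reference: the NixOS module wants an env-style file there, holding the root user and password. ours comes out of sops-nix, but shape-wise it's just this, values obviously placeholders:)

[codeblock::
MINIO_ROOT_USER=dollmin
MINIO_ROOT_PASSWORD=a-very-long-random-string
]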

this deploys minio with its S3 API on port :9000, and the web UI on port :9001. cake.

we set up our blood.pet bucket (remember to allow anonymous reads to *)
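
(we clicked through the web UI for that, but roughly the same thing with minio's [code:mc] client looks like this; alias name and creds are placeholders, and older mc spells the last bit [code:mc policy set download]:)

[codeblock::
# point mc at our local minio (the S3 port, not the web UI port)
mc alias set local http://127.0.0.1:9000 <root-user> <root-password>

# make the bucket, then allow anonymous (unauthenticated) reads
mc mb local/blood.pet
mc anonymous set download local/blood.pet
]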

so nginx and minio need to talk to each other. we decided to keep these on the same machine, as minio will not be serving public traffic at all directly. (and we don't want it to.)

[em:minio is just really loud on its own]

like check this; this is what a curl to blood.pet/index.html looks like if minio serves it.

[//:this would be a codeblock; but it wanted to have the bold for effect]
[invoke(cat)::
<pre class="doll-code-block">
HTTP/1.1 200 OK
<b>Accept-Ranges:</b> bytes
<b>Content-Length:</b> 25956
<b>Content-Type:</b> text/html
<b>ETag:</b> "e00f624fab315d9e804999ab2994f16c"
<b>Last-Modified:</b> Wed, 14 May 2025 07:20:42 GMT
<b>Server:</b> MinIO
<b>Strict-Transport-Security:</b> max-age=31536000; includeSubDomains
<b>Vary:</b> Origin
<b>Vary:</b> Accept-Encoding
<b>X-Amz-Bucket-Region:</b> us-east-1
<b>X-Amz-Id-2:</b> dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8
<b>X-Amz-Request-Id:</b> 183F92F40CF892BF
<b>X-Content-Type-Options:</b> nosniff
<b>X-Ratelimit-Limit:</b> 583
<b>X-Ratelimit-Remaining:</b> 583
<b>X-Xss-Protection:</b> 1; mode=block
<b>x-amz-meta-s3cmd-attrs:</b> atime:1747207238/ctime:1747207238/gid:0/gname:root/md5:e00f624fab315d9e804999ab2994f16c/mode:33060/mtime:1/uid:0/uname:root
<b>Date:</b> Thu, 15 May 2025 02:39:17 GMT
</pre>
]

the headers include everything from the s3cmd user (root because docker) to random AWS IDs. <insert very unhappy face>

this is where our bestie nginx comes in.

[em:(and yo just so its clear, we use nix a lot in the dollhive.)]
[codeblock::
services.nginx.virtualHosts."blood.pet" = {
  locations."/" = {
    recommendedProxySettings = true;
    proxyPass = "http://127.0.0.1:9000/blood.pet/"; # trailing slash Required.
    extraConfig = ''
      # try to catch errors
      proxy_intercept_errors on;

      # minio why so fingerprintable....
      proxy_hide_header x-amz-request-id;
      proxy_hide_header x-amz-bucket-region;
      proxy_hide_header x-amz-id-2;
      proxy_hide_header x-amz-meta-s3cmd-attrs;
      proxy_hide_header x-ratelimit-limit;
      proxy_hide_header x-ratelimit-remaining;
      proxy_hide_header x-minio-deployment-id;
      proxy_hide_header strict-transport-security;
      proxy_hide_header x-firefox-spdy;
      proxy_hide_header x-xss-protection;
      proxy_hide_header x-content-type-options;
      proxy_hide_header vary;

      # prevent minio fingerprinting back...
      proxy_set_header user-agent "";

      # and especially dont POST its server omg
      proxy_method GET;

      # fix example: https://blood.pet/ to request /blood.pet/index.html
      # fix example: https://blood.pet/pronouns/ to request /blood.pet/pronouns/index.html
      rewrite (.*)/$ $1/index.html;
    '';
  };
};
]

so we can just kinda yeet all those out.

(fun nginx detail: the rewrite restarts location matching, so the rewritten /index.html path still picks up the /blood.pet/ prefix from proxyPass.)

nginx now lives as our "CDN" layer. nice. its job is to do all the internal-to-external bridging to minio.
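
wanna check the scrubbing worked? something like this (header names taken from the dump above) should print nothing:

[codeblock::
curl -sI https://blood.pet/ | grep -i 'x-amz\|x-ratelimit\|minio'
]
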
&a bird in the hand...

woodpecker is a different story; we want this on its own machine, separate and safe away from the common dangers of the Proxmox cluster or even just the internet.

we use forgejo, so we'll also cover configuration for that.

[codeblock::
# just so its there when we SSH
environment.systemPackages = [
  pkgs.woodpecker-cli
];

services.woodpecker-server = {
  enable = true;
  environment = {
    WOODPECKER_HOST = "https://<woodpecker>";
    WOODPECKER_SERVER_ADDR = ":9001";
    WOODPECKER_GRPC_PORT = ":9000";
    WOODPECKER_OPEN = "true";
    WOODPECKER_FORGEJO = "true";
    WOODPECKER_FORGEJO_URL = "https://<forgejo>";
    WOODPECKER_ADMIN = "noe";
  };
  environmentFile = /secrets/yay;
  # get these from <forgejo>/user/settings/applications
  # WOODPECKER_FORGEJO_CLIENT=awawawawa
  # WOODPECKER_FORGEJO_SECRET=dolldolldoll
};

services.woodpecker-agents.agents."podman" = {
  enable = true;
  environment = {
    WOODPECKER_SERVER = "localhost:9000";
    WOODPECKER_BACKEND = "docker";
    WOODPECKER_MAX_WORKFLOWS = "4";
    DOCKER_HOST = "unix:///run/podman/podman.sock";
    WOODPECKER_HEALTHCHECK = "false";
    WOODPECKER_GRPC_SECURE = "false";
  };
  extraGroups = [ "podman" ];
  environmentFile = [ /secrets/yay ];
  # get this from <woodpecker>/admin/agents
  # WOODPECKER_AGENT_SECRET=dolldolldollawawawawawa
};

virtualisation.podman = {
  enable = true;
  defaultNetwork.settings = {
    dns_enabled = true;
  };
  autoPrune = {
    enable = true;
    dates = "daily";
  };
};
]

and just like that, we have our woodpecker thing going. awawa!!!

&the final piece, automation

we made it!!!

go make an access key in minio, and save the key id + secret as secrets in woodpecker!
we named ours [code:static_sites_client] and [code:static_sites_secret], but anything's cool.
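
(clicking around the woodpecker UI works; so does woodpecker-cli. a sketch, with flag names as of woodpecker 2.x-ish, so double check against your version:)

[codeblock::
# woodpecker-cli wants WOODPECKER_SERVER + WOODPECKER_TOKEN in the env first
woodpecker-cli secret add --repository noe/blood.pet \
  --name static_sites_client --value <access-key>
woodpecker-cli secret add --repository noe/blood.pet \
  --name static_sites_secret --value <secret-key>
]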

so our last step is to make a pipeline file, [code:.woodpecker/build-deploy.yaml]

[codeblock::
when:
  - event: push
    branch: main

steps:
  - name: build & deploy
    image: nixos/nix
    commands:
      - echo 'experimental-features = flakes nix-command' >> /etc/nix/nix.conf
      - nix build -L .
      - |
        nix-shell -p s3cmd --command 's3cmd \
          --host=static-sites:9000 \
          --host-bucket=static-sites:9000 \
          --no-ssl \
          sync result/* s3://blood.pet'
    environment:
      AWS_ACCESS_KEY_ID:
        from_secret: static_sites_client
      AWS_SECRET_ACCESS_KEY:
        from_secret: static_sites_secret
]

what's going on here?

= we tell nix it should know what a "flake" is
= we build it (with -L for live logs; flake sketch below)
= we use s3cmd to upload it
  - we set [code:--host] and [code:--host-bucket] to the same thing; this isn't real S3, so there's no bucket-subdomain DNS to split them across
  - disable SSL because haha doll
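
for reference, [code:nix build -L .] expects the repo's flake to expose a default package. the real flake lives in the blood.pet repo linked below; this is just the rough shape, and the build script name is made up:

[codeblock::
{
  description = "blood.pet, as a derivation of static files";

  outputs = { self, nixpkgs }: let
    pkgs = nixpkgs.legacyPackages.x86_64-linux;
  in {
    packages.x86_64-linux.default = pkgs.stdenvNoCC.mkDerivation {
      name = "blood.pet";
      src = ./.;
      # hypothetical: whatever compiles the .doll files down to .html
      buildPhase = "./build-dolls.sh out/";
      # nix build symlinks ./result to $out, which is what s3cmd syncs from
      installPhase = "cp -r out $out";
    };
  };
}
]
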
&ad extremum

so our server architecture now looks like

[invoke(mermaid)(-x 2 -y 2 -p 0)::
flowchart TD
bp["https://blood.pet"] -->|static-sites| nginx
nginx -->|http| minio
gc[git commit] --> woodpecker
woodpecker -->|s3cmd| minio
]

also you're looking at this site, all deployed with the above :>

also you're looking at this site, all deployed with the above :>
|
|
|
|
&links

- blood.pet on git: [link(https://git.sapphic.engineer/noe/blood.pet/):noe/blood.pet]
- static-sites config: [link(https://git.sapphic.engineer/noe/nixos/src/branch/main/nixos/hosts/static-sites):noe/nixos:/nixos/hosts/static-sites]
- woodpecker config: [link(https://git.sapphic.engineer/noe/nixos/src/branch/main/nixos/hosts/woodpecker/default.nix):noe/nixos:/nixos/hosts/woodpecker/default.nix]