
clover's log

2026-01-12

tags: [album]

i started more music work. i've gotten better at lyric writing, phrasing this new song as a sort of "adventure". felt for one of the first times that i was doing worldbuilding in a song. the imagery is that good.

2026-01-11

tags: progress.ts, todo tracker

i finished streaming io in @clo/lib/progress.ts. very proud of it. my git commits describe the tech better than i could by reiterating it here.

feat(lib/progress): implement streaming wire protocol

resolves #47

encodeByteStream converts these progress events into a ReadableStream. events are batched together so the stream contents stay small: the code constructing progress nodes does not have to worry about calling many setters at once, since the serializer debounces them. under stream backpressure, events across larger time gaps get batched into smaller output. this enables servers to respond with rich progress.

const root = new progress.Root();
doActionWithProgress(root).then(root.end, root.error);
// streaming clients signal support via the Accept header
if (req.headers.get("Accept")?.includes(progress.contentType))
  return new Response(progress.encodeByteStream(root), {
    headers: { 'Content-Type': progress.contentType },
  });
// to support non-streaming clients
return Response.json(await root.asPromise());

and decodeByteStream on the client:

const output = document.getElementById("output");
const res = await fetch(...);
if (!res.ok) throw ...;
const root = new progress.Root();
root.on("change", (active) => {
  output.innerText = ansi.strip(progress.formatAnsi(
    performance.now(),
    active,
  ));
});
const result = await progress.decodeByteStream(res.body, root);
output.innerText = JSON.stringify(result);

there are currently no document bindings, but i plan to add them. additionally, a React hook would be very trivial to implement for this -- but that is unplanned for this repository. for transports that require JSON or UTF-8, there is encodeEventStream, which returns a ReadableStream of JSON objects that can be compressed at the developer's discretion.
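just to show how trivial: here is a minimal sketch of such a hook, assuming react plus only the api shown in the snippets above. the hook name, the ansi import path, and the error handling are my guesses, not part of the library.

import { useEffect, useState } from "react";
import * as progress from "@clo/lib/progress";
import * as ansi from "@clo/lib/ansi"; // assumed path for the ansi helper used above

export function useProgressFetch(url: string) {
  const [text, setText] = useState("");            // latest rendered progress
  const [result, setResult] = useState<unknown>(); // final decoded value

  useEffect(() => {
    const controller = new AbortController();
    (async () => {
      const res = await fetch(url, {
        headers: { Accept: progress.contentType },
        signal: controller.signal,
      });
      if (!res.ok || !res.body) throw new Error(`request failed: ${res.status}`);
      const root = new progress.Root();
      // re-render whenever the progress tree changes
      root.on("change", (active) => {
        setText(ansi.strip(progress.formatAnsi(performance.now(), active)));
      });
      setResult(await progress.decodeByteStream(res.body, root));
    })().catch(() => {}); // ignores aborts; real code would surface errors
    return () => controller.abort();
  }, [url]);

  return { text, result };
}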

feat(lib/progress): headless rendering + time estimation

node signaling is done by providing a progress.Root to every node, dispatching events to it when the node changes. the root is connected to an observer to construct a UI out of it. there are two apis planned:

additionally, resolves #33 by implementing estimatedTime
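to make the headless part concrete, an observer that renders to a terminal needs only a few lines. this is a sketch using just the api shown above; clearing the screen on every redraw is my choice, not something the library mandates.

import * as progress from "@clo/lib/progress";

const root = new progress.Root();
// observer: on every change, clear the terminal and redraw the whole tree
root.on("change", (active) => {
  process.stderr.write("\x1b[2J\x1b[H" + progress.formatAnsi(performance.now(), active));
});
// doActionWithProgress is the same placeholder as in the server snippet above
doActionWithProgress(root).then(root.end, root.error);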

i also did a large part of the work to create a "code todo tracking" tool. i would say it's about half done, since the second half is simply fixing all of the little bugs there are. most of this code is currently ai-generated, though i manually come in to write the interfaces and the modular program architecture, then synthesize the code and the tests from those. this was basically just going on ambiently while progress.ts was in progress.

2026-01-09

tags: home infra

finished the SSO sub-project. i'm happy with the setup i use to protect internal services such as pgadmin and qbittorrent. it's a caddy snippet that i can re-use very easily.

(reverse_proxy_auth) {
  # the auth service's own endpoints (sign-in, callback, auth) go straight to it
  handle /snow.oauth2/* {
    reverse_proxy "http://forward-auth" {
      header_up X-Real-IP {remote_host}
      header_up X-Forwarded-Uri {uri}
    }
  }
  handle {
    forward_auth "http://forward-auth" {
      uri /snow.oauth2/auth
      header_up X-Real-IP {remote_host}
      # not signed in: bounce to the sign-in page, then back to the original url
      @error status 401
      handle_response @error {
        redir * /snow.oauth2/sign_in?rd={scheme}://{host}{uri}
      }
      # signed in and in the required group: proxy to the upstream,
      # stripping the oauth2-proxy session cookie on the way through
      @valid_group header X-Auth-Request-Groups *role:{args[1]}*
      handle_response @valid_group {
        method  {method}
        rewrite {uri}
        reverse_proxy {args[0]} {
          header_up Cookie ([^;]*?)\s*_oauth2_proxy_\d=[^;]*(;?.*) "$1$2"
          {block}
        }
      }
      # signed in but not authorized: serve a static 403 page
      handle_response {
        rewrite /403.html
        file_server {
          status 403
          root /etc/caddy
        }
      }
    }
  }
}

# usage
pg.{$HOME_DOMAIN} {
  import reverse_proxy_auth "http://pgadmin" admin
}
qbt.{$HOME_DOMAIN} {
  import reverse_proxy_auth "http://qbittorrent" media-manage
}

2026-01-04

tags: home infra

working on SSO for my internal services. for context, i have about 12 self-hosted services running, half of which i let my friends access. currently that means manually creating an account on each such service (jellyfin, forgejo), while many others are just handled by a caddy rule. in the interest of making my password manager less confused (ip vs domain, subdomain, etc), i'm slowly reducing this setup to a single sign-in page.

to do this, i am using https://keycloak.org, which supports openid connect (how i will configure forgejo and jellyfin), along with a separate service that provides forward auth proxying (how i protect services like copyparty, syncthing, pgadmin, and many more). i tried authelia beforehand, but i really do not recommend it: it is hard to configure, passkeys are annoying to set up, and the theming is limited. i also don't recommend authentik, though there i couldn't even figure out how to start using it after i installed it.

keycloak is a bit stupid about config. since all the config lives in the postgres database, i can't use a config file to set up the primary realm. so instead, i have a huge python script that uses the API to upsert the configuration. this works pretty well, and means that when running the infrastructure locally for testing, i can get the same config (useful if you brick keycloak, which is pretty easy to do).
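the script is too big to paste, but the core upsert is simple. here is a rough sketch of the idea, in typescript to match the rest of this post (my real script is python), assuming the standard keycloak admin rest endpoints; the base url and realm contents are placeholders.

// sketch of the upsert idea; keycloak.example.com and the realm body are made up
const base = "https://keycloak.example.com";
const realm = { realm: "home", enabled: true /* ...the rest of the realm config... */ };

// get an admin token from the master realm (password grant against admin-cli)
const tokenRes = await fetch(`${base}/realms/master/protocol/openid-connect/token`, {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "password",
    client_id: "admin-cli",
    username: "admin",
    password: process.env.KEYCLOAK_ADMIN_PASSWORD!,
  }),
});
const { access_token } = await tokenRes.json();
const headers = {
  Authorization: `Bearer ${access_token}`,
  "Content-Type": "application/json",
};

// upsert: update the realm if it already exists, create it otherwise
const exists = (await fetch(`${base}/admin/realms/${realm.realm}`, { headers })).ok;
await fetch(exists ? `${base}/admin/realms/${realm.realm}` : `${base}/admin/realms`, {
  method: exists ? "PUT" : "POST",
  headers,
  body: JSON.stringify(realm),
});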

2026-01-02

tags: home infra, git, name paint bot show

i deleted all my github repositories except four: my "readme", a bug reproduction repo, the mirror for ts lie detector, and a load-bearing private repo shared with someone. in the process, i moved all of the projects over to my forgejo instance.

with this, name paint bot, one of my few remaining active projects, moves to that forgejo instance via its github migrator. some of my private projects, like my pet scripting language, were migrated as well. everything feels more alive on my site because of the theming and per-repo icons.

after a year of forgejo, i am really happy with how it treats me.