go back to the home page

clover’s log

these are like mini blog posts, but more generally just what the heck i’m up to.

2026-02-06

spent yesterday and today catching up with this page and preparing to get writing-heavy on the blog. i also fixed a bug in the video player where it wouldn’t work on the latest chrome on hardware without av1 acceleration.

there were two bugs behind it.

since i was testing on windows 7, i decided to also fix some IE9 bugs. the page works besides the video player and the fonts, which are all things i believe i could easily fix, so i opened new issues for those tasks.

2026-02-04

tags: this site

i did a ton of random stuff to this site, including the RSS feed and random bug fixes on the domain. trying to reduce the issue count on my forgejo.

2026-02-03

tags: history of japan reanimated

immediately after the youtube premiere, i worked on my webpage for it. the live credits were a really fun touch that really completes things. i’m very happy with the final result.

i also had to do a ton of work fixing the video player to properly show thumbnails. the biggest one was adding even more hell to the dash-av1 player. an issue dash.js has is that it provides the first frame of media before the user presses “play,” covering the thumbnail. this doesn’t happen with the hls or native players. so my workaround… delay attachment of the view.

// initialize with no view attached and autoplay off, then preload
// the manifest and init segments so attaching later is instant.
player.initialize(null, dashFile, false);
player.updateSettings({
  streaming: { cacheInitSegments: true },
});
player.preload();

// only attach the view on the first "play", so the thumbnail
// stays visible until the user actually starts playback.
video.addEventListener("play", function play() {
  video.removeEventListener("play", play);

  enableSafariAirplay();
  player.attachView(video);
  onCloverVideoInit?.(id, video);
});
video.controls = true;

but on chrome you get cooked because a video without a source is not playable. so a second hack is used to attach a fake source.

// resolve the real duration from the manifest so the fake
// MediaSource below can report it to the seek bar.
const duration = new Promise<number>((resolve) => {
  player.on(dashjs.MediaPlayer.events.MANIFEST_LOADED, (e) => {
    resolve(e.data.mediaPresentationDuration);
  });
});
mediaSource = new MediaSource();
mediaSource.addEventListener("sourceopen", () => {
  duration.then((duration) => {
    const ms = mediaSource!;
    ms.duration = duration;
    const sb = ms.addSourceBuffer('video/webm; codecs="vp8"');
    // a single 1x1 vp8 frame in a webm container, base64-encoded
    const oneFrame = Uint8Array.from(
      atob("GkXfo59ChoEBQveBAULygQRC84EIQoKEd2VibUKHgQJChYECGFOAZwEAAAAAAAITEU2bdLpNu4tTq4QVSalmU6yBoU27i1OrhBZUrmtTrIHWTbuMU6uEElTDZ1OsggEjTbuMU6uEHFO7a1OsggH97AEAAAAAAABZ" + "A".repeat(119) + "VSalmsCrXsYMPQkBNgIxMYXZmNjIuMy4xMDBXQYxMYXZmNjIuMy4xMDBEiYhAXgAAAAAAABZUrmvIrgEAAAAAAAA/14EBc8WIDV7YG1KxA/CcgQAitZyDdW5kiIEAhoVWX1ZQOIOBASPjg4QCYloA4JCwgQG6gQGagQJVsIRVuYEBElTDZ/tzc59jwIBnyJlFo4dFTkNPREVSRIeMTGF2ZjYyLjMuMTAwc3PWY8CLY8WIDV7YG1KxA/BnyKFFo4dFTkNPREVSRIeUTGF2YzYyLjExLjEwMCBsaWJ2cHhnyKFFo4hEVVJBVElPTkSHkzAwOjAwOjAwLjEyMDAwMDAwMAAfQ7Z11eeBAKOigQAAgBACAJ0BKgEAAQALxwiFhYiFhIg/ggAMDWAA/ua1AKOVgQAoALEBAC8R/AAYABhYL/QAJAAAo5WBAFAAsQEALxH8ABgAGFgv9AAkAAAcU7trkbuPs4EAt4r3gQHxggGj8IED"),
      (x) => x.charCodeAt(0),
    );
    sb.appendBuffer(oneFrame);
  });
});
objectUrl = URL.createObjectURL(mediaSource);

mainSource = document.createElement("source");
mainSource.src = objectUrl;
video.appendChild(mainSource);

the payload within contains a single webm vp8 frame, one pixel by one pixel. this is enough to get chrome to shut up. now the experience is nearly perfect, with browser support ranging from the ancient firefox version on my old laptop to modern Apple Silicon Macs.
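as a sanity check (this snippet is mine, not part of the player), you can confirm the blob really is a webm container: every webm/matroska file starts with the four EBML magic bytes 1A 45 DF A3.

```typescript
// decode the first characters of the payload above and inspect the
// leading bytes; an EBML (webm) document always starts with 1A 45 DF A3.
const head = Uint8Array.from(atob("GkXfo59C"), (c) => c.charCodeAt(0));
const magic = [...head.slice(0, 4)]
  .map((b) => b.toString(16).padStart(2, "0"))
  .join(" ");
console.log(magic); // "1a 45 df a3"
```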

2026-01-31

tags: history of japan reanimated

the main section of animation for the project was already done, but i wanted to make some needed adjustments to my scene. so i had been doing those for a few days, as well as animating an extra 25 seconds for the outro section. that process was very fun, since i actually grabbed an old macbook i had in storage to take the textedit screenshot. more details on this entire process will be on the project page.

2026-01-30

tags: react mutation

at work, one of my coworkers was complaining about my helper functions for TanStack Query’s mutation system. the conclusion was that the mutation system we were building on was flawed and not enjoyable to use. so i spent a few days making an alternative library. it’s on the JSR: @clo/react-mutation.

this project was really fun because of how much the API surface changed as i started staging the library into our actual code. the experience has shaped how i want to go about my upcoming blog post for the better.

2026-01-25

tags: ts lie detector, todo tracker

added a trivial binary to the lie detector, tsld-node, which is like ts-node or tsx but it runs with the lie detector.

for todo tracker, it’s been vibe coded to a point where the sitegen repo gets through all commits, but there are still some issues with missing TODOs. i haven’t really had time to prioritize this. even with an ai agent writing most of it, i still have to review its progress, so it’s just not worth my time until other projects pull through.

2026-01-24

tags: git, home infra

i moved forgejo off of sqlite and onto postgres. the only motivation behind this was to easily backdate the repositories i had imported: name paint bot and a scripting language compiler i wrote. i was more comfortable doing this on a real database server than just editing the file.

the migration took a bit for me to figure out, but last year someone named Sven did the same thing and found this pgloader command, with the critical “data only” clause.

# pgloader reads its commands from a file; "data only" skips schema
# creation, since forgejo already created the tables in postgres.
echo "LOAD DATABASE
FROM sqlite:///root/forgejo.db
INTO postgresql://forgejo:$POSTGRES_PASSWORD_FORGEJO@postgres/forgejo
WITH data only, reset sequences, prefetch rows = 10000
SET work_mem TO '16MB', maintenance_work_mem TO '512MB';" > ./pgloader-command

pgloader ./pgloader-command

2026-01-18

tags: git, home infra

i finally set up system-integrated ssh push, but with a twist.

the first problem is having two ssh servers on the same machine: one for the host, and another for Forgejo. this means that one of them had to live on another port, so i chose to move the host’s. but that kind of just sucks to use. the proper solution is a custom config that directs the git user to the right place.

what made this harder is that i actually had two git instances; the second one is for temporary infrastructure i’m running for evil inc until the organization gets dedicated hardware (more about this in a future post). so even if i routed git specially, it still wouldn’t know which git server to go to.

the solution: write a custom “router” script that generates an authorized_keys file based on which git instances actually have the key. the line within authorized_keys forces a special wrapper that intercepts the targeted git repository and routes the connection to the instance that owns it.

the sshd config looks like this:

Match User git
    AuthorizedKeysCommand /bin/python /mnt/storage1/apps/home-infra/config/forgejo/ssh/keys.py %u %t %k
    AuthorizedKeysCommandUser git

it is convenient because AuthorizedKeysCommand is run on every connection. the script outputs zero or one valid key lines:

command="/mnt/storage1/apps/home-infra/config/forgejo/ssh/route.sh --clover 1 --evil 2",no-port-forwarding,...,restrict ssh-ed25519 AAAAC3NzaC...

and then the route script does this shit:

# route based on which instance's repository directory contains the repo.
# $clover_id and $evil_id are the per-instance key ids passed in as
# --clover and --evil by the generated authorized_keys line.
if [ -d "/mnt/storage1/apps/forgejo/git/repositories/${repo}" ]; then
  [ -z "$clover_id" ] && echo "ssh key is not configured for repository on git.paperclover.net" >&2 && exit 1
  exec sudo docker exec -i -u git forgejo /usr/bin/env SSH_ORIGINAL_COMMAND="$SSH_ORIGINAL_COMMAND" /usr/local/bin/forgejo --config=/custom/conf/app.ini serv key-"$clover_id"
elif [ -d "/mnt/storage1/apps/evil-infra/forgejo/git/repositories/${repo}" ]; then
  [ -z "$evil_id" ] && echo "ssh key is not configured for repository on git.evil.inc" >&2 && exit 1
  exec sudo docker exec -i -u git evil-forgejo /usr/bin/env SSH_ORIGINAL_COMMAND="$SSH_ORIGINAL_COMMAND" /usr/local/bin/forgejo --config=/custom/conf/app.ini serv key-"$evil_id"
else
  echo "repo not found" >&2
  exit 1
fi

it works beautifully. see the whole patch for more info.

2026-01-14

tags: next.js blog post, this site

general housekeeping. getting this activity page up. getting the spanish translation by my good friend trubiso up. things are looking really cozy.

2026-01-12

tags: [album]

i started more music work. i’ve gotten better at lyric writing, phrasing this new song as a sort of “adventure”. felt for one of the first times that i was doing worldbuilding in a song. the imagery is that good.

2026-01-11

tags: progress.ts, todo tracker

i finished streaming io on @clo/lib/progress.ts. very proud of it. my git commits describe the tech better than i could by reiterating:

feat(lib/progress): implement streaming wire protocol

resolves #47

encodeByteStream converts these events into a ReadableStream. by batching events together, the stream contents remain small; the code that constructs progress nodes does not have to worry about calling many setters at once, since they get debounced by the serializer. stream backpressure causes events across larger time-gaps to be batched together, keeping the stream smaller. this enables servers to respond with rich progress.

const root = new progress.Root();
doActionWithProgress(root).then(root.end, root.error);

// stream rich progress to clients that ask for it; plain JSON otherwise
if (req.headers.get("Accept")?.includes(progress.contentType))
  return new Response(progress.encodeByteStream(root), {
    headers: { "Content-Type": progress.contentType },
  });

return Response.json(await root.asPromise());

and decodeByteStream on the client:

const output = document.getElementById("output");
const res = await fetch(...);
if (!res.ok) throw ...;
const root = new progress.Root();
// re-render the terminal-style progress text on every change
root.on("change", (active) => {
  output.innerText = ansi.strip(progress.formatAnsi(
    performance.now(),
    active,
  ));
});
const result = await progress.decodeByteStream(res.body, root);
output.innerText = JSON.stringify(result);

there are currently no document bindings, but i plan to add them. additionally, a React hook would be very trivial to implement for this – but that is unplanned for this repository. for transports that require JSON or UTF-8, there is encodeEventStream, which returns a ReadableStream of JSON objects that can be compressed at the developer’s discretion.
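as an aside, the backpressure-driven batching described above can be sketched generically. this is an illustrative toy of mine, not the actual progress.ts serializer: events queue while the consumer is busy, and each pull flushes everything queued so far as one batch, so a slow reader naturally gets fewer, larger batches.

```typescript
// toy sketch of backpressure-aware batching (not the real progress.ts
// code): events queue up while the consumer is busy, and each pull()
// flushes the whole queue as a single batch.
type LogEvent = { id: number; value: string };

function batchedStream(
  subscribe: (emit: (e: LogEvent) => void) => void,
): ReadableStream<LogEvent[]> {
  let queue: LogEvent[] = [];
  let wake: (() => void) | null = null;
  subscribe((e) => {
    queue.push(e);
    wake?.(); // a reader is waiting; hand the batch over
    wake = null;
  });
  return new ReadableStream<LogEvent[]>({
    async pull(controller) {
      if (queue.length === 0) {
        // nothing buffered yet; sleep until the next event arrives
        await new Promise<void>((resolve) => (wake = resolve));
      }
      controller.enqueue(queue);
      queue = [];
    },
  });
}

// three events emitted while nothing was reading arrive as one batch
const stream = batchedStream((emit) => {
  emit({ id: 1, value: "a" });
  emit({ id: 2, value: "b" });
  emit({ id: 3, value: "c" });
});
const batch = (await stream.getReader().read()).value;
console.log(batch?.length); // 3
```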

feat(lib/progress): headless rendering + time estimation

node signaling is done by providing a progress.Root to every node, dispatching events to it when the node changes. the root is connected to an observer to construct a UI out of it. there are two apis planned.

additionally, this resolves #33 by implementing estimatedTime
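the commit doesn’t say how estimatedTime is computed; the simplest approach, sketched here as a hypothetical (this is my guess, not the actual progress.ts algorithm), is linear extrapolation from elapsed time and fraction complete.

```typescript
// hypothetical remaining-time estimate via linear extrapolation --
// the actual progress.ts algorithm isn't described in the commit.
function estimatedTime(startedAt: number, now: number, fraction: number): number {
  if (fraction <= 0) return Infinity; // no signal yet
  const elapsed = now - startedAt;
  // if `fraction` of the work took `elapsed` ms, the rest scales linearly
  return (elapsed * (1 - fraction)) / fraction;
}

console.log(estimatedTime(0, 5000, 0.25)); // 15000 (75% left at this pace)
```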

i also did a large part of the work to create a “code todo tracking” tool. i would say it’s about half done, since the second half is simply fixing all of the little bugs. most of this code is currently ai-generated, with me manually coming in to write the interfaces and the modular program architecture, then synthesizing the code and the tests. this was basically just going on ambiently while progress.ts was in progress.

2026-01-09

tags: home infra

finished the SSO sub-project. i’m happy with the setup i used to protect internal services, such as pgadmin and qbittorrent. it’s a caddy snippet that i can re-use very easily.

(reverse_proxy_auth) {
  handle /snow.oauth2/* {
    reverse_proxy "http://forward-auth" {
      header_up X-Real-IP {remote_host}
      header_up X-Forwarded-Uri {uri}
    }
  }
  handle {
    forward_auth "http://forward-auth" {
      uri /snow.oauth2/auth
      header_up X-Real-IP {remote_host}
      @error status 401
      handle_response @error {
        redir * /snow.oauth2/sign_in?rd={scheme}://{host}{uri}
      }
      @valid_group header X-Auth-Request-Groups *role:{args[1]}*
      handle_response @valid_group {
        method {method}
        rewrite {uri}
        reverse_proxy {args[0]} {
          header_up Cookie ([^;]*?)\s*_oauth2_proxy_\d=[^;]*(;?.*) "$1$2"
          {block}
        }
      }
      handle_response {
        rewrite /403.html
        file_server {
          status 403
          root /etc/caddy
        }
      }
    }
  }
}

# usage
pg.{$HOME_DOMAIN} {
  import reverse_proxy_auth "http://pgadmin" admin
}
qbt.{$HOME_DOMAIN} {
  import reverse_proxy_auth "http://qbittorrent" media-manage
}

2026-01-04

tags: home infra

working on SSO for my internal services. for context, i have about 12 self-hosted services running, half of which i allow my friends to access. currently, this is done through manually creating an account on each such service (jellyfin, forgejo), but many are handled through a caddy rule. in the interest of making my password manager less confused (ip vs domain, subdomain, etc.), i’m slowly reducing this setup to a single sign-in page.

to do this, i am using keycloak.org, which supports openid connect (how i will configure forgejo and jellyfin), as well as a separate service to provide forward-auth proxying (how i protect services like copyparty, syncthing, pgadmin, and many more). i tried authelia beforehand, but i really do not recommend it: it’s hard to configure, passkeys are annoying to set up, and themes are limited. i also don’t recommend authentik, but only because i couldn’t figure out how to even start using it after i installed it.

keycloak is a bit stupid on config. since all the config lives in the postgres database, i can’t use a config file to set up the primary realm. so instead, i have a huge python script that uses the API to upsert the configuration. this works pretty well, and means that when running the infrastructure locally for testing, i can get the same config (useful if you brick keycloak, which is pretty easy to do).
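the shape of such an upsert looks roughly like this (sketched in typescript for consistency with the rest of this page; the real script is python, and names like upsertRealm and the realm shape here are illustrative). keycloak’s admin REST API exposes GET/PUT /admin/realms/{realm} and POST /admin/realms, which is enough to make re-running configuration idempotent.

```typescript
// hedged sketch of "upsert config via the admin API" -- not the real
// script. probe for the resource, then PUT to update or POST to create.
type RealmConfig = { realm: string; enabled: boolean };

async function upsertRealm(
  base: string,
  token: string,
  config: RealmConfig,
  fetchImpl: typeof fetch = fetch,
): Promise<"created" | "updated"> {
  const headers = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
  // probe whether the realm already exists
  const probe = await fetchImpl(`${base}/admin/realms/${config.realm}`, { headers });
  if (probe.ok) {
    // update in place, so re-running the script converges on the same state
    await fetchImpl(`${base}/admin/realms/${config.realm}`, {
      method: "PUT",
      headers,
      body: JSON.stringify(config),
    });
    return "updated";
  }
  await fetchImpl(`${base}/admin/realms`, {
    method: "POST",
    headers,
    body: JSON.stringify(config),
  });
  return "created";
}
```

the same probe-then-write pattern extends to clients, groups, and the other realm sub-resources.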

2026-01-02

tags: home infra, git, name paint bot show

i deleted all my github repositories except four: my “readme”, a bug reproduction repo, the mirror for ts lie detector, and a load-bearing private repo shared with someone. in this process, i’ve moved all the projects to my forgejo instance.

with this, name paint bot, one of my few remaining active projects, moves to that forgejo instance using their github migrator. some of my private projects, like my pet scripting language, were migrated as well. everything feels more alive on my site because of the theming and per-repo icons.

after a year of forgejo, i am really happy with how it treats me.