How This Website Is Overengineered

This website took heavy inspiration from Xe’s blog. I strongly encourage you to check it out; Xe also gave a great talk on it!

I want to state this upfront: the reason I made my website like this is that I wanted to learn Rust, and I wanted the ability to add more dynamic elements in the future without using JavaScript. There are much easier, better-documented, and less time-consuming ways to do this via a static site generator. There are a lot of static site generators out there; there is probably one written in your favorite programming language. I highly recommend using a static site generator if you wish to host a blog.

Rust

The content parts of my website, like the news list, project pages, blog, contact info, and the short bio on the homepage, are all written as markdown documents. On startup, my website reads all the markdown files, parses them into HTML, and places the results into a Rust Vec. When a user requests a webpage, the website fetches the content from the Vec, plugs it into the webpage template, and sends it off to the web client.
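For a concrete picture, that startup pass could look roughly like the sketch below. This is not the site’s actual source: the content/ directory, the Page struct, and pulldown-cmark (standing in for whatever markdown parser the site really uses) are all my assumptions.

```rust
use std::fs;

use pulldown_cmark::{html, Parser};

/// A rendered page: the file it came from and its HTML body.
struct Page {
    name: String,
    html: String,
}

/// Read every markdown file in `content/`, render it to HTML once at
/// startup, and collect the results into a Vec for later lookups.
fn load_content() -> std::io::Result<Vec<Page>> {
    let mut pages = Vec::new();
    for entry in fs::read_dir("content")? {
        let path = entry?.path();
        if path.extension().map_or(false, |ext| ext == "md") {
            let markdown = fs::read_to_string(&path)?;
            let mut rendered = String::new();
            html::push_html(&mut rendered, Parser::new(&markdown));
            pages.push(Page {
                name: path
                    .file_stem()
                    .expect("markdown file has a stem")
                    .to_string_lossy()
                    .into_owned(),
                html: rendered,
            });
        }
    }
    Ok(pages)
}
```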

I use tokio for my async runtime only because it is well-documented and beginner-friendly. The same folks behind tokio also created axum, the web framework used for this project. This website is also my first Rust project: I forced myself to write it in Rust because I needed to motivate myself to learn the language, and I think it went pretty well in the end. I learned a lot about Rust programs, about the borrow checker that I both love and hate, and about Rust’s error handling, which I still don’t know whether I like (over, say, Golang’s error type). Overall, though, it was a fun experience writing this website, and the infrastructure behind it, as described later.
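To show how the pieces fit together, here is a minimal axum-style sketch of the serving side, reusing the hypothetical Page and load_content from above. The route, port, and 404 handling are placeholders, not the site’s real code.

```rust
use std::sync::Arc;

use axum::{
    extract::{Path, State},
    response::Html,
    routing::get,
    Router,
};

#[tokio::main]
async fn main() {
    // Render every markdown file once, then share the result across handlers.
    let pages = Arc::new(load_content().expect("content should load"));

    let app = Router::new()
        .route("/:name", get(serve_page))
        .with_state(pages);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn serve_page(
    Path(name): Path<String>,
    State(pages): State<Arc<Vec<Page>>>,
) -> Html<String> {
    // Fetch the pre-rendered HTML from the Vec; the real site plugs it
    // into a page template before responding.
    let body = pages
        .iter()
        .find(|page| page.name == name)
        .map(|page| page.html.clone())
        .unwrap_or_else(|| "<h1>404</h1>".to_string());
    Html(body)
}
```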

Dynamic CSS Fetching

My website’s CSS isn’t all that complicated. I focused on using the Nord color scheme (as it’s become somewhat of a personal color scheme at this point) and simply wrote the remaining CSS by hand. The responsive layout and the navigation bar use Yahoo’s Pure.css and its responsive grids.

Pages on this site load two stylesheets: the common (shared) stylesheet and a page-specific stylesheet. Both of these are assembled from multiple source stylesheets on the backend. Similarly to the website content, on startup, my website does the following:

  1. reads all relevant stylesheets
  2. minifies each stylesheet via the minifier crate
  3. and, on each request, concatenates the relevant shared and page-specific CSS and sends the result to the client (sketched below).
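In code, those steps might look something like this. It’s a sketch that assumes the minifier crate’s css::minify; the file layout and function names are invented.

```rust
use std::fs;

/// Read and minify one source stylesheet at startup. Assumes the
/// `minifier` crate; the path layout is a guess.
fn load_stylesheet(path: &str) -> String {
    let raw = fs::read_to_string(path).expect("stylesheet should exist");
    minifier::css::minify(&raw)
        .expect("stylesheet should be valid CSS")
        .to_string()
}

/// On each request, concatenate the shared stylesheet with the
/// page-specific one before handing the result to the client.
fn css_for_page(shared: &str, page_specific: &str) -> String {
    format!("{shared}{page_specific}")
}
```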

Earlier in development, the backend concatenated ALL relevant stylesheets (both page-specific and shared) into a single stylesheet that was sent to the client. This kept the web browser from caching the background: every time I navigated to a different page, the background flashed white for a second while the browser refetched the stylesheet. Now, with the dual-file arrangement and cache control, I save (admittedly, a very small amount of) bandwidth while still preventing my website from flashbanging me on every reload.
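The caching half of that fix amounts to serving the shared stylesheet from a stable URL with a Cache-Control header, so the browser keeps it across navigations. A hedged axum-style sketch, where the max-age value and the state type are my guesses:

```rust
use std::sync::Arc;

use axum::{extract::State, http::header, response::IntoResponse};

/// Serve the shared stylesheet with a long-lived Cache-Control header so
/// the browser reuses it across page navigations instead of refetching it.
async fn shared_css(State(css): State<Arc<String>>) -> impl IntoResponse {
    (
        [
            (header::CONTENT_TYPE, "text/css"),
            (header::CACHE_CONTROL, "public, max-age=86400"),
        ],
        css.as_ref().clone(),
    )
}
```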

Infrastructure

I own and operate my own infrastructure, hardware, and IP block, which host the backend. I could use a third-party CDN to make load times faster, especially in non-U.S. regions, which would have the added benefit of higher reliability. However, I’d rather my website be 100% under my control, and the small added latency is negligible since the website is already lightweight. I hope that someday I’ll run a globally anycast CDN and move my website over. But until then, my website will live on EricNet, California.

Deployment

This blog is currently deployed as two Docker containers, with one watchtower instance controlling each container. watchtower is configured in HTTP API mode, waiting for webhook requests to update its respective container.
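For reference, a watchtower instance in that mode might be declared like this. This is a hypothetical compose service following watchtower’s documented HTTP API mode; the names, token, and ports are placeholders, not my real config.

```yaml
wt0:
  image: containrrr/watchtower
  command: www0                           # watch only this site container
  environment:
    - WATCHTOWER_HTTP_API_UPDATE=true     # update only when the webhook fires
    - WATCHTOWER_HTTP_API_TOKEN=changeme  # bearer token the CI must present
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "8081:8080"                         # watchtower's HTTP API port
```

CI can then hit /v1/update on that port with the bearer token to trigger an update.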

The two Docker containers sit behind Nginx as a reverse proxy, with load balancing and failover. This simple setup distributes load between the two containers. Most importantly, if one container crashes or is being updated, the other can handle requests.
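A hypothetical Nginx config for this arrangement, where the upstream addresses and domain are placeholders:

```nginx
upstream website {
    # The two site containers; mark one as failed after repeated errors.
    server 127.0.0.1:8001 max_fails=3 fail_timeout=10s;  # www0
    server 127.0.0.1:8002 max_fails=3 fail_timeout=10s;  # www1
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://website;
        # If one upstream is down or mid-update, retry on the other.
        proxy_next_upstream error timeout http_502;
    }
}
```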

Upon a push to the main branch, GitHub Actions performs the following steps:

  1. Build the Docker image for the website.
  2. Send a webhook to wt0 (watchtower instance #1), triggering the first instance of the container to update.
  3. Check, via curl with the X-Server-Select header and grep, that the website is updated on the first container (www0), verifying that the hash in the footer matches the commit hash of the current CI build (sketched after this list). If any of these steps fail, the pipeline stops immediately.
  4. Repeat steps 2 and 3 for the second website instance.
  5. Ping Google with the updated sitemap of the freshly deployed website.
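Steps 2 and 3 might boil down to something like the following in the workflow. The watchtower URL, token variable, site domain, and short-hash length are placeholders; GITHUB_SHA is the commit hash GitHub Actions provides.

```bash
# Step 2: ask wt0 to pull and restart the first container.
curl -sf -X POST -H "Authorization: Bearer $WT_TOKEN" \
  https://wt0.example.com/v1/update

# Step 3: pin the request to www0 and check the footer for the new hash;
# a non-zero exit here fails the job and stops the pipeline.
curl -sf -H "X-Server-Select: www0" https://example.com/ \
  | grep -q "${GITHUB_SHA:0:7}"
```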

[Figure: a graph describing my website backend]

The Future

As of this blog post, I hope to explore:

  1. Adding webmention support for the blog
  2. Tracking pageviews and exposing them through a Prometheus exporter that I can integrate into Grafana, pulling counts from both containers (a rough sketch follows the list)
  3. HTML/CSS validation & Markdown linting
  4. More tests
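Item 2 could take a shape like the following sketch, using the prometheus crate. The metric name and wiring are invented; each container would expose its own /metrics endpoint for Prometheus to scrape and Grafana to sum.

```rust
use std::sync::Arc;

use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

/// One possible shape: a process-local pageview counter that a /metrics
/// endpoint exposes, so Prometheus can scrape both containers and Grafana
/// can sum the two series.
fn build_metrics() -> (Arc<Registry>, IntCounter) {
    let registry = Arc::new(Registry::new());
    let pageviews =
        IntCounter::new("pageviews_total", "Total pages served").unwrap();
    registry.register(Box::new(pageviews.clone())).unwrap();
    (registry, pageviews)
}

/// Handler body for /metrics: encode everything in the registry using
/// Prometheus' text exposition format.
fn render_metrics(registry: &Registry) -> String {
    let mut buf = Vec::new();
    TextEncoder::new()
        .encode(&registry.gather(), &mut buf)
        .expect("metrics should encode");
    String::from_utf8(buf).expect("exposition format is UTF-8")
}
// Each page handler would then call `pageviews.inc()`.
```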

I want to add more dynamic elements to this website, perhaps a comment section, but I’m not sure… It’s hard to balance maintaining a professional feel with adding fun features like live Discord/Spotify activity integration.

This blog post was edited by Jack Dorland. Thank you!


If you have any questions, want to change my mind, or literally anything else, please reach out!