F1RST P0ST!!1/About Building this Website
March 1st, 2026
This particular meme might be over a decade old by this point, but its dated nature seems to fit the general aesthetic of this website. My apologies, by the way. I've never really been a visual person. Anyways, welcome to my blog/website! In this first blog post, I want to talk about all the things that I "had" to do in order to actually get this blog up and running.
Realistically, I did not have to take as long as I did to get this blog set up. I have had a running instance of nginx on my server for what seems like months. The only real obstacles to getting this site running were learning enough CSS to make the site look functional and writing content for the site. I did both of these things in essentially two days, meaning that, all things considered, they were not the most difficult hurdles to clear. Instead, the barrier I had to overcome was the barrier of infrastructure.
One problem that I've encountered in almost every technical project where I try to implement a system is a failure to understand the scope and nature of that system before I set out to develop it. For example: when I first started putting music on my media server, I got so excited about the prospect that I didn't give a thought to the system by which I would name and organize the files I was putting on the server. This mistake created a bunch of work for me later on, when I realized that all of those files would need to follow a specific naming format in order to have metadata automatically applied to the media. Similar problems are frequently encountered in coding as well, where decisions made early in a project, when not properly systematized, can cascade into a world of pain later on.
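To give a concrete sense of what "a specific naming format" means here, a minimal sketch in shell — the `Artist/Album/NN - Title.ext` layout is purely illustrative, not the format any particular metadata scraper actually mandates:

```shell
# Build a scraper-friendly path from a track's metadata.
# The "Artist/Album/NN - Title.ext" convention is an illustration;
# check your media server's docs for its expected layout.
media_path() {
  artist="$1"; album="$2"; track_no="$3"; title="$4"; ext="$5"
  printf '%s/%s/%02d - %s.%s\n' "$artist" "$album" "$track_no" "$title" "$ext"
}

media_path "Boards of Canada" "Geogaddi" 3 "Gyroscope" "flac"
# prints: Boards of Canada/Geogaddi/03 - Gyroscope.flac
```

Renaming a library into a shape like this after the fact is exactly the tedious cleanup work I ended up doing.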
Now, on its own, the website does not represent much of a project. It is simply a collection of HTML and CSS files in a filesystem handed to a static content server, something people have been doing since the early 90s. But this website does not exist in a vacuum. Instead, it represents the highest point of abstraction of my entire system: the "end" of a long series of abstractions, each of which is a complex system affecting the layers of abstraction above it. A website is then, in effect, the end product of a massive system of systems. What are some of these systems? Networking protocols, Docker containers, storage arrays, development environments, and more. Relating this back to my earlier point, it's hard for me to feel comfortable developing the "end" of a system when I haven't yet figured out the foundation of that system. I am building a foundation for this website, and a lack of understanding of any part of that foundation could lead to complications further down the line, even for something as simple as a website.
The first thing I had to figure out was the method by which I'd be serving content. In the early days of this project, I bounced between several different options. One possibility was exposing part of my Trilium notes to the public through some of the functionality built into that application, but I found no elegant solution with that method that preserved the security of my notes. Another consideration was using Hugo, an open source static site generator that builds a site from markdown files. Unfortunately, Hugo did not offer the degree of control I required over a fully functioning website. I also briefly considered an application called Ghost, but this option required too many external dependencies (as an aside, I've found that "beginner friendly" options for a number of services tend to cause headaches in the long run in their efforts to hide complexity from the user). I eventually settled on nginx, as it was the simplest way to host static content, though it did require me to properly learn how to build a website using HTML and CSS.
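"Simplest way to host static content" is not an exaggeration — the heart of an nginx static site is a single server block. A minimal sketch, with placeholder paths and TLS omitted (this is not my actual config):

```nginx
server {
    listen 80;
    server_name kawiggles.com;       # domain from this post; HTTPS left out for brevity
    root /usr/share/nginx/html;      # placeholder path to the HTML/CSS files
    index index.html;

    location / {
        try_files $uri $uri/ =404;   # serve the file if it exists, else 404
    }
}
```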
The other issue I had already been grappling with was networking. The server's internal networking was pretty stable: nginx simply took external traffic from ports 80 and 443 and, depending on the subdomain, directed that traffic to an internal port where a Docker container was listening. The real point of complexity turned out to be DNS. With a static IP address, pointing DNS traffic at your server is simply a matter of pointing an A record at that IP. When I first set up my server, I was fortunate enough to live in a place where my provider assigned a static IP address. By the time I was considering this project, however, I had moved to a new location with a dynamic IP address. The move fortunately did not affect the majority of my services, as they used CNAME records to direct DNS traffic. But I naturally wanted to host this website at the root of my domain (kawiggles.com), something that was not feasible through an A record if the target IP address kept changing. The answer to this problem turned out to lie in the router I was using and in ALIAS records. The new router I had purchased during the move could run a Dynamic DNS (DDNS) service, which keeps a hostname resolving to my server independently of the IP address assigned by my internet provider. And while an A record can only point at an IP address, and a CNAME record cannot sit at the root of a domain, an ALIAS record can live at the root while resolving to such a DDNS hostname. Learning about these two concepts allowed me to get past the networking problem.
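Laid out as zone entries, the difference between the record types looks something like this — the IP and the `myrouter.example-ddns.net` hostname are stand-ins for whatever the provider and the router's DDNS service actually hand out:

```
; A record: points at a fixed IP, so it breaks when the provider changes it
kawiggles.com.        A      203.0.113.10

; CNAME: follows a hostname, fine for subdomains, but forbidden at the zone apex
blog.kawiggles.com.   CNAME  myrouter.example-ddns.net.

; ALIAS: resolves like a CNAME but is allowed at the root of the domain
kawiggles.com.        ALIAS  myrouter.example-ddns.net.
```

Note that ALIAS (sometimes called ANAME or "CNAME flattening") is a DNS-provider feature rather than a standard record type, so support varies by registrar.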
The most difficult and amorphous challenge I had to solve was the question of a proper development environment. When I first began to pursue this project, I had not written any serious code, especially not independent of web based tools. While I knew it was absolutely possible for me to write HTML in Notepad, the process was slow, inefficient, and error prone. And so this project was on standby until I began to pursue coding beyond just basic web development. Two issues composed this problem: how I could easily edit the source files for the site, and which editor I should use. My first solution to these two problems was to use Microsoft's VSCode, a standard development environment, to write HTML and CSS to a filesystem hosted on my NAS. This was a perfectly workable solution, but I encountered issues after a few months of use. Firstly, it is difficult to use either Windows PowerShell or VSCode's included terminal to make changes to a Linux filesystem mounted on a device running Windows. Compatibility between Linux and Windows was enough of a point of frustration that I ended up moving every one of my computers to Linux. Now came another point of awkwardness: using Windows software on a Linux system. Of course, an open-source build of VSCode is available, but the entire application opposed the philosophy I was trying to implement in my switch to Linux: total transparency and control over what was running on my computer. Downloading extensions feels clunky when a package manager like pacman is one terminal away. And why ever use VSCode's terminal when the Linux terminal is more direct? Because of this, I found myself gravitating towards vim. Eventually I came to discover that VSCode's core functionality — parser-based syntax highlighting, autocompletion, and LSP support — could be replicated using neovim and plugins.
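For a sense of scale, a minimal sketch of the kind of `init.lua` fragment that covers those three features — the plugins named here (nvim-treesitter for parser-based highlighting, nvim-lspconfig for LSP) are the common community choices, not necessarily the exact set I settled on, and a plugin manager is assumed to already be installed:

```lua
-- Illustrative init.lua fragment; plugin choices are assumptions,
-- installed separately via a plugin manager.
require('nvim-treesitter.configs').setup {
  ensure_installed = { 'html', 'css' },
  highlight = { enable = true },     -- tree-sitter parser-based highlighting
}

-- Language servers for web work; the servers themselves
-- (e.g. vscode-langservers-extracted) must be installed on the system.
require('lspconfig').html.setup {}   -- HTML completion and diagnostics
require('lspconfig').cssls.setup {}  -- CSS completion and diagnostics
```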
Setting up neovim took a considerable amount of time, but in the end the bare simplicity of the application, its near-universal compatibility with very little new setup, and its powerful functionality make it rewarding to use. The last issue to solve was figuring out how to edit my website's files more directly. Using the aforementioned NAS method, I was still required to copy the website directory into the appdata directory for nginx. This was a slow, error-prone process, as each update of the website, no matter how small, involved deleting and replacing the old files. At first I attempted to solve this problem by soft linking the NAS directory into the nginx appdata directory. That unfortunately had issues: a permissions mismatch between the two directories caused complications, with the nginx container unable to follow the symlink. The more direct option was instead altering the docker run command for the container, mapping the container's internal path for static content to the NAS directory. Finally, because the issues of redundancy and security still persisted, I learned how to use Git and GitHub. A repository for this website is available on my GitHub profile.
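That bind-mount fix amounts to one `-v` flag on the container's run command. A sketch with placeholder paths — the NAS mount point, container name, and port mappings here are illustrative, not my exact setup:

```shell
# Map the NAS directory holding the site straight into nginx's
# default web root, read-only, so edits on the NAS go live without
# any copying step. Paths and names are placeholders.
docker run -d \
  --name nginx \
  -p 80:80 -p 443:443 \
  -v /mnt/nas/website:/usr/share/nginx/html:ro \
  nginx:latest
```

One caveat worth knowing: the bind mount sidesteps the symlink problem, but the files on the NAS still need permissions that the nginx process inside the container can read.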
Was this too much for just deploying a website? Perhaps, but I think the knowledge I gained in the process is infinitely more valuable than the website itself. I've been learning to expect that learning projects of this nature will always take way more time than I expect. But what I've also found is that understanding the full infrastructure behind a project makes building future projects of that kind much easier. In a certain sense, the only reason this took so long is that a website sits so far up the technology stack, and what really consumed my time was working my way up that stack. My hope is that this kind of understanding will carry over to other learning projects. For instance, I'm hoping that the time I've spent in neovim during the development of this website will assist me in the development of my actual coding projects. I suppose the moral of my story is that understanding systems as a whole is important, but I don't think that's universally true, and I don't like stories with morals anyways. Take what you will from my anecdotes. And thank you for reading!