A New Beginning and a New Website

2023-02-14
slice of life
product management
devops
kubernetes
helm
elixir
phoenix

Last year I made a bet: I left my secure and well-compensated, albeit highly stressful, head of data job at a mid-size referral marketing company and joined a newly founded startup. As you can guess from the title of this post, that bet exploded in my face: I got fired on a whim 2 weeks before my 6-month probation period ended 🤷.

Anyhow, even though thinking about this situation still leaves a pretty bad taste in my mouth, I’d rather look forward than backward. After some thinking, I figured that this was probably the push I needed to finally start something I had wanted to do for a very long time: go indie, build simple but delightful apps, and support myself with occasional consulting.

Which meant that:

  • I needed to make a website advertising my consulting services
  • I needed to figure out a way to prototype the aforementioned apps quickly
  • And, naturally, I needed to perform an endless sequence of bureaucratic dances to register a business, set up invoicing, sort out taxes, etc.

I’ll leave the last point for another time, since it still gives me nightmares - thankfully, I’m mostly done with it for now.

The first 2 points, though, have a pretty nice tech challenge built in, so let’s dig in.

The Website

The website you’re now on is version #4 of my “home page” of sorts.

Version 1

The first version was a static web page with my (now very old) blog posts. It was a typical GitHub Pages with Jekyll setup. I hated it and pretty much never updated it.

Version 2

Last year, after my unfortunate startup experience, I played for a bit with Zola (I made a video about it some time ago). The idea was to stay with GitHub Pages, but with Zola instead of Jekyll, and with some basic low-code integrations for things like a contact form on the landing page.

Yet, despite Zola being faster, friendlier, and an overall nicer experience than Jekyll, it doesn’t support components, and I was constantly hitting various small but annoying limitations. Since nowadays I prefer Tailwind for my CSS needs, not being able to easily extract a “button” or a “project card” component made the development experience slow and painful.

Version 3

Thus, I threw that prototype away and tried a completely different approach. My experience with Zola made me realize that I really didn’t want to go the static route - I’d rather bite the bullet and build a typical web app, paying with some additional complexity, time, and hosting costs, but having full control of my digital presence.

And when I think about web programming, the most natural choice is, of course, Elixir and the Phoenix Framework. Elixir (together with Rust, Python, and Julia) has been my favorite language for some years now, and Phoenix speeds up the development of web apps substantially thanks to LiveView.

So version #3 was built with this setup. Overall, I was mostly satisfied with it, and the majority of the landing page text was written for that version. That’s how it looked:

Version #3 Header You can see that both some text & graphics were reused here.

Version #3 Content Some DRAMATIC color changes and such.

Version 4

I was almost ready to deploy it when I saw the blog post announcing the new Phoenix 1.7-rc. It so happened that this update addressed most of the non-ideal things in version 3:

  • I couldn’t reuse LiveView function components in static templates.
  • Having both controller views & live views and switching between them and the corresponding coding styles was somewhat jarring.
  • Tailwind integration was a bit flaky.

So, I decided to try porting to that new pre-release version and see how it went. Plus, I was falling out of love with the white/black contrast theme and wanted to try something a bit more chill visuals-wise.

I created a new project, ported and edited the content I already had, and added some new features: e.g., this blog uses NimblePublisher, and I’m very happy with it.
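NimblePublisher keeps posts as Markdown files that get compiled straight into the application. A minimal sketch of how such a blog context could be wired up - module names and paths here are my illustrative guesses, not this site’s actual code:

```elixir
defmodule Inari.Blog.Post do
  # The struct NimblePublisher builds for each Markdown file;
  # :title and :date are expected in the file's Elixir-map frontmatter.
  @enforce_keys [:id, :title, :date, :body]
  defstruct [:id, :title, :date, :body]

  def build(filename, attrs, body) do
    # Derive the post id from the file name, e.g. "new-website.md" -> "new-website".
    id = Path.basename(filename, ".md")
    struct!(__MODULE__, [id: id, body: body] ++ Map.to_list(attrs))
  end
end

defmodule Inari.Blog do
  use NimblePublisher,
    build: Inari.Blog.Post,
    from: Application.app_dir(:inari, "priv/posts/**/*.md"),
    as: :posts

  # @posts is injected at compile time by NimblePublisher; newest first.
  @posts Enum.sort_by(@posts, & &1.date, {:desc, Date})

  def all_posts, do: @posts
end
```

Since the posts live in module attributes, serving the blog requires no runtime file I/O at all.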

A nice thing: Phoenix has very good defaults, so getting a high Lighthouse score is easy.

Lighthouse score for this website Lighthouse score for this website

The Forge

But there’s one more thing I decided to try while building version #4 of the website. Web development is notorious for those highly repetitive but necessary dances: set up an authentication and authorization system here, add analytics there, figure out how to do CD for this, migrate that database, etc.

Considering that I no longer have a team of engineers by my side, the thought of doing all those nearly identical setup chores N times, once per prototype, didn’t appeal to me.

And thus the idea struck: I already have a website! I have an authentication system there (mix phx.gen.auth to the rescue), and I will be setting up infrastructure, a deployment pipeline, and analytics for it anyway => why not use it as the basis for my prototypes?

One of the reasons I love Elixir is that it’s a highly modular language by design. I’ve written about various levels of modularity in Elixir before. In essence, I don’t even need to create an umbrella app to take advantage of it. A prototype for the app is often just one screen with this main interaction you want to nail. And LiveView provides one of the fastest ways to prototype yet. Tailwind makes styling effortless and prevents style bleed. Finally, functional components can be customized with slots & attributes, so I can also relieve myself of the “let’s style yet another button” quests.
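For instance, a shared button component with attributes and a slot takes only a few lines of Phoenix 1.7’s function-component API - a sketch, where the module name and Tailwind classes are illustrative:

```elixir
defmodule InariWeb.CoreComponents do
  use Phoenix.Component

  attr :variant, :string, default: "primary", values: ["primary", "ghost"]
  attr :rest, :global, include: ["type", "disabled"]
  slot :inner_block, required: true

  def button(assigns) do
    ~H"""
    <button
      class={[
        "rounded-lg px-4 py-2 font-semibold",
        @variant == "primary" && "bg-zinc-900 text-white hover:bg-zinc-700",
        @variant == "ghost" && "bg-transparent text-zinc-900 hover:bg-zinc-100"
      ]}
      {@rest}
    >
      <%= render_slot(@inner_block) %>
    </button>
    """
  end
end
```

Any template can then render `<.button variant="ghost" type="submit">Save</.button>` - no more styling yet another button by hand.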

So, my prototyping solution pretty much looks like this now:

scope "/forge", InariWeb do
  pipe_through([:browser, :require_authenticated_user])

  live("/xyz", Forge.XYZLive.Index)
  ...
end

Phoenix router code snippet

I can easily test the code in different browsers and on mobile - I just need to push it, log in to this website, and I can access any of the prototypes at /forge/xyz. And anything I add to any of my prototypes can be extracted and reused in any other prototype or on the main website.

Now, when it comes to extracting those into independent products, I see 3 possible solutions:

  1. Just create a new project and copy-paste the relevant code with minimal modifications.
  2. Convert this website’s project into an umbrella.
  3. Follow the monorepo approach without an actual umbrella + create libs for reuse across projects in the monorepo.

I don’t know which road I’ll take yet, but probably I will try #2 and #3 and see which feels nicer.
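For option #3, reuse would likely go through path dependencies. A hypothetical extracted app could pull a shared UI library into its mix.exs like this (all names here are made up for illustration):

```elixir
# apps/some_product/mix.exs - hypothetical monorepo layout, no umbrella
defp deps do
  [
    {:phoenix, "~> 1.7"},
    # shared function components, extracted from the main website
    # into a plain library at libs/inari_ui
    {:inari_ui, path: "../../libs/inari_ui"}
  ]
end
```

The nice part is that each app stays independently releasable while still compiling the shared lib from source.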

Project Cards

Since the website was now dynamic, I could get rid of some repetition and add some syntax sugar for myself. For example, the project cards on the landing page are not written in the template directly: they are generated at compile time by reading a TOML file that looks like this:

[[projects]]
id = "xyz_nlp"
title = "Native Language Features for a BI/AutoML Platform"
year = 2016
position = "Lead Developer"
consulting = true
company = { name = "...", website = ".." }
tags = ["AI/ML", "Product Management", "Data Science", "Data Engineering"]
tech = ["Scala", "Python", "Clickhouse", "AWS", "Kubernetes", "Docker"]
task = """
A BI/AutoML B2B SaaS company wanted to ...
"""
solution = """
...
"""
outcome = """
...
"""
feedback = """
...
"""

[[projects]]
...

priv/data/projects.toml file describes all project cards on the landing page

This file is read by the landing page’s live view at compile time:

defmodule InariWeb.LandingLive.Index do
  use InariWeb, :live_view

  @external_resource Path.join(:code.priv_dir(:inari), "data/projects.toml")

  @projects Path.join(:code.priv_dir(:inari), "data/projects.toml")
            |> Toml.decode_file!()
            |> Map.get("projects")

  @project_tags Enum.flat_map(@projects, fn project -> project["tags"] end)
                |> Enum.sort()
                |> Enum.dedup()
  ...
end

Compile-time TOML reading and processing inside a live view
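Inside the same module, mount can then hand this compile-time data to the socket - a sketch, where the assign names are my assumptions based on the template:

```elixir
def mount(_params, _session, socket) do
  socket =
    assign(socket,
      # computed once at compile time via the module attributes above
      projects: @projects,
      project_tags: @project_tags,
      # UI state: no tag filter and no expanded card initially
      selected_project_tag: nil,
      expanded_project_id: nil
    )

  {:ok, socket}
end
```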

And the resulting data is passed to the live view on mount, after which it’s rendered in a template:

<div class="grid grid-cols-1 lg:grid-cols-3 gap-8 py-4 duration-400">
  <%= for project <- @projects do %>
    <%= if is_nil(@selected_project_tag) || (@selected_project_tag in project["tags"]) do %>
      <%= if is_nil(@expanded_project_id) || project["id"] != @expanded_project_id do %>
        <.project_card_collapsed project={project} />
      <% else %>
        <.project_card_expanded project={project} />
      <% end %>
    <% end %>
  <% end %>
</div>
...

lib/inari_web/live/landing_live/index.html.heex: landing page template

Which ends up looking like this:

Project Cards View Collapsed project cards view

An Expanded Project Card An expanded project card

So, adding new projects and doing things like filtering by category is slightly cleaner. Of course, there’s a downside to this approach too: you take a small compilation-speed hit and there’s more data to store in the live view process, but this is marginal and not perceptible at my load levels.

But all of that is only cool as long as I don’t have to deal with manual deploys & VM provisioning.

Infra & Continuous Deployment

Since I now wanted to host not only the landing website itself, but also a number of prototypes, I wanted to have an infrastructure and CD setup that:

  • can be easily scaled
  • does have some basic self-healing ability
  • allows me to route traffic flexibly without changing DNS records => I need some kind of a load balancer
  • is not insanely expensive

I’m an old fan of Kubernetes. And yes, Elixir has amazing distribution abilities built in, but you do need an orchestration layer on top anyway => you cannot expose distributed Elixir to an unprotected network, and there’s nothing in Elixir itself to help you with resource limiting and node provisioning. So over the years I’ve found that combining the two is the best way forward - it abstracts the hardware quite well, and you can safely use Elixir’s distributed programming capabilities, if you need them, inside a VPC managed by Kubernetes.
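If I ever need clustering, libcluster’s Kubernetes DNS strategy makes node discovery inside such a VPC a few lines of config - a sketch, not my actual setup, with a hypothetical headless service name:

```elixir
# config/runtime.exs - clustering via libcluster (sketch)
config :libcluster,
  topologies: [
    k8s_dns: [
      # Discovers peer pods through a Kubernetes headless service,
      # so BEAM nodes only ever see each other inside the cluster network.
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "inari-headless",
        application_name: "inari"
      ]
    ]
  ]
```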

Of course, there’s another approach I could’ve taken, and which I plan to explore more for some of my prototypes in the future: going completely to the edge/serverless with fly.io, Cloudflare Workers (those don’t support Elixir, though), GCP’s Cloud Run, and such. However, Fly felt a bit too expensive, and I didn’t want to jump through additional hoops or deal with the limitation of not being able to run persistent processes. So, Kubernetes it was.

But which Kubernetes? Kubernetes where? I had the misfortune of managing the control plane & other Kubernetes components myself before, and I didn’t want to spend time on that. A managed control plane costs ~€70 per month on most clouds. Add to that the insane bandwidth costs, and this option didn’t look that attractive either.

So I spent some time looking around, and after some spreadsheeting and calculating my approximate usage, DigitalOcean looked like the best choice. Managed Kubernetes is free there - you only pay for compute, storage, and network - and overall a lot of the platform’s services & capabilities looked nice to me.

Cloud Prices Comparison Spreadsheet The cost/capabilities comparison spreadsheet for different cloud providers

I ran mix phx.gen.release --docker and wrote a Helm chart, the main part of which you can see below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "inari.fullname" . }}
  labels:
    {{- include "inari.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "inari.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "inari.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repo }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 4000
              protocol: TCP
          env:
            - name: PHX_HOST
              value: "{{ .Values.inari.host }}"
            ...
          ...

The Deployment part of the Helm chart

apiVersion: v1
kind: Service
metadata:
  name: {{ include "inari.fullname" . }}
  labels:
    {{- include "inari.labels" . | nindent 4 }}
spec:
  type: "ClusterIP"
  ports:
    - port: {{ .Values.inari.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "inari.selectorLabels" . | nindent 4 }}

The Service part of the Helm chart

I set up a DigitalOcean Kubernetes cluster and added a couple of additional resources for configuring an Nginx Ingress Controller, an SSL certificate manager, and the Ingress object that routes traffic.

apiVersion: v1
kind: Service
metadata:
  annotations:
    # note custom DigitalOcean-specific annotations
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
    ...
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ...
spec:
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
  allocateLoadBalancerNodePorts: true
  ports:
  - appProtocol: http
    name: http
    nodePort: ...
    port: 80
    protocol: TCP
    targetPort: http
  ...

This Service object is used to configure Kubernetes-managed load balancer on DigitalOcean

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: inari-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  tls:
  - secretName: ...
    hosts:
    - 'lakret.net'
    - 'www.lakret.net'
  rules:
  - host: lakret.net
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: inari
            port:
              number: 4000
  ...

The Ingress object configures Nginx to route traffic to corresponding services

There were a couple of gotchas - e.g., setting up certificate renewal was somewhat counter-intuitive and some documentation was outdated - but I figured it out. Compared to the insane bugs I encountered when working with GKE, it felt like a walk in the park anyway! And the standard Kubernetes dashboard is supported out of the box, which is way better than, for example, Google’s attempt at making a “better” one.
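For reference, the certificate-renewal side boils down to a cert-manager ClusterIssuer plus an annotation on the Ingress; roughly like this, where the email and secret names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
```

With a `cert-manager.io/cluster-issuer: letsencrypt-prod` annotation on the Ingress, cert-manager provisions and renews the certificate into the secret referenced in the Ingress’s tls block.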

Anyway, there are only four things missing for basic deploys & CD:

Container registry

I just went with GitHub Packages for it. Configuring image pull secrets in Kubernetes is easy enough.
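The pull secret itself is a one-liner; something like the following, where the username and token are placeholders, and the resulting secret then gets referenced via `imagePullSecrets` in the Deployment:

```shell
# Create registry credentials for GitHub Packages (ghcr.io); values are placeholders.
kubectl create secret docker-registry ghcr-creds \
  --docker-server=ghcr.io \
  --docker-username=lakret \
  --docker-password="$GHCR_PAT"
```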

CD Pipeline

GitHub Actions is majestic and, at this level of complexity, free. Since this is a very simple setup right now, I only have 2 jobs: one always runs on commit and does a test run & docker push to the registry (this also ensures that the docker build works), and another runs only on merge / commit to master and does the actual deploy.

...
- name: Deploy to DigitalOcean Kubernetes
  run: |
    helm upgrade --install \
      --set image.repository=ghcr.io/lakret/inari \
      --set image.tag=${{ github.sha }} \
      --set inari.host=lakret.net \
      --set inari.port=4000 \
      inari $GITHUB_WORKSPACE/ops/charts/inari
- name: Verify deployment
  run: kubectl rollout status deployment/inari
...

The deploy steps in the GitHub Actions pipeline
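The two-job gating described above maps to a workflow skeleton along these lines - a sketch, with job names and step details of my invention:

```yaml
on:
  push:
    branches: ["**"]

jobs:
  test_and_push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # run mix test, then build & push the image to ghcr.io

  deploy:
    # only deploy from master, and only after the image is pushed
    if: github.ref == 'refs/heads/master'
    needs: test_and_push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # helm upgrade --install, as shown in the deploy steps snippet
```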

CDN

Cloudflare, of course. Nowadays, it also takes care of basic analytics, though I plan to switch to Plausible Analytics later on.

Cloudflare Analytics Some of the data available on Cloudflare Analytics free plan

And, since both of these analytics solutions are privacy-preserving, no need for the annoying cookie pop-ups!

Uptime Monitoring & Alerts

DigitalOcean takes care of that, since they now have free uptime & latency monitoring built in with basic alerting capabilities included.

DigitalOcean Latency Monitoring DigitalOcean’s Latency Monitoring

Europe & US East seem fast enough (and this is, as far as I can tell, the number for a full page reload, without CDN caching), and I plan to add another cluster in the US West later on.

Let’s hope that one day adding Southeast Asia cluster will be economically viable too :)

And thus, the new website went live. So far, I’ve had no issues with uptime, and the CD pipeline is very stable and quite fast, considering that I’ve spent exactly 0 effort optimizing it so far :)

The CD pipeline in action The CD pipeline in action

What’s Next?

I started promoting my consulting services this week, and I’m looking for clients - so if you or the company you work for needs consultants, ping me! The contact form is on the landing page.

I’ve also started working on the first prototype in my forge, so stay tuned - I will document my indie journey. And of course, I plan to use this opportunity to produce more content: my YouTube channel is the focus, but I will also try to write here. At least I don’t hate my blog anymore :)

If you enjoyed this content, you can sponsor me on GitHub to help me produce more videos / educational blog posts.

And if you're looking for consulting services, feel free to contact me.