Balancing best practices when your time is limited


🔔 This article was originally posted on my site, [MihaiBojin.com](https://MihaiBojin.com/projects/golang/best-practices-vs-time?utm%5Fsource=Hashnode&utm%5Fmedium=organic&utm%5Fcampaign=top-promo "MihaiBojin.com"). 🔔


In my previous article, I talked about focusing on product creation and avoiding the trap of endless technical research and other non-functional (yet fun) tasks.

And this is very hard for me because I love doing that research!

Still, I set myself a goal, and I was going to achieve it!

I have started working on a larger project in public. I will be writing about it as I build it, so stay tuned if that kind of journey interests you. You can also subscribe to my newsletter if you wish!


I am trying to bridge the software development skills I've garnered over a 17-year career at various tech companies with the scrappiness required to build a project in my spare time. Developing software on a large team at a big-ish company requires certain quality gates, gates I am very proud to uphold in my day job! However, as a lone developer trying to get a project off the ground in what little personal time I have, I have to make some tradeoffs in order to launch anything at all.

Gergely talks about the extremes of shipping software to production. Many projects start the YOLO way, manually copying binaries to the production server and praying it won't crash (only to one day find that it did).

As much as this is the most pragmatic way to start a project (it's all about the code, worry about everything else later), I strongly believe in CI/CD pipelines and reproducible/reprovisionable environments!

Hence, one hard rule for this project: every commit on main reaches the production environment without any manual follow-up.


The task at hand is to find a mail trap service that can act as an MX host, intercept all emails sent to my domain, and save them in a database for further processing. I think the technical term for this is "Inbound Parse Webhook", but I like "mail trap" more.

Starting out, I thought this would be the easy bit. All I had to do was get a service like Mailgun or SendGrid to intercept all my emails and save them into a database. The problem, however, was that both services' free plans are pretty limited, and costs shoot up with email volume.

Since using either of these services involves a learning curve, and because this is a personal project that will not generate revenue, I need to be mindful of any present and future costs. It only takes one spam bot finding my domain to create a headache...

So, I decided to write it from scratch!

Shocking, I know! 😀

This isn't quite in line with my earlier thoughts, maybe rightly so, but the approach has a few benefits:

  • I am learning a lot about SMTP - it's more complicated than you'd think.

  • I am refreshing my Golang skills.

  • Instead of risking waking up one day to an enormous bill, I risk the service getting overwhelmed and no longer processing requests; that's a risk I can live with, for now!

With that out of the way, the next step was to choose a tech stack and get it done. I mentioned Golang before, but I didn't start there. My first attempt was to use NodeJS with the Mailparser library; however, I found it cumbersome. It returned a mix of serializable and non-serializable objects, the latter of which I could not easily store in a database.

Again, it came down to time, and I felt that it was easier to build the same feature using Golang, with the added long-term benefit of generating tiny statically linked binaries (and containers) versus having to ship all of NodeJS and its many dependencies.
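
To make that a bit more concrete, here is a rough sketch of the kind of SMTP listener I have in mind, built on the go-smtp library that ended up in my stack (listed further down; I'm assuming the `github.com/emersion/go-smtp` import path). The Backend/Session method signatures differ slightly between go-smtp versions, and the real service also parses and stores the message, so treat this as an illustration of the shape of the code rather than the code itself:

```go
package main

import (
	"io"
	"log"

	"github.com/emersion/go-smtp"
)

// backend creates a new session for every inbound SMTP connection.
type backend struct{}

func (b *backend) NewSession(c *smtp.Conn) (smtp.Session, error) {
	return &session{}, nil
}

// session handles a single SMTP transaction (MAIL FROM, RCPT TO, DATA).
type session struct {
	from string
	to   []string
}

func (s *session) Mail(from string, _ *smtp.MailOptions) error {
	s.from = from
	return nil
}

func (s *session) Rcpt(to string, _ *smtp.RcptOptions) error {
	s.to = append(s.to, to)
	return nil
}

func (s *session) Data(r io.Reader) error {
	// The raw RFC 5322 message arrives here; the real service parses it
	// and writes it to the database instead of just logging its size.
	raw, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	log.Printf("received %d bytes from %s for %v", len(raw), s.from, s.to)
	return nil
}

func (s *session) Reset()        { s.from, s.to = "", nil }
func (s *session) Logout() error { return nil }

func main() {
	srv := smtp.NewServer(&backend{})
	srv.Addr = ":25"               // MX hosts receive mail on port 25
	srv.Domain = "example.com"     // placeholder; the real domain goes here
	srv.MaxMessageBytes = 10 << 20 // cap inbound messages at ~10 MB
	log.Fatal(srv.ListenAndServe())
}
```

Everything interesting happens in `Data`: that's where the raw message shows up, and where the parsing and database write belong.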

My initial thought was to rely on Google Cloud or AWS to deploy and run containers. This didn't pan out too well: the public endpoint of AWS Lightsail's container service supports HTTPS only, with no TCP or UDP traffic, and the story is quite similar for Google Cloud Run (see "HTTPS URLs").

At this point, I was faced with a decision.

Go back to deploying on Kubernetes (expensive), get a server and install Nomad (cool, but time-consuming), or just use a virtual machine.

I decided to keep things as simple as possible and try DigitalOcean's Droplets - cheap, fast to start up, and they have a Terraform provider.

I use Terraform to manage any infrastructure and configuration my projects need. Terraform would have been great for managing the whole flow if a container management solution had panned out. Instead, I am stuck with VMs, which are a bit more heavyweight, even in the age of super-fast Firecracker micro VMs. Moreover, reprovisioning a VM is not as fast as I'd like (read: instant). So I went for a mixed solution: keep the VM running in perpetuity (but be able to reprovision it from my Terraform config, since VMs do fail, get moved around, and restart from time to time) and SCP the updated binary to the running VM with every commit.

I thought I had everything I needed with DigitalOcean's Droplets, right up until I discovered that their floating IPs do not support SMTP traffic.

Since I'm operating one (maybe more) MX host, I'd like to reuse the same public IPs and avoid changing DNS records. Furthermore, in the unlikely event that the VM needs to be reprovisioned, the IPs would likely change, at which point DNS caching could become a problem.

This all led me back to my old friends at Hetzner. I rented a server from them ten years ago, and I'm happy to see they're still around and thriving! Their hardware is based in Europe, which is a plus for privacy. Their cloud VMs are super fast. They have a Terraform provider. And their floating IPs support SMTP traffic (granted, you pay 4 EUR/month for using them in the first place, but hey, you can't have everything!).

Bingo!

Let me segue into a side topic: there is never a perfect solution when building software; there are always tradeoffs! I could easily have stuck with DigitalOcean and accepted a new IP every time the VM was restarted; I suspect this would be such a rare occurrence that it would be a non-issue. However, if you build things expecting them to fail, you'll never have the motivation to complete them. So I'm dreaming that this project becomes a successful site with many visitors, one that needs many VMs to process all the inbound email traffic, and I want the flexibility to get there by building on a solid foundation. I am not currently using Hetzner's Floating IPs because I don't feel like doubling my monthly cost. When the project is complete, I will start incurring that cost (and probably double it again by spinning up a copy of the service in a different availability zone) to make it production-grade.

With all of this behind me, I had my stack:

  • Hetzner Cloud (2 vCPUs / 2 GB RAM at 5 EUR/month; about four times cheaper than Lightsail's 7 USD / 0.25 vCPU offering)

  • Hetzner Floating IPs (4 EUR/month, optional)

  • Cloudflare for DNS records (free)

  • Terraform for provisioning the infrastructure and deploying the binary (free, with state hosted on GitHub - a no-no for teams of two or more engineers, but hey, it works for me!)

  • Golang + go-smtp for processing incoming emails

  • GoReleaser to package the binary for Debian distros and a few systemd scripts to manage the install/reinstall flow

  • Google Cloud Firestore as a database (free for my level of traffic; there's a minimal write sketch after this list)

  • Google Cloud Logging for centralized log ingestion and processing (free for my level of traffic)

  • and of course GitHub to host my code (free)
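
Since Firestore is the piece that actually persists the trapped emails, here's an equally rough sketch of a write using the official Go client (`cloud.google.com/go/firestore`). The project ID, collection name, and field names are placeholders, not my actual schema:

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/firestore"
)

func main() {
	ctx := context.Background()

	// "my-project-id" is a placeholder; credentials are picked up from the
	// environment (e.g., a service account key available on the VM).
	client, err := firestore.NewClient(ctx, "my-project-id")
	if err != nil {
		log.Fatalf("firestore.NewClient: %v", err)
	}
	defer client.Close()

	// Store one received email as a document in an "emails" collection.
	_, _, err = client.Collection("emails").Add(ctx, map[string]interface{}{
		"from":       "sender@example.com",
		"to":         []string{"inbox@example.com"},
		"raw":        "the raw RFC 5322 message goes here",
		"receivedAt": time.Now(),
	})
	if err != nil {
		log.Fatalf("failed to store email: %v", err)
	}
}
```

In the real service, a write like this happens at the end of the SMTP `Data` handler from the earlier sketch.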

Phew, that was longer than I thought!

I mostly kept this to the narrative, plus a couple of rough sketches, as it is very time-consuming to develop useful code excerpts. Let me know on Twitter if you're interested in a proper technical breakdown/tutorial.

Until next time, Mihai


If you liked this article and want to read more like it, [please subscribe to my newsletter](motivated-founder-807.ck.page/db1cf284bf "newsletter link"); I send one out every few weeks!
