As a Software Engineer with over six years of experience building and maintaining Drupal sites, I’ve spent a lot of time in the weeds of enterprise-level systems. But when it came to my own personal blog, I found myself in a familiar spot: using WYSIWYG platforms like Squarespace and Wix. They’re great for getting started, but I missed the power and flexibility of a real CMS. I wanted to build something myself, something professional.

This project started with two simple, but slightly unusual, goals. First, I wanted to build the entire thing from my HP Elite Dragonfly Chromebook, using its built-in Linux environment to prove that you don’t need a massive desktop rig for serious cloud development. Second, I wanted to see if I could use an AI, specifically Google’s Gemini, as a collaborative partner to brainstorm, troubleshoot, and build out the infrastructure. This is the story of how it all came together.

The Blueprint: Designing a Modern, Scalable System

The first step was to lay out the architecture. The goal wasn't just to get a website online; it was to build a system that was scalable, secure, and followed modern best practices. We settled on a cloud-native design that decouples the web server from the database. For the web server, a Google Compute Engine VM running Debian gave me full control over the environment. Instead of installing a database on that same server, we opted for a managed Google Cloud SQL instance, which takes most of the pain out of database administration and lets the web tier and the database scale independently. For DNS, security, and content delivery, Cloudflare was the obvious choice to sit in front of everything.
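For the curious, the initial provisioning boils down to a couple of gcloud commands. This is a rough sketch rather than my exact setup; the instance names, zone, machine type, and tier below are placeholders:

```bash
# Create the Debian web server VM (name, zone, and machine type are placeholders)
gcloud compute instances create blog-web-1 \
  --zone=us-central1-a \
  --machine-type=e2-small \
  --image-family=debian-12 \
  --image-project=debian-cloud

# Create the managed MySQL instance for Drupal (name and tier are placeholders)
gcloud sql instances create blog-db-1 \
  --database-version=MYSQL_8_0 \
  --tier=db-f1-micro \
  --region=us-central1
```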

With the cloud infrastructure designed, the focus shifted to the development workflow. For a true local sandbox, we chose DDEV, a fantastic tool that uses Docker to spin up an isolated replica of the production environment right on my Chromebook. This lets me build and test new features without ever touching the live site. The final piece of the puzzle was the CI/CD pipeline: GitHub for the code repository, and GitHub Actions to automate the deployment process. The dream was simple: git push, and watch the changes go live.
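Spinning up that sandbox takes only a handful of commands. A minimal sketch, assuming a Drupal 10 project with the standard web docroot:

```bash
# One-time setup inside the project directory on the Chromebook
ddev config --project-type=drupal10 --docroot=web --php-version=8.3
ddev start

# Install dependencies and open the local site in a browser
ddev composer install
ddev launch
```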

The Gauntlet: A Series of Unfortunate (But Instructive) Errors

Of course, no system design plan survives first contact with reality. The journey from blueprint to a working pipeline was a series of hilarious and humbling troubleshooting sessions—a true "permissions gauntlet." The first challenge was a classic dependency mismatch. My server was running PHP 8.3, but my local machine wasn’t, leading to a cascade of Composer errors. This was a great reminder of the importance of keeping your development and production environments in perfect sync.
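The root cause is that Composer resolves and validates dependencies against whatever PHP version it runs under, so a version mismatch between environments produces lock files and platform checks that fail on one side or the other. One way to enforce that sync (a sketch, not necessarily my exact commands):

```bash
# Point the DDEV sandbox at the same PHP version as the server
ddev config --php-version=8.3 && ddev restart

# Pin Composer's platform check so dependency resolution always assumes
# PHP 8.3.0, no matter which PHP binary happens to run Composer
composer config platform.php 8.3.0
```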

Next came the disappearing IP address. After resizing my VM for the first time, the site went down with a Cloudflare 524 timeout error. It turned out that Google Cloud had assigned the server a new ephemeral IP address on restart, and I had to scramble to update it in Cloudflare, my database connection settings, and my GitHub secrets. I promptly reserved a static IP for the VM to make sure that never happened again. The real boss battle, however, was getting the CI/CD pipeline to work. Our initial attempts to deploy code over SSH were met with a wall of Permission denied (publickey) errors. We pivoted to a Google Cloud service account, a more modern and secure way to handle authentication, and finally got connected.
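Reserving the address is a one-liner: it promotes the VM's current ephemeral IP into a static one that survives stops, restarts, and resizes. A sketch with placeholder values:

```bash
# Promote the VM's current external IP to a reserved static address
# (the name, example IP, and region are placeholders)
gcloud compute addresses create blog-static-ip \
  --addresses=203.0.113.10 \
  --region=us-central1
```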

But even then, the server's own Linux permissions fought back: the deployment script couldn't write files to the protected web directory. We solved this with a more robust deployment pattern. The script now copies the new code to a temporary staging directory on the server, then uses a sudo command to sync the files into the live directory.

That was a huge breakthrough, but it led to one final, heart-stopping moment. The script, in its quest to make the live server a perfect mirror of the repository, deleted my entire user-uploaded files directory, because it wasn't in the Git repository. Thankfully, I had snapshots of the server's disk and was able to restore the files. It was a stark lesson in the power and danger of automated scripts, and we immediately added an --exclude flag to the sync command to protect that directory forever.
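For anyone building a similar pipeline, the pattern ends up looking roughly like this. It's a sketch rather than my exact script: the host, user, and paths are placeholders, and web/sites/default/files is simply Drupal's usual upload location.

```bash
# Step 1 (from CI): copy the new release somewhere the deploy user can write
rsync -az ./ deploy@example.com:/home/deploy/staging/

# Step 2 (on the server): mirror staging into the live docroot with sudo.
# --delete keeps the live site an exact copy of the repo; --exclude is the
# guard that now protects user uploads from that mirroring.
sudo rsync -a --delete \
  --exclude='web/sites/default/files' \
  /home/deploy/staging/ /var/www/html/
```

The key detail is that rsync's --delete never touches excluded paths, so the uploads directory stays safe no matter what the repository contains.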

The Grand Finale and What's Next

After navigating the permissions gauntlet and surviving the great file deletion, the final pieces clicked into place. The drush commands started working, the pipeline lit up with green checkmarks, and the automated deployment was a success. The system now works exactly as designed. I can make a change on my Chromebook, push it to GitHub, and watch as my new feature or blog post goes live automatically, with automated tests confirming everything is good to go.
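The post-deploy step itself boils down to a couple of drush calls; a minimal sketch of the kind of commands the pipeline finishes with:

```bash
# Standard Drupal post-deployment housekeeping
drush deploy -y   # database updates, config import, cache rebuilds
drush status      # sanity check that the site still bootstraps
```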

This project has been an incredible learning experience. It proved that a Chromebook is more than capable of being a primary development machine, and collaborating with an AI like Gemini was like having a tireless, knowledgeable partner to bounce ideas off of and help debug cryptic error messages.

So, what’s next? Now that the foundation is solid, the fun part begins. In Part 2, I’ll dive into the process of creating a custom, minimalist theme for the site and start building out the features that will make this blog my own. Stay tuned.
