As a Senior Web Engineer, I’ve spent years refining the art of talking to machines. Whether it’s writing clean React components, configuring complex Drupal systems, or architecting solutions on Google Cloud Platform, my comfort zone is the hands-on "Code" tab. So when Google AI Studio launched the new "vibe coding" feature, I was excited to put it to the test by creating my passion project: a Pixel Buyer’s Guide, now live at pixel.michaelbtech.com.
What I didn't expect was for this new way of development to challenge me not as a coder, but as a communicator.
The High-Level Hurdle
For those unfamiliar, "vibe coding" is less about syntax and more about conceptual intent—telling the large language model what you want, rather than how to build it. It’s supposed to be easier, but for someone like me, who defaults to technical specs, it was surprisingly difficult.
My initial prompts were too granular, too focused on technical implementation details: "Use Tailwind CSS utility classes," "Implement a flex layout for mobile view," or "Create a state variable for the selected model." The LLM was capable enough to execute these instructions, but iteration was slow because I was micromanaging it.
The breakthrough came when I forced myself to step back. Instead of dictating the code, I started describing the outcome and the user experience. My prompts changed to high-level statements like, "Design an aesthetically clean, dark-mode focused interface for choosing a new Pixel device," or, "Create a decision matrix that compares the latest Pixel phones, focusing on camera and battery life."
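To make that "decision matrix" prompt concrete, here's a rough sketch of the kind of logic it implies, in the React-friendly TypeScript I'd normally write by hand. The model names are real Pixel devices, but the spec ratings, weights, and function names are all illustrative, not the guide's actual code or data:

```typescript
// Hypothetical sketch of a buyer's-guide decision matrix.
// Spec ratings (0-10) are illustrative, not real benchmark data.
interface PixelSpec {
  model: string;
  camera: number;  // 0-10 rating
  battery: number; // 0-10 rating
}

const PHONES: PixelSpec[] = [
  { model: "Pixel 9 Pro", camera: 9, battery: 7 },
  { model: "Pixel 9", camera: 8, battery: 8 },
  { model: "Pixel 9a", camera: 7, battery: 9 },
];

type Weights = { camera: number; battery: number };

// Weighted score: the weights express how much this buyer cares
// about each criterion (they should sum to 1).
function score(p: PixelSpec, w: Weights): number {
  return p.camera * w.camera + p.battery * w.battery;
}

// Return the best-scoring model for a given set of priorities.
function recommend(w: Weights): string {
  return [...PHONES].sort((a, b) => score(b, w) - score(a, w))[0].model;
}
```

With these sample numbers, a battery-first weighting like `{ camera: 0.2, battery: 0.8 }` surfaces the Pixel 9a, while a camera-first weighting favors the Pixel 9 Pro; the point of the high-level prompt was that I described this outcome instead of writing the function.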
This forced me to pause my engineering brain and activate my product brain. It was a rigorous exercise in conceptualizing the goal entirely outside of the technical framework required to achieve it.
An Unexpected Stakeholder Skill
The most valuable takeaway from this experiment isn't a new coding trick; it's a profound improvement in my communication skills.
In my day job, I often collaborate with project managers, product owners, and other non-technical stakeholders at Phoenix Children's. I know my way around the technical stack (JavaScript, AngularJS, React), but translating a complex technical architecture into an accessible business goal can be tough.
Vibe coding is the perfect training ground for this. It demands that I articulate the why and the what clearly before delving into the how. If I can successfully prompt an LLM to build a web app just by describing its "vibe" and purpose, I can certainly explain the value of a complex API change to a stakeholder who only cares about the end-user benefit. It teaches me to lead with value, not with variables.
Looking Ahead: Seeking Complex Backend Challenges
The "Pixel Buyer's Guide" is a solid start, but right now it's mostly a static front-end experience. To really stress-test the capabilities of AI Studio and challenge myself with a genuinely useful feature, I'm ready to add some complexity to the backend.
I’m aiming for something that requires complex data modeling, API integration, or persistent user data, utilizing technologies like Firestore, which I’ve enjoyed working with on other projects. The goal is to evolve the simple guide into a powerful, data-driven tool for all my fellow Android enthusiasts.
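As a sketch of the data modeling that persistent user data would involve, here's one shape a per-user preferences record might take, e.g. one Firestore document per user. Everything here is hypothetical (the field names, the `mergePrefs` helper, and the defaults are my illustration, not a committed design), and I've kept it as plain TypeScript so the merge behavior is easy to reason about before wiring in the SDK:

```typescript
// Hypothetical shape for a persisted user-preferences document,
// e.g. one Firestore document per user. All names are illustrative.
interface GuidePrefs {
  userId: string;
  weights: { camera: number; battery: number }; // buyer priorities
  savedModels: string[];                        // shortlisted devices
  updatedAt: number;                            // epoch millis
}

const DEFAULT_WEIGHTS = { camera: 0.5, battery: 0.5 };

// Apply a partial update onto whatever already exists (similar in
// spirit to a Firestore set with { merge: true }), falling back to
// defaults for a first-time user.
function mergePrefs(
  existing: Partial<GuidePrefs>,
  update: Partial<GuidePrefs>,
  userId: string
): GuidePrefs {
  return {
    userId,
    weights: update.weights ?? existing.weights ?? DEFAULT_WEIGHTS,
    savedModels: update.savedModels ?? existing.savedModels ?? [],
    updatedAt: Date.now(),
  };
}
```

Keeping the merge logic as a pure function like this would also make it testable without touching the database, which matters once real user data is on the line.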
If you've experimented with vibe coding, or if you have an idea for a challenging backend feature that would fit a buyer's guide, I'm open to feedback and suggestions!