Introduction
My AI book recommendation mini-program “Miaotang Green Forest” has finally launched.
From the initial idea to the mini-program’s approval, it took over half a month. When I finally found it on WeChat, instead of excitement, I felt a sense of relief.
The Motivation
As an avid reader with a full bookshelf, I often struggle to decide what to read next—not for lack of options, but because I don’t know what suits me. Recommendations from friends aren’t always fitting, and book reviews often feel like soft promotions. I wondered if AI could analyze my reading preferences and suggest truly suitable books.
Around the same time, AI programming was trending everywhere, with claims that “even beginners can develop mini-programs” and that you can “create an app in three minutes using AI.” My needs seemed simple enough (user input, AI analysis, book recommendations), so I figured a tool like Coze could handle it.
Thus began my journey of DIY development.
The Process
Week 1: Design Phase
You might think AI programming is as simple as saying, “Create a mini-program for me.” That’s not the case. I spent two to three days clarifying the functional logic: how users input preferences, how to convert those into AI-understandable prompts, and how to present the recommendations. At this stage, AI couldn’t assist at all; it was all about my own thinking.
Week 2: Development Phase
I started building it. Using Coze to set up the workflow was indeed simpler than writing code, but “simple” is relative. I still had to learn how to debug prompts, tune the response logic, and handle various edge cases. One seemingly minor issue cost me two days: user input was sometimes too long or too short, which made the AI’s responses unstable. I tried dozens of prompt variations before landing on something barely usable.
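To make that concrete, here is a minimal sketch of the kind of length check I ended up building in. It is plain Python purely for illustration; the actual logic lives inside the Coze workflow, and the function name and thresholds are my own placeholders, not the real implementation.

    def normalize_reader_input(text: str, min_chars: int = 20, max_chars: int = 500) -> str:
        # Collapse stray whitespace so line breaks don't leak into the prompt.
        text = " ".join(text.split())
        if len(text) < min_chars:
            # Too little signal: better to ask for more detail than to let the AI guess.
            raise ValueError("Please describe your reading preferences in a bit more detail.")
        if len(text) > max_chars:
            # Overly long input made the responses drift, so keep only the head.
            text = text[:max_chars].rsplit(" ", 1)[0] + " ..."
        return text

    # Illustrative usage: build the recommendation prompt from the cleaned-up input.
    user_input = "I like science fiction and popular history, short books I can finish in a weekend."
    prompt = (
        "You are a book recommendation assistant. Based on the reader profile below, "
        "recommend exactly 3 books, each with a one-sentence reason.\n\n"
        "Reader profile: " + normalize_reader_input(user_input)
    )

Only once the input was forced into a predictable range did the endless prompt tweaking start to pay off.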
Week 3: Bug Fixing and Experience Optimization
After the first version was working, I tested it myself and found plenty of problems. Sometimes the recommendations were too vague, other times too niche. There was one ridiculous bug where certain input combinations triggered an infinite loop and froze the page. “Fixing bugs” sounds straightforward but is genuinely frustrating: each one meant a cycle of investigate → pinpoint the cause → attempt a fix → fail → try again.
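The infinite loop, for what it is worth, boiled down to retrying the model call whenever the reply failed validation, with no upper bound. Here is a hedged sketch of the guard that fixed it, again in plain Python with stand-in helpers rather than the real workflow:

    import random

    MAX_RETRIES = 3

    def call_model(profile: str) -> str:
        # Stand-in for the real AI call; sometimes returns an invalid (empty) reply.
        return random.choice(["1. ...\n2. ...\n3. ...", ""])

    def looks_valid(reply: str) -> bool:
        # Minimal check: the reply should contain three numbered recommendations.
        return all(f"{i}." in reply for i in (1, 2, 3))

    def recommend_with_guard(profile: str) -> str:
        # Retry a flaky response, but never more than a fixed number of times,
        # so a bad input combination can no longer spin forever and freeze the page.
        for _ in range(MAX_RETRIES):
            reply = call_model(profile)
            if looks_valid(reply):
                return reply
        return "Sorry, no recommendation right now. Please adjust your input and try again."

A boring fix, but that single missing upper bound was the difference between a frozen page and a graceful failure.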
Week 4: Registration, Filing, and Review
This was the real hard mode. As an individual developer, I had to register for a mini-program account, undergo enterprise certification (individual categories have restrictions), file for a domain name, and submit for review. The review was returned twice, citing “need for additional proof of authenticity.” Each revision took three days, so two revisions meant six days.
When the mini-program finally went live, I calculated that the actual development took only three to four days; the rest of the time was spent on bug fixing, adaptation, optimization, filing, and review.
Reflection
Looking back at the hype around “AI enabling everyone to develop programs,” I felt one thing: it’s not that simple.
It’s not that AI tools aren’t useful; they do lower the barrier to coding. However, there’s a vast chasm between “writing code” and “creating a usable application.” Coding is just the surface; the real time-consuming tasks are requirement clarification, logic design, bug fixing, experience refinement, and compliance—areas where AI can help only minimally.
Recently, I came across some interesting data.
The head of Claude Code has publicly pushed back on the term “Vibe Coding,” arguing that it sounds too casual and that AI programming is really a form of “engineering-level collaboration.” His point is that AI tools exist to make developers more effective, not to let people code purely by feel.
Another research organization, METR, tracked 16 experienced developers using AI tools for 246 coding tasks. The result showed that efficiency actually decreased by 19% after using AI.
Note: decreased, not increased. Yet the developers themselves estimated they had become about 20% faster, a gap of nearly 40 percentage points between subjective perception and measured results.
A report from Fastly was even more direct: among nearly 800 developers, at least 95% spent extra time fixing AI-generated code.
A senior developer with 15 years of experience shared that using “Vibe Coding” to rush a project deadline left her with a mountain of bugs and forced her to start over from scratch; news reports mentioned her crying for half an hour during a live stream.
When I read these accounts, my first reaction was, “That sounds exaggerated,” but upon reflection, it felt quite real.
A new job title has even appeared in the industry, the “Vibe Coding Cleanup Specialist”: people who specialize in cleaning up the mess left behind by AI-generated code, with such listings reportedly starting to show up on job boards.
A developer community summed it up well: on a Vibe Coding project, the time breaks down to roughly 50% writing requirements, 10-20% vibe coding, and 30-40% vibe fixing. In other words, the part you think of as “coding” is actually less than 20% of the work; the rest goes into writing requirements or patching the gaps.
Conclusion
So, is Vibe Coding reliable?
My answer is: it depends on what you want to do.
If it’s just for personal use or to create a demo for experience, AI programming can indeed save you some effort. However, if you aim to create a genuinely deployable product with commercial value, relying solely on “vibe” is certainly insufficient—you will need a lot of time, patience, and mental fortitude to face bugs.
This isn’t meant to be discouraging. I just want to honestly document my experience.
Was it worth spending half a month on a “book recommendation assistant”? My answer is yes, as it solved a real problem for me.
But if someone asks me whether “AI programming makes development easier,” I would say: it has become a bit easier, but only if you are prepared to deal with the pitfalls it creates.