🐴 I ran the same prompt through three popular AI prototyping tools
Side-by-side comparisons, walkthroughs, and insights
To turn an idea straight into a prototype, which AI tool should you use?
V0 was popular last year. Then Bolt gained momentum—I wrote about the tool last week. Then I saw Lovable in the news.
Today, I’ll walk you through a quick experiment where I compare all three tools side by side, through a specific lens:
Use a detailed yet focused prompt.
Stop at a prototype, rather than a full app deployment.
Share the results straight out of the prompt without any revisions.
Let’s get started!
I’ve been talking about building things with AI for four weeks—I promise I’ll switch to another topic next week!
Overview
Before diving into the experiment, here’s a brief overview of the three tools:
V0
Website: https://v0.dev
Maker: Built by the team at Vercel, a platform designed primarily for deploying and hosting frontend applications. It’s the same team that built Next.js, a popular React framework (essentially a smarter, more structured way to use React).
Bolt
Website: https://bolt.new
Maker: Built by the team at StackBlitz, a coding environment that lets developers build and run web applications directly in the browser, eliminating the need for local setup.
Lovable
Website: https://lovable.dev
Maker: Built by the Lovable team, a young startup focused on empowering users to turn ideas into working applications with minimal coding.
I pay attention to the makers behind these products because I think each team’s DNA will shape its product’s strengths and future direction:
V0 can benefit from Vercel’s expertise in UI generation and seamless deployment.
Bolt can benefit from StackBlitz’s browser-based, full-stack environment with rapid setup.
Lovable can benefit from its small team’s ability to move fast on AI-driven product innovation for users who don’t have much coding knowledge.
Alright, let’s jump into the experiment.
The prompt I used
I know the prompt boxes of these AI tools usually don’t handle formatting well, so I used a personal GPT that I built to generate a structured markdown file.
Essentially, I wanted to build a prototype for a product that helps freelancers track and manage projects.
To achieve a better outcome, I narrowed the scope, focusing only on specific flows and being specific about the platform.
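For context, here’s a minimal sketch of what that kind of structured markdown prompt can look like; the headings and wording below are an illustrative reconstruction based on the flows covered in this post, not the exact file I used.

```markdown
# Prototype: Freelancer Project Tracker

## Platform
Responsive Web (Mobile-optimized)

## Scope
Prototype only; no full app deployment.

## Flows
1. Dashboard (project overview)
   - List the freelancer's projects with their status at a glance
2. Add New Project
   - Input fields for the project details
   - Deadline selection via a calendar pop-up
   - Ability to add and remove tasks before saving
3. Project detail
   - Add a task, set its deadline, mark its progress, and delete it
```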
V0
I pasted the prompt into V0’s prompt window:
After 72 seconds, the result was generated:
(I counted, tapping a finger on the desk, as my eyes shifted between the clock and computer :) )
Overall, it looked good.
For the “Add New Project” page alone, it understood my prompt quite well.
I tested the input fields, selected a deadline from the calendar pop-up, and added and removed tasks without any problems.
However, it only addressed part of my prompt.
The dashboard (project overview) page was missing.
Aside from that, the "Add Project" tab didn’t work, and the "Dashboard" tab showed the "Add Project" page instead.
I didn’t get a chance to check if there was a backend that stored the project data.
Bolt
I pasted the prompt into Bolt’s prompt window:
After 57 seconds, the result was generated:
It created a dashboard for me that accurately addressed the details in my prompt.
Then I clicked into one of the projects.
Again, it captured my prompt well with all the interactive components.
I was able to add a new task, set a deadline, mark the task’s progress, and delete the task, exactly as I described in the prompt.
The “Add New Project” page was almost the same as V0’s result, except for a thoughtful “Cancel” button that took me back to the main dashboard.
Lovable
I pasted the prompt into Lovable’s prompt window:
After 35 seconds, the result was generated:
It was noticeably faster than both V0 and Bolt.
The immediate result looked similar to Bolt’s.
However, I couldn’t open a project or add one.
In fact, I couldn’t edit anything; every attempt kept leading to a 404 page.
Takeaways
This was a quick experiment using a specific prompt.
Before I share my takeaways, I want to call out that one test won’t fully show these tools’ capabilities, not to mention they are constantly evolving.
My takeaways are subjective and based only on this single experiment.
Performance
Speed: Lovable >> Bolt > V0
UI Quality: Bolt > V0 = Lovable
Prompt Adherence: Bolt > V0 = Lovable
Functionality: Bolt > V0 > Lovable
Overall Impressions
Bolt’s result was very impressive. It strikes the best balance across speed, UI quality, prompt adherence, and functionality.
A common opinion online is that V0 has the best UI quality, Bolt’s UX is clunky, and Lovable is the best product on the market by far. I don’t really agree.
My experience with Lovable was disappointing. That said, I trust Lovable's small, independent team to evolve rapidly in this AI builder space.
These tools are good at creating a preliminary UI. However, the results can still be random without a visual reference.
V0
V0’s frontend capability was not as good as I expected.
It’s fascinating to see V0’s new effort in Figma and design-system integration, though that feature is currently only available on its Pro plan.
V0 recently added backend capabilities. This can make it more competitive with Bolt and Lovable, which both started with solid backend integration.
Bolt
Bolt has an official open-source version: Bolt.diy. It allows users to choose the LLM for each prompt, such as OpenAI, Anthropic, Ollama, Gemini, Mistral, and the LLM that’s all over the news this week—DeepSeek.
I do think Bolt’s UX can be improved, though: it feels like a secondary product under StackBlitz, the sign-up flow was unclear, and the lack of version control bothered me. In comparison, V0 did a better job in these areas.
Miscellaneous
If you want more control over the outcome, it’s helpful to start small. For example, in my prompt, I used "Responsive Web (Mobile-optimized)" instead of just "Responsive Web".
I actually tested the latter in another experiment: it took all the tools twice as long, and the results handled responsiveness poorly.
Once the prototype has been revised to a satisfying point, it becomes easier to scale the design to other platforms.
To save tokens, it's helpful to use Claude for a proof-of-concept test before diving into these tools.
Thanks for reading.
Happy Lunar New Year 🐍 to those of you who celebrate it!
Until next week,
Xinran