🐴 How to create AI-powered wireframes in Figma (Episode 1)
What I learned from testing a plugin
Can AI help generate wireframes in Figma? That could be quite useful.
However, I found that many AI plugins in Figma focus on other specific tasks, such as rewriting or summarizing text, which ChatGPT can also do. Only a few plugins are built for “designing” itself, and those are currently even less useful due to their limited capabilities, bugs, and paywalls.
I've tried tools like Relume and Musho before, but both have limitations:
Relume is great for generating wireframes, but it doesn't yet work smoothly with Figma.
Musho, on the other hand, is a convenient Figma plugin that generates high-fidelity designs directly, but the designs it creates often don't closely match my prompts.
That said, there are a handful of promising ones.
Today, I'll explain how to create wireframes with Wireframe Designer, and I'll reveal the tool's capabilities and limitations by feeding it different prompts.
I hope this article can go beyond a simple “how-to” guide and open up a broader conversation about how we can get better at the challenging task of converting prompts into useful wireframes that empower our design process.
The prompts for testing
To better gauge the capabilities of this plugin and compare the different results, I intentionally crafted three prompts:
Prompt 1: A generic, one-line prompt.
Prompt 2: Prompt 1 plus more specific details.
Prompt 3: Prompt 2 plus background information about the product and its target users.
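The layered structure above can be sketched as simple string composition. This is just an illustration of the test design; the strings here are shortened placeholders, not the exact prompts used below:

```python
# Each test prompt extends the previous one with an extra layer of information.
# Placeholder text for illustration only — the full prompts appear later in the article.
BASE = "Create a wireframe with a fitness class schedule and map."
DETAILS = "It shows a list of exercise classes on one side and a map on the other."
CONTEXT = "The product is a credit-based subscription service for fitness classes."

prompt_1 = BASE                     # Prompt 1: generic one-liner
prompt_2 = f"{prompt_1} {DETAILS}"  # Prompt 2 = Prompt 1 + details
prompt_3 = f"{prompt_2} {CONTEXT}"  # Prompt 3 = Prompt 2 + context
```

The idea is that each layer should, in principle, give the plugin strictly more to work with, which makes it easy to compare the outputs.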
I used a ClassPass web page as inspiration for the prompts. I was curious to see what the plugin would generate from my text-based descriptions.
The Results
Prompt 1
Create a wireframe with a fitness class schedule and map.
Given the minimalist prompt, the result was not bad. Now let’s see what happened with more specific details provided in the prompt.
Prompt 2 = [Prompt 1] + [Details]
Create a wireframe with a fitness class schedule and map. It shows a list of exercise classes on one side and a map with blue spots on the other side. The list has different workout times and types, like gym sessions and yoga, along with ratings and numbers of reviews. Each workout needs you to use certain amount of "credits" to join. The map shows where in San Francisco you can go to do these workouts. There's also a “Chat” button on the map to talk with someone from the website.
Surprisingly, even with more detail provided, the results seemed random and showed less information than expected. My prompt seemed to have confused the plugin. Details like “different workout times”, “number of reviews”, and “San Francisco” were clearly missing, and the “Chat” button was placed outside the map instead of “on the map”.
Now, let’s see what happened with more contextual information.
Prompt 3 = [Prompt 2] + [Product Background Info] + [Target User Info]
Create a wireframe with a fitness class schedule and map. It shows a list of exercise classes on one side and a map with blue spots on the other side. The list has different workout times and types, like gym sessions and yoga, along with ratings and numbers of reviews. Each workout needs you to use certain amount of "credits" to join. The map shows where in San Francisco you can go to do these workouts. There's also a “Chat” button on the map to talk with someone from the website.
The product is a subscription service that offers access to a wide range of fitness classes, wellness experiences, and even beauty appointments. Members purchase a plan that gives them a certain number of credits each month, which they can then spend on booking classes and services.
The target users are individuals who seek variety and flexibility in their fitness routines. It's particularly popular among busy professionals who travel often and want to keep up with their workouts on the go, fitness enthusiasts who enjoy trying out new types of classes, or those who get bored with routine and prefer to mix things up.
An interesting feature of the plugin: in its settings tab, you can feed in extra context like this.
Is it better? Arguably. It showed more information this time, such as the class ratings, but the randomness persisted.
Furthermore, the additional context that I provided did not result in any meaningful improvements. I have included my reflections at the end of this article.
Prompt 4—A totally different prompt
Out of curiosity, I tested a simple new prompt about a different product, suspecting that my previous fitness-schedule prompt was too complex for the plugin.
A homepage for a fashion shop
Here is the result:
The result was better than I expected. The cards had a variety of details. My mind would go blank if you asked me to create a fashion-store homepage in just a few seconds, so this is a good starting point for inspiration.
Test Summary
Pros
It can create wireframes for both desktop and mobile platforms.
It is faster than the other tools I have tested, generating results in under 3 seconds.
The results adhere to my prompts better than those from other tools I have tested.
The level of detail is not great, but it is already better than that of similar tools.
The wireframes are created with auto layout, making it easy for me to edit them in Figma.
I can choose to provide more contextual information about the product background and target users.
Cons
I wish it could create more than one option at a time, which would better support brainstorming during the wireframing stage.
The results are still not predictable enough. They do improve when I provide additional context, but a lot still gets lost in translation from human language to the machine-readable format on the backend.
It has no features for customizing or iterating on the results.
The free plan is limited to 10 designs.
Reflection
There is still a long way to go before AI can generate truly useful wireframes from prompts. I believe achieving better results with tools like this is a two-way street.
[From our perspective as the tool’s users] How might we create prompts that are more comprehensible to AI? In other words, how can we reduce the loss and misunderstanding that occur when we write a prompt in human language, expecting AI to accurately interpret our intentions and produce the desired results?
[From the tool creators' perspective] How might we feed better data to the AI model and set better parameters, so that the tools can more precisely capture user needs and create more relevant and helpful results?
I'll leave this as an open-ended conversation. There are certainly many limitations, but given the rapid advancements in AI, I'm confident that it won't be long before we overcome the current challenges and achieve better results.
Thanks for reading. If you enjoyed this issue, please consider sharing this newsletter with a friend who might benefit from it.
Happy Tuesday!
—Xinran
Questions? Comments? Just reply to this newsletter.