An AI-powered assistant that improves the user experience and helps users get better results when prompting, through human-like conversations and clearer feedback.


It is no news that artificial intelligence is taking the world by storm: from the advent of “smart” chatbots to Large Language Models (LLMs) and, well… ChatGPT.
Since OpenAI launched ChatGPT (powered by GPT-3.5) as a free tool in 2022, LLMs like Gemini, Claude and others have sprung up, generating helpful responses to assist individuals and professionals in many spheres.
However, even with the popularity of ChatGPT and other LLMs, many of these AI language models fell short when required to explain their responses to prompts or ask clarifying questions to help users get better feedback during a prompt-and-response cycle.
And this is where the AI Why chatbot app came in.
I worked with Scott (product owner) and a team of engineers to design and prototype the interfaces of AI Why, a conversational assistant powered by Meta’s Llama 3, with enhanced response features.
In addition to the regular prompt-and-response cycle, these enhanced features let users get explanations for individual parts of the AI assistant’s responses, receive clearer answers to their prompts, and pick from alternative prompts and follow-up prompt suggestions to get better results. The app also featured smart switching and chat grouping functionalities.
This project is a bold step toward simplifying prompt generation and amplifying the positive results a large group of users can derive from LLMs.
The AI Why Conversational Assistant App at a glance
I started this project to create user-friendly prototype designs for the AI Why chatbot app, to help users who find it difficult to get valuable outcomes from chatbots achieve better results through interactive features that increase their productivity.
To measure whether the app successfully met users’ needs, I monitored how easily users could find their way through the app, how many users completed discussions in the shortest time, and how long users spent completing a discussion on the app.
Image showing the design process adopted for this project. It is important to note that my process was not strictly linear but iterative. The process steps outlined, however, served to ensure that the project was as user-centric as possible.
To begin the research, I carried out a competitive study of existing AI-powered chatbots and conversational interfaces (voice assistants) to determine which features these technologies already offered and which were missing.
I focused on three direct competitors (ChatGPT, Gemini AI and Ask AI) and one indirect competitor: voice assistants.
I included voice assistants in my competitive study because they already provided some of the features we wanted to incorporate into the app, such as follow-up responses and clarifying questions.
Competitive audit table showing strengths and weaknesses of existing chatbots and LLMs (ChatGPT, Gemini, Ask AI and some voice assistants).
My research revealed that ChatGPT, Ask AI and Gemini AI had limited to zero support for discussion categorization.
Also, all of the chatbots studied except Ask AI had limited support for response explanations and summaries, and none of them provided features to explain isolated parts of their responses.
This revealed new insights on the “WHAT” (features to design for a competitive edge); however, further research was needed to figure out the “WHY”, that is, the value proposition (usefulness to users) of such features in an app like this.
Comparison Chart showing features that we wanted the AI Why app to have and the presence of these features in existing competitors.
After the competitive analysis, I conducted a screener survey to recruit users whose experience with chatbots and voice assistants would be beneficial to the study.
I conducted virtual interviews with six participants to learn about their goals, expectations and pain points.
Using user feedback, I created two user personas.
My research revealed some helpful insights about user behavior and how they usually interact with AI chatbots.
All user groups reported struggling to generate the right prompts when trying to get helpful responses to their problems, and believed their usage of AI chatbots would increase if they got a little extra help.
Also, roughly 70% of users said it took “longer than necessary” to get the responses they were looking for from AI chatbots.
At this point, I had a better understanding of users’ pain points. But the journey was far from over. In fact, I had only just begun.
To ensure that I stuck to the main goal of this project, I drafted a problem statement:
“AI chatbot users need to quickly and easily get the best responses to their prompts in order to maximize their productivity, because it is currently difficult to find chatbots that provide helpful assistance, so they don’t feel frustrated and overwhelmed.”
Pain points that users encountered while interacting with other chatbots and LLMs. These pain points informed the problem statement that fueled this research.
Insights from research revealed that users often abandoned the experience whenever they didn’t get the responses they were looking for within minutes of using an AI chatbot.
Users seemed to be trapped in a prompt-response-prompt loop (which I called the Loop of Unending Horrors) that made them feel stuck and unproductive.
It was therefore important to ideate features for the app that would take users out of this loop and make their flow through the app more productive.
Getting started with the design, I asked these questions:
“How do I incorporate all these features and still make the interaction of this chatbot memorable?”
“How do I solve the users’ problems in the shortest number of steps?”
I drew digital wireframes, printed them out, and conducted A/B tests on social media and in user groups to determine which layouts users found simple and intuitive.
Users preferred a playful, yet interactive approach to creating better prompts and getting explanations for responses.
Using feedback from users, I refined the wireframes until what remained was a design structure that was familiar, yet improved and enjoyable.
High-fidelity wireframes of the AI Why app user interface.
The next step was to convert the digital wireframes into high-fidelity mockups and clickable prototypes in Figma; here, I had to consider how users typically interacted with colors and text.
From A/B testing, users wanted an app with catchy yet relaxing color variations. They also wanted readable text and the ability to use features like response explanation and summary without leaving their current flow.
It was challenging to incorporate all these features into the prototype while keeping the design intuitive. Scott was really helpful, providing high-level feedback.
I also created a component library so that all design assets were synchronized across a diverse range of platforms.
While converting the wireframes of the AI Why app into high-fidelity mockups, it was important to structure the conversational interface so that users could converse with the AI assistant without getting bogged down by too much information, which could cause confusion.
With this in mind, each AI response was followed by collapsible accordions where users could find alternative responses to their questions and follow-up prompt suggestions.
The next challenge was to create high-fidelity prototypes to simulate how a typical user would interact with the conversational assistant. At this point, I referred back to the user flow created earlier to ensure that every link was in line with insights gathered during research.
The result? A highly interactive and intuitive conversational experience that got positive remarks from 95% of users during testing.
During testing, my team and I discovered that the AI Why Conversational Assistant couldn’t maintain context over longer conversations with users.
This inability to remember past interactions led to incorrect responses and user dissatisfaction.
To solve this issue, we implemented a chat grouping feature that made it easier for the AI assistant to group similar queries into separate topics, which in turn made it easier to tailor responses to users’ expectations.
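To illustrate the idea behind chat grouping (this is a minimal sketch, not the production implementation; the `group_prompts` helper and the 0.25 similarity threshold are assumptions for illustration), similar prompts can be clustered into topic groups by comparing bag-of-words vectors with cosine similarity:

```python
import math
import re
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_prompts(prompts, threshold=0.25):
    """Assign each prompt to the first topic group whose representative
    (first) prompt is similar enough; otherwise start a new group.
    The threshold value is a tunable assumption."""
    groups = []  # each entry: (representative_vector, list_of_prompts)
    for prompt in prompts:
        vec = Counter(re.findall(r"\w+", prompt.lower()))
        for rep, members in groups:
            if cosine(rep, vec) >= threshold:
                members.append(prompt)
                break
        else:
            groups.append((vec, [prompt]))
    return [members for _, members in groups]
```

With this kind of grouping, two sourdough-baking questions would land in one topic group while an unrelated programming question starts a new one, so each topic’s history can be kept together when tailoring responses.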
Prototype showing grouped responses and chat history, which helped users keep track of different conversations they initiated with the AI Why Assistant.
I continuously subjected the prototypes to a testing-feedback-iteration loop to ensure that all project goals and user expectations were met.
It was interesting to see how diverse users’ opinions about a design could be, and though not all features made it into the first MVP, I ensured that the design was robust enough to accommodate future modifications.
Product Performance Chart showing Micro-conversion Rate, Bounce Rate, Task Completion Rate, Prototype Crash Frequency and User Ratings Score after testing.
While working on this project, I learned that for the AI Why Conversational Assistant to win users’ trust, responses to user prompts had to be transparent, that is, open to the user: users should be able to give feedback and get clear explanations from the Assistant.
I also observed that in an AI-driven application like AI Why, it was important to address ethical considerations like the privacy of user data, reducing bias, and ensuring that the AI assistant was used responsibly. I made sure that features to flag prompts that didn’t meet certain ethical standards were prototyped into the design.
The team and I worked in a fast-paced Agile environment to ensure fast delivery of milestones and round-the-clock exchange of high-level feedback.
Working with an Agile team on this project showed me the benefits of teamwork, the drawbacks of working in silos, and how close collaboration is key to success during design conceptualization and implementation.
To further build user trust, in the next sprint we will add a response source linking feature, so that users can see where the AI Why Assistant pulls its data from. Users will be able to either preview sources live in the app or open the links in an external browser of their choice.