Mocky: Track Your Progress, Build Your Confidence

A UX & Development Case Study on Building an AI-Powered Interview Practice Web App

My Role: Product Designer / Full-Stack Developer

UX Research & Design, Front-End Development, AI API Integration, Prototyping & Testing

2025 · 3 months

Overview

Mocky is an AI-powered web app designed to help job seekers practice behavioral interviews, track their progress, and gradually build confidence.

Through UX research and iterative testing, I designed an experience that provides structured practice, AI-generated feedback, and progress visualization.

Tools & Technologies

Design

  • Figma for wireframing, UI layout, and design iterations

Front-End

  • React for building the front-end UI

  • Chakra UI for component-based styling and layout

AI

  • OpenAI API for behavioral question generation and response analysis

  • Whisper for speech-to-text conversion
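
To make the AI integration concrete, here is a minimal TypeScript sketch of how a request to the OpenAI chat completions endpoint for topic-based question generation might be structured. The model name, prompt wording, and the `buildQuestionRequest` helper are illustrative assumptions, not Mocky's actual implementation.

```typescript
// Hypothetical sketch: building a chat-completions request body that asks
// the model for behavioral questions on a chosen topic. The model name and
// prompt text are assumptions for illustration only.

interface ChatMessage {
  role: "system" | "user";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
}

function buildQuestionRequest(topic: string, count: number): ChatRequest {
  return {
    model: "gpt-4o-mini", // assumed model choice
    messages: [
      {
        role: "system",
        content:
          "You are an interview coach. Generate behavioral interview questions.",
      },
      {
        role: "user",
        content: `Generate ${count} behavioral interview questions on the topic "${topic}".`,
      },
    ],
  };
}

const req = buildQuestionRequest("Teamwork", 3);
console.log(req.messages[1].content);
```

The request body would then be POSTed to the chat completions endpoint with an API key; audio answers would be transcribed first (e.g., via Whisper) before being sent for analysis.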

The Problem

Problem Statement

Job seekers often lose confidence after repeated interview rejections, focusing more on mistakes than on progress. Without structured feedback or tracking, they struggle to see improvement and maintain motivation.

Initial HMW Exploration

At the beginning, I explored three directions for how AI might support job seekers in their interview preparation:

How might we help job seekers visualize progress and see their improvement over time?

How might we use AI to analyze performance and give actionable feedback?

How might we design AI support that keeps job seekers motivated and emotionally encouraged during preparation?

Research & Insights

To identify challenges in interview preparation and opportunities for AI support, I conducted interviews and a survey.

Interviews (3 participants)

I interviewed three job seekers via Zoom. Although their preparation styles varied, all lacked a structured way to evaluate their interviews. They showed strong interest in AI providing objective analysis (e.g., STAR response completeness) but little interest in AI-driven emotional support.

Survey (24 responses)

The survey results reinforced and quantified these findings:

  • 42.9% do not track their job applications, and 66.7% do not record their interview experiences.

  • 90% evaluate performance only through personal reflection.

  • 66.7% reported struggling with the lack of feedback from interviewers.

  • Job seekers wanted AI to help with performance patterns (81%), practice questions (81%), and feedback on clarity/relevance (71.4%).

  • Only 28.6% were interested in AI for motivation or emotional encouragement.

Key Findings

  • Job seekers lack structured methods to track interviews and progress.

  • Most rely on personal reflection, not external or structured feedback.

  • There is strong demand for objective analysis, tailored practice questions, and actionable feedback.

  • Repeated rejection lowers confidence, making preparation discouraging.

Key Pain Points

  • Tracking interviews is time-consuming, so most people skip it or rely on simple tools.

  • There is little to no post-interview feedback.

  • Many are uncertain how to improve and stand out.

  • Repeated rejection lowers confidence, making preparation feel discouraging.

The Emotional Need

Beneath these functional gaps lies a deeper need: job seekers want to feel in control of their progress and to rebuild confidence after setbacks. A tool that provides visible evidence of improvement can help shift their focus from failure to growth.

User Journey Map

Refined Direction

Based on my research, I refined how AI could support job seekers. At first, I considered ideas like emotional support, but the findings showed that users mainly wanted structured feedback, clear progress tracking, and tailored practice questions. This guided me to focus the design in those areas.

Refined HMW

How might we help job seekers visualize progress and see their improvement over time?

How might we use AI to analyze performance and give actionable feedback?

How might we design AI support that keeps job seekers motivated and emotionally encouraged during preparation?


Why AI?

To understand where AI can truly add value without replacing human strengths, I created a Cognitive Offloading Matrix. This framework maps out:

  • What should remain human-led

  • What can be fully offloaded to AI

  • Where a human–AI partnership works best

Design Solutions

During the ideation phase, I explored three design concepts, each addressing different aspects of interview preparation:


  1. Game-Based AI-Powered Interview Tracker

A structured and motivating system where users log interviews, receive AI-generated STAR-based feedback, and visualize progress through skill trees and points. This concept focuses on building confidence by making improvement visible.

  2. AI-Based VR Mock Interview System

An immersive, realistic mock interview experience using VR, where users can practice body language, tone, and timing in a safe environment. This concept reduces nervousness and builds confidence through simulation.

  3. Smart AI-Powered Glasses for Post-Interview Analysis

A conceptual tool that records real interviews (with consent), analyzes verbal and nonverbal communication, and provides feedback. It supports deep reflection for users who want detailed review of their performance.

After exploring these three concepts, I decided to focus on the interview tracker. It best addressed the biggest user need for structured feedback and visible progress tracking, and it was also more practical to build compared to VR or smart glasses.

The final version of Mocky combines the tracker with key elements from the other ideas:

  • From VR: behavioral questions and voice input for natural practice.

  • From smart glasses: AI post-performance review without recording real interviews.


Features

With the core direction defined, I began translating this concept into features.
I focused on designing a system that supports repeated interview practice—guiding users through practice, feedback, reflection, and long-term improvement.

Feature Framework

Core Practice

  • Question Bank

  • Question Categories

  • Text Answer Input

  • Voice Recording Input

Learning & Feedback

  • Sample Answers

  • AI Answer Evaluation

  • AI Tone & Clarity Feedback

  • AI Practice Guidance

Review & Progress

  • Recording Playback

  • Practice History

  • Progress Tracking Dashboard

Personalization

  • Role-Based Question Sets

  • Industry-Based Question Sets

Feature Prioritization

With many possible features on the table, I focused on what would most directly support meaningful interview practice in the first release.
I prioritized features based on their value to users and their feasibility for implementation using the MoSCoW framework.

Must Have

  • Question Bank

  • Text Answer Input

  • Voice Recording Input

  • AI Answer Evaluation

  • AI Tone & Clarity Feedback

Should Have

  • Question Categories

  • Sample Answers

Could Have

  • Practice History

  • Progress Tracking Dashboard

  • Recording Playback

Won't Have

  • Role-Based Question Sets

  • Industry-Based Question Sets

  • AI Practice Guidance

Design Iteration

Early Sketches

Home Page

After creating early sketches, I identified several UI decisions that felt uncertain and could significantly shape the user experience.
To move forward with more confidence, I conducted a user survey to understand user expectations around these specific interface choices.

The survey focused on two questions:

  1. Whether users prefer interacting with an avatar, and if so, whether it should be a realistic human or an illustrated character.

The results showed a clear preference for realistic human avatars over illustrated avatars or no avatar.
Users indicated that a more human-like presence helps create a more realistic interview experience.

Realistic human avatar

Illustrated avatar

No avatar

  2. How users expect practice history to be organized: by date or by question categories.

The results showed that users preferred grouping practice history by question type, as it made it easier to review and compare performance across similar questions. Categorizing history by date was seen as less useful by some users.

History grouped by question categories

History grouped by date

AI Analysis Page

The AI analysis page went through two key iterations based on user feedback.


  1. Initially, the analysis page showed only a single score without indicating the scoring scale. User feedback revealed that this made the score difficult to interpret, so I redesigned the score display to provide clearer context.

First iteration

Second iteration


  2. The initial version did not include example answers. Users expressed a need for clearer guidance on how to improve each part of the STAR framework. To address this, I introduced example answers for each STAR component, generated based on the user’s own response, helping users better understand how their answers could be strengthened.

First iteration

Second iteration


Final Design

After multiple rounds of iteration and testing, the final version of Mocky includes several key features designed to support confidence-building and measurable progress:

  • AI-Generated Questions by Topic: Users select behavioral topics like “Teamwork” or “Leadership” to practice targeted skills.

  • Voice and Text Input: Users can respond using either speech or typing, depending on what feels more comfortable and accessible.

  • AI-Based Answer Analysis and Feedback: Each response is analyzed based on the STAR method, with personalized feedback highlighting strengths and areas for improvement.

  • Topic-Based History Tracking: All responses are saved and categorized, making it easy for users to review past answers within each topic.

  • Progress Visualization: Lets users see growth over time and helps rebuild confidence.

This final version reflects a thoughtful integration of user feedback, design experimentation, and technical feasibility, focused on delivering a motivating, supportive experience for job seekers.
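
The STAR-based analysis described above can be sketched as a prompt-construction step. This is a hypothetical TypeScript sketch assuming a 1–5 score per STAR component and example rewrites derived from the user's own answer; the function names, rubric wording, and data shapes are illustrative, not the app's actual code.

```typescript
// Hypothetical sketch of STAR-based answer evaluation (not Mocky's actual code).
// The rubric wording, 1–5 scale, and feedback shape are assumptions for illustration.

type StarComponent = "Situation" | "Task" | "Action" | "Result";

interface StarFeedback {
  component: StarComponent;
  score: number;      // assumed 1–5 scale, displayed to the user with its range
  suggestion: string; // one concrete improvement, plus an example rewrite
}

const STAR_COMPONENTS: StarComponent[] = ["Situation", "Task", "Action", "Result"];

// Build the evaluation prompt sent to the language model.
function buildEvaluationPrompt(question: string, answer: string): string {
  return [
    "Evaluate the following behavioral interview answer using the STAR method.",
    `For each component (${STAR_COMPONENTS.join(", ")}), return a score from 1 to 5,`,
    "one concrete suggestion, and an example rewrite based on the user's own answer.",
    `Question: ${question}`,
    `Answer: ${answer}`,
  ].join("\n");
}

const examplePrompt = buildEvaluationPrompt(
  "Tell me about a time you resolved a team conflict.",
  "Two teammates disagreed on scope, so I set up a meeting and we aligned on priorities."
);
console.log(examplePrompt);
```

Framing the rubric explicitly in the prompt is what makes the returned scores interpretable against a fixed scale, which the second design iteration surfaced as essential.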


User Flow

System-Level Diagram

Demo Video

Result

The final prototype was evaluated through user testing and design critique sessions.

After revising the score presentation, the majority of users were able to correctly interpret their scores without additional explanation, addressing confusion present in the initial version.

In addition, most users found the STAR-based example answers helpful for identifying concrete improvements to specific parts of their responses.

Overall, these iterations improved the clarity, interpretability, and perceived usefulness of the AI feedback.

Next Step

Based on user testing and instructor feedback, several opportunities emerged for future iterations of the platform.

A key next step is improving personalization in AI feedback by incorporating users’ personal context, such as resumes or STAR notes, to reduce generic responses.

Future versions could also support company- and role-specific interview contexts to make practice more realistic.

Takeaway

Mocky began as a response to a common emotional experience: the frustration and self-doubt that follows repeated job interview rejections. Throughout the design and development process, I learned how powerful it can be to offer not just practice opportunities, but a way to help users see their own growth.

From interviews and surveys to A/B testing and iterative design, every stage of the project was shaped by real user needs, particularly the desire for structured feedback, visible progress, and a supportive interface.

Building this project pushed me to balance technical feasibility with user-centered decision making. I had to scope the features carefully, simplify the feedback loop, and stay grounded in what users actually wanted. One of the most important design lessons I learned is that even simple interfaces need emotional resonance, which is why design elements like real person images made a significant difference during testing.

© 2026 Claire Chen. All rights reserved.

Designed with honey and Pooh bear love. 🍯🐻
