
The Problem
Problem Statement
Job seekers often lose confidence after repeated interview rejections, focusing more on mistakes than on progress. Without structured feedback or tracking, they struggle to see improvement and maintain motivation.
Initial HMW Exploration
At the beginning, I explored three directions for how AI might support job seekers in their interview preparation:
Research & Insights
Interviews (3 participants)
I interviewed three job seekers via Zoom. Although their preparation styles varied, all lacked a structured way to evaluate their interviews. They showed strong interest in AI providing objective analysis (e.g., STAR response completeness) but little interest in AI-driven emotional support.
Survey (24 responses)
The survey results reinforced and quantified these findings:
42.9% do not track their job applications, and 66.7% do not record their interview experiences.
90% evaluate performance only through personal reflection.
66.7% reported struggling with the lack of feedback from interviewers.
Job seekers wanted AI help with identifying performance patterns (81%), generating practice questions (81%), and giving feedback on clarity and relevance (71.4%).
Only 28.6% were interested in AI for motivation or emotional encouragement.
Key Findings
Job seekers lack structured methods to track interviews and progress.
Most rely on personal reflection, not external or structured feedback.
There is strong demand for objective analysis, tailored practice questions, and actionable feedback.
Repeated rejection lowers confidence, making preparation discouraging.
Key Pain Points
Tracking interviews is time-consuming, so most people skip it or rely on simple tools.
There is little to no post-interview feedback.
Many are uncertain how to improve and stand out.
Confidence drops with each rejection, and without visible progress, continued preparation starts to feel pointless.
The Emotional Need
Beneath these functional gaps lies a deeper need: job seekers want to feel in control of their progress and to rebuild confidence after setbacks. A tool that provides visible evidence of improvement can help shift their focus from failure to growth.
User Journey Map
Refined Direction
Refined HMW
Why AI?
To understand where AI can truly add value without replacing human strengths, I created a Cognitive Offloading Matrix. This framework maps out:
What should remain human-led
What can be fully offloaded to AI
Where a human–AI partnership works best
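The matrix can be sketched as a simple mapping from quadrant to tasks. The specific task placements below are illustrative assumptions for demonstration, not the project's exact matrix:

```python
# Illustrative sketch of a Cognitive Offloading Matrix.
# Task placements are assumptions, not the project's actual assignments.

OFFLOADING_MATRIX = {
    "human_led": [
        "deciding which roles to pursue",
        "judging cultural fit",
    ],
    "ai_offloaded": [
        "transcribing practice answers",
        "checking STAR response completeness",
        "tracking scores over time",
    ],
    "human_ai_partnership": [
        "interpreting feedback and choosing what to practice next",
        "refining answers using AI-suggested examples",
    ],
}

def quadrant_of(task: str) -> str:
    """Return the quadrant a task is assigned to, or 'unmapped'."""
    for quadrant, tasks in OFFLOADING_MATRIX.items():
        if task in tasks:
            return quadrant
    return "unmapped"
```

The value of the exercise is less the lookup itself than the forcing function: every candidate AI feature had to land in exactly one quadrant before it could move forward.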
Design Solutions
Game-Based AI-Powered Interview Tracker
A structured and motivating system where users log interviews, receive AI-generated STAR-based feedback, and visualize progress through skill trees and points. This concept focuses on building confidence by making improvement visible.
AI-Based VR Mock Interview System
An immersive, realistic mock interview experience using VR, where users can practice body language, tone, and timing in a safe environment. This concept reduces nervousness and builds confidence through simulation.

Smart AI-Powered Glasses for Post-Interview Analysis
A conceptual tool that records real interviews (with consent), analyzes verbal and nonverbal communication, and provides feedback. It supports deep reflection for users who want detailed review of their performance.

Features
With the core direction defined, I began translating this concept into features.
I focused on designing a system that supports repeated interview practice—guiding users through practice, feedback, reflection, and long-term improvement.
Feature Framework
Core Practice
Learning & Feedback
Review & Progress
Personalization
Feature Prioritization
With many possible features on the table, I focused on what would most directly support meaningful interview practice in the first release.
Using the MoSCoW framework, I prioritized features by their value to users and their implementation feasibility.
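A MoSCoW pass like this can be sketched as scoring each feature on value and feasibility and thresholding into buckets. The feature names, scores, and cutoff rule below are hypothetical stand-ins, not the project's actual prioritization data:

```python
# Hedged sketch: MoSCoW-style bucketing from value/feasibility scores.
# Features, scores, and thresholds are illustrative assumptions.

features = [
    {"name": "STAR-based AI feedback", "value": 5, "feasibility": 4},
    {"name": "practice history by question type", "value": 4, "feasibility": 5},
    {"name": "progress dashboard", "value": 4, "feasibility": 3},
    {"name": "VR mock interviews", "value": 3, "feasibility": 1},
]

def moscow_bucket(value: int, feasibility: int) -> str:
    """Threshold a (value, feasibility) pair into a MoSCoW bucket."""
    if value >= 5 and feasibility >= 3:
        return "Must"
    if value + feasibility >= 8:
        return "Should"
    if value + feasibility >= 6:
        return "Could"
    return "Won't (this release)"

for f in features:
    f["bucket"] = moscow_bucket(f["value"], f["feasibility"])
```

In practice the scores came from user research rather than numeric guesses, but the cutoff logic captures the same trade-off: high-value, low-feasibility ideas (like VR) were deliberately deferred.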
Design Iteration
Early Sketches
Home Page
After creating early sketches, I identified several UI decisions that felt uncertain and could significantly shape the user experience.
To move forward with more confidence, I ran a follow-up survey to understand user expectations around these specific interface choices.
The survey focused on two questions:
Whether users prefer interacting with an avatar, and if so, whether it should be a realistic human or an illustrated character.
The results showed a clear preference for realistic human avatars over illustrated avatars or no avatar.
Users indicated that a more human-like presence helps create a more realistic interview experience.
[Survey results chart: realistic human avatar vs. illustrated avatar vs. no avatar]
How users expect practice history to be organized: by date or by question categories.
The results showed that users preferred grouping practice history by question type, as it made it easier to review and compare performance across similar questions. Categorizing history by date was seen as less useful by some users.
[Survey results chart: history grouped by question categories vs. grouped by date]
AI Analysis Page
The AI analysis page went through two key iterations based on user feedback.
Initially, the analysis page showed only a single score without indicating the scoring scale. User feedback revealed that this made the score difficult to interpret, so I redesigned the score display to provide clearer context.
The initial version also did not include example answers, and users wanted clearer guidance on how to improve each part of the STAR framework. To address this, I introduced an example answer for each STAR component, generated from the user's own response, so users could see concretely how their answers could be strengthened.
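The core of the score redesign was pairing every number with its scale. A minimal sketch of that presentation logic, with stand-in component scores (in the real system these come from the AI analysis, and the 5-point scale is an assumption for illustration):

```python
# Minimal sketch of the redesigned score display: every score is shown
# alongside its scale. Component scores here are stand-in values.

STAR_COMPONENTS = ["Situation", "Task", "Action", "Result"]

def render_star_score(component_scores: dict, scale_max: int = 5) -> str:
    """Average STAR component scores and render them with explicit scale context."""
    overall = sum(component_scores[c] for c in STAR_COMPONENTS) / len(STAR_COMPONENTS)
    lines = [f"Overall: {overall:.1f} / {scale_max}"]
    for c in STAR_COMPONENTS:
        lines.append(f"  {c}: {component_scores[c]} / {scale_max}")
    return "\n".join(lines)

print(render_star_score({"Situation": 4, "Task": 3, "Action": 5, "Result": 2}))
```

Showing the per-component breakdown next to the overall number is what let users interpret a score without extra explanation, as the testing results below describe.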
Final Design
User Flow
System-Level Diagram
Demo Video
Result
The final prototype was evaluated through user testing and design critique sessions.
After revising the score presentation, the majority of users were able to correctly interpret their scores without additional explanation, addressing confusion present in the initial version.
In addition, most users found the STAR-based example answers helpful for identifying concrete improvements to specific parts of their responses.
Overall, these iterations improved the clarity, interpretability, and perceived usefulness of the AI feedback.
Next Step
Based on user testing and instructor feedback, several opportunities emerged for future iterations of the platform.
A key next step is improving personalization in AI feedback by incorporating users’ personal context, such as resumes or STAR notes, to reduce generic responses.
Future versions could also support company- and role-specific interview contexts to make practice more realistic.
Takeaway
Mocky began as a response to a common emotional experience: the frustration and self-doubt that follows repeated job interview rejections. Throughout the design and development process, I learned how powerful it can be to offer not just practice opportunities, but a way to help users see their own growth.
From interviews and surveys to A/B testing and iterative design, every stage of the project was shaped by real user needs, particularly the desire for structured feedback, visible progress, and a supportive interface.
Building this project pushed me to balance technical feasibility with user-centered decision making. I had to scope the features carefully, simplify the feedback loop, and stay grounded in what users actually wanted. One of the most important design lessons I learned is that even simple interfaces need emotional resonance, which is why choices like the realistic human avatar made a significant difference during testing.