INDUSTRY
SOFTWARE ENGINEERING & UX DESIGN
CLIENT
PUBLIC
2025
RealView
About RealView
RealView is a web application that helps users practice job interviews through realistic, role-based simulations and instant feedback. Users can upload a real job description or choose from pre-made roles, and RealView generates tailored interview questions, provides performance feedback, and tracks improvement over time.
Built with Next.js 15, React, Tailwind CSS, Firebase, and OpenAI’s GPT-4 API, RealView explores how AI can assist people in developing confidence, clarity, and communication skills, without losing the human touch.
The Problem
Many people struggle to prepare for interviews in a way that feels authentic. Traditional mock interviews are time-consuming, and generic AI chat tools often lack structure or empathy. The challenge was to design an experience that feels personal, credible, and supportive, while maintaining consistency in AI behaviour.

Process & Challenges
I began by interviewing students and recent graduates to understand their biggest frustrations with interview preparation. The main insights were clear: feedback from AI tools often felt vague or robotic, users wanted to know why they received certain ratings, and the overall experience needed to feel calm and confidence-building, not like a test.
Using these findings, I mapped out conversational flows and micro-interactions in Figma that would mimic a natural coaching experience. From there, I built the AI feedback engine using structured JSON schemas, ensuring each response was consistent, interpretable, and emotionally balanced.
Each AI reply was designed to include:
A quote from the user’s own answer.
A 10-point rubric explaining the score.
A next-step suggestion written in supportive, approachable microcopy.
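The three-part structure above can be sketched as a TypeScript type. The field names and the example values here are my illustrative assumptions, not the actual schema used in RealView:

```typescript
// Illustrative shape of one structured feedback block; field names are assumptions.
interface FeedbackBlock {
  quote: string;    // excerpt quoted from the user's own answer
  score: number;    // rating on the 10-point rubric
  rubric: string;   // explanation of why this score was given
  nextStep: string; // supportive, actionable suggestion
}

// Example of what a well-formed block might look like.
const example: FeedbackBlock = {
  quote: "I led the migration over one weekend.",
  score: 7,
  rubric: "Concrete result, but the impact on the team was left implicit.",
  nextStep: "Try closing with one sentence on what changed for your teammates.",
};
```

Typing the feedback this way means the UI can render each part (quote, score, suggestion) in its own component, rather than displaying one opaque wall of model text.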
Early prototypes revealed several challenges. The biggest was AI reliability — responses sometimes lacked structure or emotional consistency. I solved this by refining prompt tone, enforcing schema validation, and adding evidence-based reasoning to every feedback block.
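Schema validation of this kind can be enforced with a runtime guard that rejects any model response failing the expected shape. This is a minimal sketch, assuming a JSON reply with quote, score, rubric, and next-step fields; the names and bounds are illustrative, not the production validator:

```typescript
// Runtime guard: reject any model response that doesn't match the expected
// feedback schema, so malformed output never reaches the UI.
function isValidFeedback(raw: string): boolean {
  try {
    const d = JSON.parse(raw);
    return (
      typeof d.quote === "string" && d.quote.length > 0 &&
      typeof d.score === "number" && d.score >= 1 && d.score <= 10 &&
      typeof d.rubric === "string" &&
      typeof d.nextStep === "string"
    );
  } catch {
    return false; // unparseable JSON fails validation outright
  }
}
```

When validation fails, the request can be retried or a recovery message shown, instead of rendering broken feedback.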
Another challenge was maintaining user trust during latency. Slow responses created uncertainty, so I introduced progress indicators, subtle audio cues, and friendly loading copy such as:
“Give me a moment, I’m reviewing your response carefully.”
These small touches reassured users that the system was thinking, not frozen.
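One way to keep long waits reassuring is to vary the loading copy with elapsed time. In this sketch, the first message is the copy quoted above; the later messages and the time thresholds are illustrative assumptions:

```typescript
// Map elapsed wait time to friendly loading copy, so longer waits feel like
// careful thought rather than a frozen screen. Thresholds are illustrative.
function loadingMessage(elapsedMs: number): string {
  if (elapsedMs < 3000) {
    return "Give me a moment, I'm reviewing your response carefully.";
  }
  if (elapsedMs < 8000) {
    return "Still with you, weighing what stood out in your answer.";
  }
  return "Almost there, putting your feedback together.";
}
```

A component can call this on an interval timer while awaiting the model, swapping the message as the wait grows.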
I also tackled error resilience by adding try-catch-finally logic, fallback tips, and clear recovery messages to prevent UI stalls during network interruptions.
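The recovery pattern can be sketched roughly as follows. The function and tip names are hypothetical; the point is the shape: a catch that surfaces a fallback tip instead of a stalled screen, and a finally that always releases the loading state:

```typescript
// Illustrative fallback tip shown when the network or model call fails.
const FALLBACK_TIP =
  "While we reconnect, try restating your last answer using the STAR method.";

interface FeedbackResult {
  ok: boolean;
  message: string;
}

// fetchFeedback is a hypothetical API call; setLoading toggles the UI spinner.
async function getFeedbackSafely(
  fetchFeedback: () => Promise<string>,
  setLoading: (loading: boolean) => void
): Promise<FeedbackResult> {
  setLoading(true);
  try {
    const message = await fetchFeedback();
    return { ok: true, message };
  } catch {
    // Surface a clear recovery message plus a fallback tip instead of stalling.
    return { ok: false, message: `We hit a connection issue. ${FALLBACK_TIP}` };
  } finally {
    setLoading(false); // always release the loading state, even on failure
  }
}
```

Because the finally block runs on every path, the spinner can never be left spinning after a dropped connection.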
Finally, I focused on visual cohesion, blending a technical AI workflow with a calming, confidence-focused interface. Using Raleway typography, soft purple gradients, and rounded components, the interface visually communicates approachability and growth over performance.

The Results
RealView taught me how to balance AI complexity with human emotion: how tone, pacing, and copy can transform a technical product into something that feels supportive. It was a lesson in designing not just for accuracy, but for trust.
Summary Outcomes:
Developed a high-fidelity, deployable prototype hosted on Vercel with a fully functioning AI pipeline.
Enhanced AI interpretability: feedback now quotes user answers, rates performance against a 10-point rubric, and provides targeted practice items.
Improved load time by 50% through pre-fetching and asynchronous data flow.
Conducted two usability tests confirming smoother navigation, clearer feedback, and stronger user engagement.
Delivered a cohesive, explainable AI experience that demonstrates how human-centred design and prompt engineering can coexist in real-world applications.

