Repartee AI
Designing Trustworthy Interfaces for Custom AI Agents
Repartee AI is a startup building a customizable AI assistant platform focused on minimizing hallucination in LLM responses. I served as the product manager overseeing user experience, partnering closely with an engineering-focused PM to define and execute the product vision. We reported directly to the CEO and led cross-functional collaboration between engineering and design.
Project Timeline
8 Months
Team
1 UX-Focused PM (me)
1 Engineering PM
1 Product Owner
5 Engineers
3 Designers
My Role
Market Research
UX Strategy
UX Design
Tools
Notion
Google Sheets
Figma
Slack
“We built an AI that could talk to documents. But what if it told you the wrong thing — confidently?”
Overview
Context
Retrieval-Augmented Generation (RAG) systems are widely used in enterprise AI, but even with retrieval, LLMs still hallucinate. That is a critical risk for high-stakes use cases.
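For readers unfamiliar with the retrieval step of RAG, it can be sketched in a few lines. This is a toy illustration only: bag-of-words overlap stands in for real embedding search, and all names are illustrative, not Repartee AI's implementation.

```python
def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank document IDs by naive word overlap with the query.

    Real RAG systems use embedding similarity instead of word overlap,
    but the shape is the same: score every document, return the top k,
    and feed those passages to the LLM alongside the user's question.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda doc_id: len(query_words & set(docs[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The failure mode motivating this project sits one step later: even when `retrieve` returns the right passages, the model may generate claims the passages do not support, which is why retrieval alone is not a hallucination fix.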
Business Problem
Enterprise teams needed visibility and control around hallucination — not just vague confidence scores. Our MVP didn’t yet reflect that.
Problem Statement
AI agents often produce unreliable or misleading outputs (hallucinations), but users lack transparency and control, especially in early-stage platforms built on open-source models.
Target Audience
Tech-savvy early adopters exploring custom AI agents
Enterprise clients and investors expecting stable, low-risk AI applications
Both groups need clarity, control, and confidence over AI outputs
“I don’t care if it’s 80% accurate — I care when and why it’s wrong.” - early user
HMW Statement
How might we reduce uncertainty for technical and business users reviewing AI model outputs?
Discovery
What & So What
Cross-team interviews revealed a disconnect between backend flexibility and user comprehension: users needed clearer guidance and feedback.
A competitive analysis of 7+ GenAI tools (LangChain, Claude, Pinecone, RunPod) showed that most AI platforms overwhelmed users with technical complexity.
Rapid prototyping validated that visual clarity + modular navigation reduced task drop-off.
Iteration based on team feedback led to consolidated UI patterns and scalable layout templates.
Solution & Result
My Contributions
Redesigned key flows: Billing, Notifications, Landing Page, Monitoring Dashboard
Created high-fidelity prototypes with responsive layouts; supported dev handoff via Figma specs
Introduced modular components that scaled across 5+ use cases
Delivered before/after flows to stakeholders and investors, enhancing product clarity and credibility
Helped frame the value proposition visually: “Custom AI, Zero Hallucination Tolerance”
Key projects:
🌱 HuupAI Landing Page – Highlighted use cases and visualized safety mechanisms
📚 Librarian AI Assistant – Designed conversational UX for document navigation
Design Showcase
Chat-like AI Hallucination Detection UI
Show: Final interface (GPT-style chat + hallucination alert layer)
Annotate: Confidence scoring, feedback triggers, source highlighting
Before/after: Early version vs. final clarity-focused version
Hallucination Detection Dashboard
Show: Query logs, confidence analytics, source trace tree
Callouts: How information architecture reflects enterprise usage
Landing Page V2
Before/after layout comparison
Headlines that clarified value prop
CTA improvements + feature scannability
Visual Add-ons (optional in Figma)
Competitive landscape matrix (features vs. competitors)
Product narrative diagram: “Why Trust is the Product”
IA map for platform tools (LLM Studio, ML Studio, Admin)

Product Strategy & UX Architecture
My Contributions
Designed an async documentation system to streamline PM–engineer–design collaboration (reduced meetings by 50%)
Co-led roadmap prioritization: positioned hallucination detection as our core product wedge
Led the Landing Page V2 redesign to clarify product messaging for both developers and investors
Key UX Flows
Hallucination flagging & explanation in chat-style UI
Confidence score thresholds + inline source matching
Dashboard overview: track failure rates across queries
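The flows above can be sketched as a simple flagging policy. This is a minimal illustration of the threshold-plus-source-matching idea, not Repartee AI's actual engine; the thresholds, field names, and labels are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative thresholds: claims below WARN are flagged for review,
# claims below FAIL are surfaced as likely hallucinations.
WARN_THRESHOLD = 0.75
FAIL_THRESHOLD = 0.40


@dataclass
class Claim:
    text: str                  # one sentence from the model's answer
    confidence: float          # verifier confidence in [0, 1]
    source_id: Optional[str]   # matched retrieval source, if any


def flag_claim(claim: Claim) -> str:
    """Map a claim to a UI state: 'ok', 'warn', or 'fail'."""
    # Unsourced claims are never shown as fully trusted,
    # regardless of the model's self-reported confidence.
    if claim.source_id is None:
        return "fail" if claim.confidence < WARN_THRESHOLD else "warn"
    if claim.confidence < FAIL_THRESHOLD:
        return "fail"
    if claim.confidence < WARN_THRESHOLD:
        return "warn"
    return "ok"


def failure_rate(claims: List[Claim]) -> float:
    """Share of claims flagged 'fail' -- the dashboard's headline metric."""
    if not claims:
        return 0.0
    return sum(flag_claim(c) == "fail" for c in claims) / len(claims)
```

In the chat UI, `warn` and `fail` states would drive the inline alert layer and source highlighting, while `failure_rate` aggregated per query or per user would feed the monitoring dashboard.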
Outcomes & Impact
| Metric | Impact |
| --- | --- |
| Enterprise user adoption | ⬆️ 35% |
| Retention rate (4 weeks) | ⬆️ 20% |
| Dev/design sync efficiency | ⬇️ 50% sync meeting time |
| Investor narrative alignment | ✅ Helped close the product-market fit story |
Reflection
✅ What went well: Fast iteration cycles, strong collaboration with PMs, visual storytelling that resonated with stakeholders
🔄 What I’d do differently: Push for earlier user testing beyond internal feedback
💭 Even better if…: We had more longitudinal usage data and user interviews
What I learned
How to make LLM risks legible through good UX
Balancing vision + feedback + feasibility in ambiguous startup environments
Design is not just about screens — it’s also about the story you tell
What If I Had More Time?
A/B Test the hallucination alert UI (confidence clarity, placement)
Design a “mitigation recommendation” layer (e.g., suggest alternate phrasing)
Add user feedback loop on flagged hallucinations to train detection engine
Mobile support: Quick preview of dashboards and flagged queries on the go
Analytics Drill-down UI: Click to see per-query or per-user patterns
Takeaways
Good UX isn't just about usability—it’s about trust design, especially in AI. Designers must mediate between user clarity and backend complexity.
This project helped me sharpen my ability to:
Communicate complex AI problems to users and stakeholders
Build clarity-first products in the B2B AI space
Translate abstract risks into usable, trustworthy interfaces
Thank you.