— AI Product · PM-minded Design · 2024

Repartee AI

Designing for trust when the AI doesn't always tell the truth.

Repartee AI is a startup building a customizable AI assistant platform focused on minimizing hallucination in LLM responses. I served as the product manager overseeing user experience, partnering closely with an engineering-focused PM to define and execute the product vision. We reported directly to the CEO and led cross-functional collaboration between engineering and design.

Project Timeline

8 Months

Jan – Sep 2024

Team

1 UX-focused PM (me)

1 Dev-focused PM

1 Product Owner

3 Developers

2 AI/ML researchers

1 UI Designer

My Role

Product Design

Research

R&D

Tools

Figma

Notion

Lark

Google Sheets

Slack

“We built an AI that could talk to documents. But what if it told you the wrong thing — confidently?”

Goal

  1. Make AI uncertainty visible, not hidden.

  2. Translate probability → plain language (see the sketch after this list).

  3. Bridge engineer dashboards and business decision tools.
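
For the second goal, the mapping from a raw model confidence score to plain language can be as small as a threshold table. Below is a minimal sketch in TypeScript; the cut-offs, labels, and function name are illustrative assumptions, not the values we shipped.

```typescript
// Illustrative mapping from a 0-1 confidence score to a plain-language
// label. The thresholds and wording here are hypothetical examples,
// not Repartee AI's shipped values.
type ConfidenceLabel = {
  label: string;   // what the user sees next to the answer
  detail: string;  // one-line explanation shown on hover
};

function describeConfidence(score: number): ConfidenceLabel {
  if (score >= 0.9) {
    return { label: "Well supported", detail: "Backed by matched source passages." };
  }
  if (score >= 0.6) {
    return { label: "Partially supported", detail: "Some claims lack a direct source match." };
  }
  return { label: "Needs review", detail: "We couldn't verify this against your documents." };
}

// Example: a 0.72 score renders as "Partially supported" in the chat UI.
console.log(describeConfidence(0.72).label);
```

Keeping the mapping this small is the point: users asked for plain language, not probabilities.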

Solution

A Model Detection Dashboard

The Problem

LLM hallucination isn't a bug. It's a trust crisis.

In 2024, enterprises were starting to seriously adopt LLMs — but every team we talked to had the same frustration: the AI sounded confident even when it was wrong. There was no signal, no way to tell a reliable output from a fabricated one.

Repartee AI's core thesis was to fix exactly this: build a platform that could surface hallucinations inline, so users could verify AI outputs without leaving the workflow.
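
To make "surface hallucinations inline" concrete: one way to model a flagged response is as the answer text plus span-level annotations, each tied back to the source passage it was checked against. The TypeScript shapes below are a sketch of that idea; every type and field name is hypothetical, not Repartee AI's actual schema.

```typescript
// Hypothetical data model for an inline-flagged LLM response.
// All names are illustrative, not Repartee AI's actual schema.
interface SourceMatch {
  documentId: string; // retrieved document the claim was checked against
  excerpt: string;    // supporting (or contradicting) passage
  similarity: number; // 0-1 match score between claim and excerpt
}

interface FlaggedSpan {
  start: number;          // character offsets into the answer text
  end: number;
  confidence: number;     // model confidence for this claim, 0-1
  verdict: "supported" | "unsupported" | "contradicted";
  sources: SourceMatch[]; // evidence rendered inline next to the claim
}

interface AnnotatedResponse {
  answer: string;
  spans: FlaggedSpan[];   // drawn as highlights in the chat UI
}
```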

The real problem wasn't that the AI made mistakes. It was that users couldn't tell when it was making them.

Internally, we had our own version of this problem. The company's direction shifted three times in 8 months — AI platform, then ML ops, then Web3 infrastructure. As the UX-focused PM, my job wasn't just to design. It was to keep the product moving when the roadmap kept changing underneath us.

HOW MIGHT WE

How might we reduce uncertainty in AI-generated outputs so users can confidently review, verify, and act — without leaving their workflow?

Problem

When Teams Can’t Trust Their Own Models

Pain Point 1

  • No visibility into how reliable model predictions are

Pain Point 2

  • PMs and execs can’t make data-backed decisions

Pain Point 3

  • ML engineers overloaded verifying results manually


Research

Activities

  • Competitive Analysis: benchmarked 7+ GenAI tools, including LangChain, Claude, Pinecone, and RunPod

  • Stakeholder Interviews: surfaced confusion around hallucination signals

  • Workflow Mapping: mapped end-to-end user journey in enterprise RAG setups

Insight

Users wanted actionable transparency — not technical complexity.

Product Strategy & UX Architecture

My Contributions

  • Designed an async documentation system to streamline PM–engineer–design collaboration (reduced meetings by 50%)

  • Co-led roadmap prioritization: positioned hallucination detection as our core product wedge

  • Led the Landing Page V2 redesign to clarify product messaging for both developers and investors

Key UX Flows

  • Hallucination flagging & explanation in chat-style UI

  • Confidence score thresholds + inline source matching (sketched below)

  • Dashboard overview: track failure rates across queries
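
To illustrate the second flow above, here is a minimal sketch of threshold-based flagging with inline source matching. It assumes claims and document chunks have already been embedded as vectors and uses cosine similarity to find the best supporting chunk; the threshold, types, and function names are assumptions for illustration, not our production pipeline.

```typescript
// Minimal sketch: flag a claim when its best source match falls below a
// threshold. Everything here (the 0.6 cut-off, the Chunk shape, the
// function names) is illustrative, not production code.
interface Chunk {
  documentId: string;
  text: string;
  vector: number[]; // precomputed embedding of the chunk
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const FLAG_THRESHOLD = 0.6; // below this best-match score, flag the claim

// Given a claim's embedding and the retrieved chunks, find the best
// supporting chunk and decide whether to flag the claim in the UI.
function matchClaim(claimVector: number[], chunks: Chunk[]) {
  let best: { chunk: Chunk; score: number } | null = null;
  for (const chunk of chunks) {
    const score = cosine(claimVector, chunk.vector);
    if (best === null || score > best.score) best = { chunk, score };
  }
  return {
    flagged: best === null || best.score < FLAG_THRESHOLD,
    source: best ? best.chunk : null, // shown inline next to the claim
    score: best ? best.score : 0,
  };
}
```

Per-claim flags like this are also what let the dashboard aggregate failure rates across queries, which is the third flow above.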

Outcomes & Impact

  • Enterprise user adoption: ⬆️ 35%

  • Retention rate (4 weeks): ⬆️ 20%

  • Dev/design sync efficiency: ⬇️ 50% sync meeting time

  • Investor narrative alignment: ✅ Helped close the product-market fit story

Reflection

What I learned

  • How to make LLM risks legible through good UX

  • Balancing vision + feedback + feasibility in ambiguous startup environments

  • Design is not just about screens — it’s also about the story you tell

What If I Had More Time?

  • A/B Test the hallucination alert UI (confidence clarity, placement)

  • Design a “mitigation recommendation” layer (e.g., suggest alternate phrasing)

  • Add user feedback loop on flagged hallucinations to train detection engine

  • Mobile support: Quick preview of dashboards and flagged queries on the go

  • Analytics Drill-down UI: Click to see per-query or per-user patterns

Takeaways

Hallucination detection isn’t just a backend feature — it’s a trust layer. And trust is a design problem.

This project helped me sharpen my ability to:

  • Communicate complex AI problems to users and stakeholders

  • Build clarity-first products in the B2B AI space

  • Translate abstract risks into usable, trustworthy interfaces

Thank you.

Product Builder & Vibe Coder with Design Thinking Mindset

2026. Designed with ♡ in Seattle.
