Personal Workspace / Simulated Agents / Database Code #3

Improve conversion from free users to Super Duolingo

Date: 11/19/25

Time spent querying: 1hr 2m

Accuracy: 73%

N = 12,483

COHORT: IPG Conversation Agent

SENTIMENT SCORE: 0.82

TOTAL FEEDBACK VOLUME: 29.3K

FILTER

SHOWING 4 PATTERNS OUT OF N = 12,483 USERS AND 12 SOURCES

Repetition Fatigue

Users describe lessons as repetitive after Day 3.

Agents: 6,128 · Prevalence: High · Confidence: 0.82

Onboarding Cognitive Overload

Users are overwhelmed by the first-time user experience.

Agents: 5,303 · Prevalence: High · Confidence: 0.81

Motivation Dependence on Gamification

Users say that gamification isn't consistently motivating.

Agents: 2,561 · Prevalence: Medium · Confidence: 0.74

Low Perceived Value

Users don't believe the value is worth the price.

Agents: 974 · Prevalence: Low · Confidence: 0.72
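The prevalence labels on these cards track the share of simulated agents matching each pattern. A minimal sketch of how such a label could be derived, assuming hypothetical cutoffs (the 35% and 15% thresholds are illustrative, not the actual system's):

```python
def prevalence_label(matched: int, cohort_size: int) -> str:
    """Map a pattern's matched-agent share to a coarse prevalence bucket.

    The 35% / 15% cutoffs are illustrative assumptions, not the
    real system's thresholds.
    """
    share = matched / cohort_size
    if share >= 0.35:
        return "High"
    if share >= 0.15:
        return "Medium"
    return "Low"

# The four patterns above, against the N = 12,483 cohort:
patterns = {
    "Repetition Fatigue": 6128,
    "Onboarding Cognitive Overload": 5303,
    "Motivation Dependence on Gamification": 2561,
    "Low Perceived Value": 974,
}
for name, matched in patterns.items():
    print(name, prevalence_label(matched, 12_483))
```

With these cutoffs, the computed buckets reproduce the High / High / Medium / Low labels shown on the cards.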

Filter By: Common Patterns · Question Number



INFINITE PREDICTIONS GROUP (IPG)

ROLE

Founding Product Designer

SKILLS

Product Strategy

Systems Thinking

Cross-Functional Leadership

TIMELINE

3 weeks

OVERVIEW

Building AI simulations of a company's target users, so product managers can capture insights in 1/100th of the time.

With a model based on real human data, IPG allows PMs to continuously engage with simulated human agents and make faster, more informed decisions.

THE SOLUTION

Using real human data, generate thousands of people at once, and interact with them to gain insights.

#1 FIND PATTERNS.

#2 TALK TO USERS— ASAP.

#3 UNCOVER DEEPER INSIGHTS.

#1 FIND PATTERNS.

CURRENT SPACE

Small product teams have never had more data, and yet they know less than ever about their users.

Even the best PMs rely on fragmented, biased, or stale signals to make decisions. Traditional user research is slow, expensive, and limited to a tiny fraction of the actual user base.

SO WHAT?

Access to users — and the ability to learn from them quickly — is structurally scarce and operationally heavy.

Every product team today faces the same systemic limitation: they can only talk to a fraction of their users, a fraction of the time.

[Chart: only 1/5 of users are ever reached.]
THE PROBLEM

User research is expensive and time-consuming for small teams, leaving many PMs to rely on proxy or intuition.

THE GOAL

Enable PMs to access user-grounded insights continuously without requiring dedicated research teams or long timelines.

THE GUIDING QUESTION

How can we seamlessly eliminate research bottlenecks so PMs can move faster with confidence?

[Timeline, Apr–Aug: user research creates a timeline bottleneck across the discovery, validation, and definition phases, leaving internal research with low-confidence insight.]
PMs don't simply want mass data; they need the insights that matter. But how do they get there?

How do we design a workflow that surfaces insights faster than traditional research processes?

LIGHTBULB MOMENT

Real PM workflows are nonlinear. LLMs allow PMs to generate data and explore it however they want, furthering their autonomy.

Knowledge Funnel: Mystery → Heuristic → System → Insight

EARLY EXPLORATION

PRODUCT THINKING

How should PMs explore thousands of responses?

Exploring the best possible interaction model for a PM's workflow.

ITERATION #1

Traditional filtering was too heavyweight.

Personal Workspace / Simulated Agents / Educational Users


FILTERS

Filter your view of the database.

SEGMENT: All Segments

APPLY FILTERS

CONFIDENCE

USER ID · LENGTH · SENTIMENT · QUALITY

USER #1 · 10m15s · Negative · High
USER #2 · 7m43s · Average · Average
USER #3 · 2m15s · Average · Low
USER #4 · 2m57s · Negative · Medium
USER #5 · 8m32s · Good · Medium
USER #6 · 2m15s · Average · Low
USER #7 · 3m24s · Negative · High
USER #8 · 2m17s · Good · Low
USER #9 · 5m29s · Negative · Average
USER #10 · 7m45s · Average · Medium

I hypothesized that PMs would want direct access to individual responses so they could maintain trust in the raw data. However, this approach requires the PM to already know what they're looking for.

ITERATION #2

A chatbot as a filter results in bias.

PMs had to know what to ask the chatbot, which resulted in a prompting bias. If they looked specifically for "Pattern A", they would've never known "Pattern B" was a problem!

ITERATION #3

A single chat interface felt like a black box and lacked ethos.

A lone “ask anything” box gave no sense of structure or grounding. It offered answers, but not the transparency or credibility PMs needed.

ITERATION #4

Emerging patterns from the system


Improve conversion from free users to Super Duolingo

VIEW REPORT


SHOWING 4 PATTERNS OUT OF N = 12,483 USERS AND 12 SOURCES

[Mockup: four emergent pattern cards, each titled by theme (e.g. Onboarding Cognitive Overload), surfaced automatically from the cohort.]

I hypothesized that by grouping responses into patterns first, it would reduce complexity and help PMs start from higher-level insights instead of raw volume.
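In spirit, the grouping step clusters free-text responses into candidate patterns before any PM sees an individual row. A toy sketch, assuming a naive keyword-matching approach (the keyword lists are invented; a real system would cluster with embeddings):

```python
from collections import defaultdict

# Hypothetical theme keywords, invented for illustration; a production
# system would cluster responses with embeddings, not hand-written lists.
THEMES = {
    "Repetition Fatigue": ("repetitive", "same lesson", "boring by day"),
    "Onboarding Cognitive Overload": ("overwhelmed", "too many steps", "confusing start"),
}

def group_responses(responses):
    """Bucket raw responses into candidate patterns by keyword match."""
    groups = defaultdict(list)
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                groups[theme].append(text)
    return groups

groups = group_responses([
    "The lessons feel repetitive after a few days.",
    "I was overwhelmed by the signup flow.",
])
```

Even this crude version shows the design payoff: the PM's first view is a handful of themes with attached evidence, not thousands of raw rows.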

However… there were some tradeoffs with emerging patterns:

  1. Trust

Why should a PM trust a pattern they didn't find themselves?

  2. Comprehension

How does a PM go from seeing a pattern to understanding why it exists?

KEY QUESTION:

How might we help product managers trust patterns enough to act on them, without constraining their autonomy?

SOLUTION:

A two-layer system!

The overview stays intact while the PM investigates. They can go deep on one pattern and come back without losing their place.
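The two-layer idea can be modeled as a small piece of navigation state: opening a pattern pushes a detail layer without discarding the overview, and closing it restores exactly where the PM left off. A sketch (the class and field names are my own, not the product's):

```python
class PatternExplorer:
    """Two-layer navigation: a persistent overview plus an optional detail layer."""

    def __init__(self, patterns):
        self.patterns = patterns      # the overview list, never discarded
        self.scroll_position = 0      # where the PM is in the overview
        self.open_pattern = None      # detail layer, or None when on the overview

    def open_detail(self, name):
        if name in self.patterns:
            self.open_pattern = name  # overview state is left untouched

    def close_detail(self):
        self.open_pattern = None      # PM returns to the same overview spot

explorer = PatternExplorer(["Repetition Fatigue", "Low Perceived Value"])
explorer.scroll_position = 3
explorer.open_detail("Repetition Fatigue")
explorer.close_detail()
# scroll_position is still 3: the PM has not lost their place
```

The design choice is that detail is additive state layered over the overview, never a replacement for it.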

SYSTEMS THINKING

Connecting the two surfaces

PMs move between the overview and detail constantly. What helps them decide which pattern to open?

SOLUTION:

The pattern card

Jumping from the index to the full detail view is too much, too fast; the card bridges that gap.

Repetition Fatigue

Users consistently describe lessons as becoming repetitive after Day 3 of the program. Multiple segments report declining engagement tied to perceived redundancy in content structure and pacing.

Matched Users: 6,128 · Prevalence: High · Confidence: 0.82

Repetition Fatigue

MATCHED USERS: 6,128 · PREVALENCE: High · CONFIDENCE: 0.82

Onboarding Cognitive Overload

Users are overwhelmed by the first-time user experience.

Agents: 5,303 · Prevalence: High · Confidence: 0.81

I explored what info a PM needs at a glance to decide if a pattern is worth opening.

CARD EXPLORATION #1

Prioritizing context for a pattern

CARD EXPLORATION #2

Prioritizing comparison

FINAL DESIGN

Balancing both context and scannability

HOME STRETCH!

Final Solutions

After establishing the interaction system and components, I translated them into full flows:

FLOW #1

INTERVIEW CAMPAIGN CREATION

CONTEXT-AWARE QUESTION GENERATION

Kickstart research in seconds.

PMs describe their goal, and the system automatically generates targeted, context-aware interview questions grounded in user behavior, evidence, and, most importantly, their goals.
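One way such goal-grounded generation can work is by slotting the PM's goal into question frames; a deliberately simple sketch (these templates are invented for illustration, where a real system would prompt an LLM with the goal plus behavioral evidence):

```python
# Hypothetical question frames; a real system would prompt an LLM with
# the PM's goal plus behavioral evidence instead of static strings.
TEMPLATES = [
    "What almost stopped you the last time you tried to {goal_verb}?",
    "Walk me through the moment you decided whether to {goal_verb}.",
    "What would need to change for you to {goal_verb} this week?",
]

def generate_questions(goal_verb: str) -> list[str]:
    """Turn a PM's goal into a starting set of interview questions."""
    return [t.format(goal_verb=goal_verb) for t in TEMPLATES]

questions = generate_questions("upgrade to Super Duolingo")
```

The point is the workflow shape: the PM supplies one goal statement and immediately gets a non-blank-page starting guide to edit.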

REVISE, REFINE, FOLLOW-UP

Edit, refine, and deepen your questions with a single prompt.

PMs can refine questions, adjust tone, or add follow-ups, ensuring every question maps back to the insight they're trying to uncover. AI recommendations are secondary suggestions rather than primary directives, preserving PMs' sense of autonomy.

ITERATE SMARTER AND FASTER.

Generate a complete interview guide in an instant.

PMs get a research starting guide in only seconds, without the blank page.

FLOW #2

TALK TO YOUR USERS

DYNAMIC CHAT INTERFACE FLOW

Receive instant answers from the simulated agents.

After the interview, PMs can talk to the full interviewed database, or to specific cohorts of similarly patterned individuals.

HYBRID FILTERING SYSTEM

Segment the users and patterns you want to interact with.

GRANULARITY WITHIN DATA

Zoom into any segment in one click; the dashboard shifts dynamically to match PM needs.

Instead of overwhelming PMs with hundreds of filters, the system reveals insight layers progressively. Each step gets more specific only when PMs ask for it.
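Progressive disclosure of filters can be sketched as a layered tree that only exposes the next level of criteria when asked; the layer names below are illustrative, not the product's actual taxonomy:

```python
# Illustrative filter hierarchy: each level is revealed only on request.
FILTER_LAYERS = {
    "Segment": {
        "Free Users": {
            "Sentiment": ["Negative", "Average", "Good"],
        },
        "Trial Users": {
            "Session Length": ["< 3m", "3-8m", "> 8m"],
        },
    },
}

def reveal(path):
    """Walk the layer tree along `path`, returning only the next choices."""
    node = FILTER_LAYERS
    for step in path:
        node = node[step]
    return list(node) if isinstance(node, dict) else node

# Each call surfaces one more layer of specificity:
top = reveal([])                # ["Segment"]
segments = reveal(["Segment"])  # ["Free Users", "Trial Users"]
```

The PM never sees hundreds of options at once; each `reveal` step exposes only the choices relevant to the path already taken.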

USER-LEVEL DEEP DIVES

Truly, any user.

PMs can click any individual user in a cohort to inspect their personal habits. It takes what would normally require hours of 1:1 interviews and condenses it into a deeply contextual profile.

Reflections

  1. Balancing AI and design.

Building with agentic systems has been intriguing. The hard part is knowing when to stay out of the way: too much adaptation and people feel like they've lost control; too little defeats the point.

  2. Always design in systems!

Nothing was designed in isolation; every component depended on another. It's important to design for scalability and further expansion of the interaction system rather than siloing into one feature.

  3. Prioritize ruthlessly.

Before diving into the project, it was important to be cognizant of our business goals (and constraints!) to understand what needed attention and what didn't.

HELLO THERE!
Always lookin' for interesting projects to work on. Reach out
and let's grab a coffee!

FIND ME AT: