INSURANCE TECH
Building trust where price usually wins
An AI-guided layer embedded inside an insurance quote flow, designed to close the trust gap and improve quote-to-purchase rates. Built entirely under ambiguity: no user research, no conversion baseline, no internal data access.

My Role
Domain
Insurance
Constraint
No Research, No Baseline
Framework
Google PAIR
Problem & Constraints
A conversion problem hiding a trust problem
The goal was to increase quote-to-purchase conversion year over year.
What made it difficult
Price is the dominant decision driver in insurance, and it's the one variable a company can't always win on. Users were arriving comparison-ready — increasingly using tools like Google's AI Overview feature, ChatGPT, Perplexity and other non-traditional channels before ever visiting the site.
The real signal came from sales: clients binding policies were already asking whether rates would rise next year. That wasn't post-purchase anxiety — it was evidence that the product was failing to build any mental model of how insurance pricing works before the user made a decision.
" I haven't had any claims for years. Why are my rates still going up? "
RECURRING PATTERN - SALES TEAM INSIGHT
What was missing
The project started without the inputs most discovery work relies on. Every design decision had to be justified by logic and principle rather than evidence.
No User Research
No Conversion Baseline
No Internal Data Access
No Proof of Concept
These constraints defined the approach:
Identify the highest-leverage intervention that didn't require new infrastructure, new data pipelines, or a validated hypothesis to justify building.
Current task flow
Approach
A content problem, not a technology problem
Rather than redesigning the quoter or building an AI agent, the approach was to find the single point where trust visibly broke down and treat it as a placement and language challenge. Using Google's PAIR guidebook as a foundation, the guiding principle became: help users form an accurate mental model of what they're buying and the justification behind the cost.
Four concepts were explored. One cleared every constraint bar: a rate transparency module — a dynamic callout triggered by profile inputs, surfacing two or three plain-language explanations of what's driving the user's premium. The content already existed inside actuarial teams. No ML pipeline, no chat UI, no new infrastructure required.
Price dominates insurance decisions not because users are purely price-driven, but because price is the only thing they can easily compare. Making the abstract concrete gives users something to evaluate beyond the number — and that shifts the decision frame.
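The selection logic behind such a module can be sketched simply. This is a hypothetical illustration, not the project's implementation: the factor names, weights, and copy here are invented stand-ins for content that would come from actuarial teams.

```typescript
// Hypothetical sketch of the rate transparency module's selection logic.
// All factor names, weights, and explanation copy are illustrative; in
// practice this content would be authored by actuarial teams.
interface QuoteProfile {
  zipCode: string;
  yearsClaimFree: number;
  vehicleAge: number;
}

interface RateFactor {
  id: string;
  applies: (p: QuoteProfile) => boolean;
  weight: number; // relative influence on the premium (illustrative)
  explanation: string; // plain-language copy shown in the callout
}

const FACTORS: RateFactor[] = [
  {
    id: "area-risk",
    applies: () => true, // location always contributes
    weight: 0.4,
    explanation:
      "Repair and claim costs in your area have risen over the past year.",
  },
  {
    id: "claim-free",
    applies: (p) => p.yearsClaimFree >= 3,
    weight: 0.3,
    explanation:
      "Your claim-free history is lowering your premium relative to your area.",
  },
  {
    id: "vehicle-age",
    applies: (p) => p.vehicleAge <= 2,
    weight: 0.2,
    explanation:
      "Newer vehicles cost more to repair, which raises comprehensive coverage.",
  },
];

// Surface the two or three most influential factors for this profile.
function selectCallouts(profile: QuoteProfile, max = 3): string[] {
  return FACTORS.filter((f) => f.applies(profile))
    .sort((a, b) => b.weight - a.weight)
    .slice(0, max)
    .map((f) => f.explanation);
}
```

The point of the sketch is that the "dynamic" behavior is just filtering and ranking static content against profile inputs — no ML pipeline required.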
Testing & Iteration
When data contradicts lived experience
Light user testing confirmed the rate transparency module as the best-received concept, but it surfaced a credibility gap: explanations referencing area risk factors were being challenged by users whose personal experience didn't match.
"I haven't seen a single accident in this area in the last year. Why does your data say otherwise?"
USER TESTING FEEDBACK
PAIR addresses this directly through calibrated confidence: a system that asserts statistical facts as ground truth will lose to a user's lived experience every time. Two A/B iterations were designed to test different answers.
Iteration A resolves the credibility gap passively — every factor shows a source panel beneath it with labelled pills for the data source, time window, and geographic radius. Hovering any pill reveals a tooltip explaining what it means. The location factor also gets a "Why this might differ from what you personally observe" callout that explains the scale difference proactively, without waiting for the user to push back. No interaction required — the information is just there.
Iteration B resolves it actively — the copy uses calibrated framing ("Our data suggests…") to signal honest uncertainty upfront, and the location factor gets a "Does this reflect your experience?" prompt with Yes / No buttons. Yes acknowledges and closes it cleanly. No expands a bridge explanation that validates the skepticism, walks through why personal observation and actuarial data diverge, and closes by pointing the user to the positive driver factor below — turning a potential objection into a moment of trust. The page auto-scrolls to the expanded explanation so nothing is missed.
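Iteration B's two mechanisms — calibrated framing and the Yes/No bridge — reduce to small pieces of logic. The sketch below is an assumption-laden illustration: the confidence tiers, phrasings, and state names are mine, not the project's.

```typescript
// Hypothetical sketch of Iteration B's logic. The confidence tiers,
// hedged phrasings, and state names are illustrative assumptions.
type Confidence = "high" | "medium" | "low";

// Calibrated framing: hedge the copy in proportion to how confident
// the underlying data is, rather than asserting it as ground truth.
function frameFactor(explanation: string, confidence: Confidence): string {
  switch (confidence) {
    case "high":
      return `Our data shows ${explanation}`;
    case "medium":
      return `Our data suggests ${explanation}`;
    case "low":
      return `Our data may indicate ${explanation}`;
  }
}

type FeedbackState = "collapsed" | "acknowledged" | "bridge-expanded";

// "Does this reflect your experience?" — Yes closes the prompt cleanly;
// No expands the bridge explanation that validates the skepticism and
// redirects the user to the positive driver factor below.
function onFeedback(answer: "yes" | "no"): FeedbackState {
  return answer === "yes" ? "acknowledged" : "bridge-expanded";
}
```

Framing and feedback are deliberately separate: the hedged copy works even if the user never touches the prompt, so the Yes/No interaction is purely additive.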
Reflection
What working without a net taught me
The constraints of this project demanded a different kind of rigor: building an argument from first principles, staying honest about what was known versus assumed, and designing for iteration rather than completion.
The PAIR framework was useful because it insists on honesty — about what the AI does, what it doesn't know, and where human judgement should remain in control. In a domain where users arrive skeptical and have been burned before, that honesty isn't just an ethical consideration; it's the mechanism by which trust, and ultimately conversion, gets built.
The minimum viable version ships as three to five contextual callout modules embedded in the existing quoter — no new infrastructure, no new data pipelines.
Next Steps
How this moves forward
I would define a metric that can signal value before purchase data accumulates.
Module engagement — whether users read, expand, or interact with the transparency callout — would be instrumented from day one. A high read-through rate with low abandonment on that step would be an early signal that the content is building the right context rather than creating friction, and it doesn't require waiting for conversion cycles to close.
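That leading indicator is cheap to compute once events are logged. A minimal sketch, with assumed event names and an assumed session log shape — the actual analytics schema would depend on the existing instrumentation stack:

```typescript
// Hypothetical engagement instrumentation. Event names and the session
// log shape are assumptions, not from the project.
type EngagementEvent = "viewed" | "expanded" | "interacted" | "step_abandoned";

interface SessionLog {
  sessionId: string;
  events: EngagementEvent[];
}

// Read-through rate: the share of sessions that at least viewed the
// module and did not abandon the quote step afterwards.
function readThroughRate(sessions: SessionLog[]): number {
  if (sessions.length === 0) return 0;
  const readThrough = sessions.filter(
    (s) => s.events.includes("viewed") && !s.events.includes("step_abandoned")
  ).length;
  return readThrough / sessions.length;
}
```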
I believe Iteration B should only move forward if Iteration A shows strong enough module engagement to justify the added interaction cost; stacking micro-interactions on top of content users aren't reading would compound the problem rather than solve it.