Designing a Mobile-First AI-Guided Solar Purchase Experience

TIME

12 Months

ROLE

Product Designer

Team

Management

Product Manager

Development

Front-end developers

SEC. /Overview

The Overview

The Context

SunnyAI replaced the traditional solar sales rep: a mobile-first, AI-native platform that guides homeowners from address input to signed contract.

Solar is a high-consideration, long-commitment purchase. Operating as an AI-powered sales tool for local installer partners, we still couldn't get users through the flow — initial conversion was **0%**. With ~80–90 users entering monthly and a 10-sale target, something in the experience was failing at every step.

I led the end-to-end experience as the sole designer from 0→1 — every flow, from quoting through financing.

Initial Hypothesis — And Where It Failed

We optimized for speed: fast, self-serve, cheaper than competitors. Users engaged — explored sizing, reviewed projections — but nobody completed financing.

Engagement was there. Completion wasn't.

Funnel analysis and session recordings revealed three drop-off clusters, each a different trust threshold: PII collection, proposal review, financing.

SEC. /Breakdown 1
Breakdown 1 — PII Friction: Commitment Before Clarity

Method: Funnel Analysis + Marketing Expert Consultation

Funnel data showed a sharp drop-off before users had seen a proposal. Our hypothesis: we were asking too early — requesting personal data before demonstrating any value.

*Early version – long form on the landing page

We consulted a marketing specialist focused on conversion and user acquisition, who confirmed the hypothesis and introduced the concept of the "yes-train": each small commitment lowers resistance to the next, but only if the user has received something in return first.

Design rationale:

Instead of collecting all inputs upfront, I broke the flow into single-question pages. Each page explained why we were asking — a short line of context directly beneath the input — and the next page delivered a value beat in return.

The form wasn't shortened. It was restructured: one ask at a time, reason given, value returned — a commitment ladder where every yes was earned, not extracted.


Result

Early abandonment decreased. More users reached the proposal stage — exposing the next problem.

SEC. /Breakdown 2

Breakdown 2 — Proposal Friction: Invisible Scaffolding

Method: Session Recordings + Internal SME Workshop

Session recordings showed users scrolling the proposal, pausing on complex metrics, and exiting. The AI assistant was there — a floating "Ask any questions" button — but no one clicked it unprompted. Only when observation sessions directed users to try it did they engage, and the response was consistent: they found it genuinely helpful.

*Floating chat widget in the previous experience

The diagnosis:

Not an AI quality problem. A surface and discoverability problem. In high-cognitive-load situations, optional tools get ignored. Drawing on my prior experience in a solar organization and a domain SME on the team, the hypothesis formed quickly: users had lost the guided walkthrough that traditionally made proposals navigable.

Design rationale:

The floating button had to become part of the proposal's visual language. I redesigned entry points from a single floating trigger to contextual embedded cues throughout — interactive tooltip-style elements attached to specific proposal sections (system size, savings projection, financing terms) that opened the AI chat with a pre-loaded prompt relevant to that section.

The pre-prompt library came from a cross-functional team session — engineers, founders, PM — where everyone walked through the proposal and surfaced every question a user might have. That set became the contextual suggestions at each entry point.

One direction we rejected first: proactive AI prompts surfacing automatically. In practice, pop-ups competing with content felt intrusive on an already-constrained mobile screen. The final direction kept prompts passive but highly visible — always available at the point of confusion, never interrupting.


AI engagement increased. Contextual prompts converted previously ignored content into active AI interactions.

*3D site model generated with the auto-gen and edit tool; the image shows a user editing the roof in a 3D environment

SEC. /Breakdown 3

Breakdown 3 — Financing Architecture: Transparency Without Hierarchy Is Noise

Method: Session Data + Domain Knowledge + Competing Layout Sketches + Iterative Design Review + Cross-Industry Competitive Analysis

In traditional solar sales, the rep does the confidence-building work — helping customers understand what each option means for *their* situation, not just listing numbers. We had removed that layer entirely. Users weren't missing information. They were missing the ability to translate that information into a decision they felt good about.

Our design goal became concrete: a user who walks away understanding what each plan *protects them from*, not just what it costs — confident enough to commit without second-guessing.

*Site-modeling flow: Address → Manual define → ML-gen → Manual fix → Result

**The design problem had three interlocked dimensions, and we had to solve them as a system.**

The first was a UX and product tension at once: a stakeholder proposed auto-recommending subscription and reducing visible options to simplify the decision. I pushed back — in this context, limiting choices doesn't read as helpful, it reads as steering. Users needed to feel they were choosing, not being guided toward what benefited the business. That advocacy and the UX problem were the same question: *how do you show three plans fairly on a mobile screen, without making comparison feel like work?*

The answer was a tab architecture — Cash, Loan, and Subscription as equal top-level tabs with identical layout and instant switching. Stacking three plans vertically on mobile would have forced excessive scroll, breaking the user's mental thread between options. Tabs preserved comparison without demanding effort. The system could guide — it could not trap.

The second dimension

The second dimension was information hierarchy. I sketched four competing layout structures and brought them into design review. Primary/secondary layering won because it forced a single clarifying question: *what three numbers does a user need to hold in their head when they close this page?* From that: monthly payment, monthly savings, before vs. after utility bill. Those three became visually dominant in every plan tab. Everything else — projections, rate assumptions, compliance details — moved into expandable secondary layers. Nothing removed. Everything structured around what actually drives a decision.

*Selected screenshot of the pre-defined shapes approach flow

The third dimension

The third was how each plan communicated its real value. "Lease" carried heavy negative associations in solar — asset extraction, lock-in, complexity. We renamed it "subscription," which more accurately reflected what it was: maintenance included, warranty covered, predictable monthly cost. But naming alone didn't move behavior. Cross-industry analysis of SaaS and home services revealed why: subscription models win not on ROI, but on *risk reduction*. I redesigned the plan card presentation to lead with what subscription protects against — zero upfront, no surprise costs, no ownership complexity — rather than competing on long-term savings numbers it would lose.

SEC. /The Outcome

The Outcome

Quote submission reached **26%** in the first month post-launch, up from 0%. Against ~80–90 monthly users and a 10-sale target, this was meaningful first evidence the trust architecture was working. The experience shifted from automated transaction to structured decision support — users leaving the financing stage able to clearly articulate what they'd pay and save.

**If rebuilding:** The subscription messaging evolution was informed by funnel data and behavioral observation, not formally A/B tested. That's the piece I'd validate more rigorously.

What I learned

01.

Earning commitment is a sequencing problem

Users don't resist giving information; they resist giving it before they've received something worth trusting. The form wasn't the barrier. The order was.

02.

AI integration is a structural decision, not a feature placement

Embedding the assistant as a floating button put it outside the experience; embedding it inside the proposal's content layer made it part of the decision.

03.

Confidence is not the same as information

Users had everything they needed to decide. What they were missing was a way to translate numbers into a choice that felt right for their situation.

Email • WAGAODESIGN


/WAGAO.DESIGN

©2024