Wise Circles
Expense Sharing, Interaction Design, Fintech
2025
Product Design

Project Info
Problem
Travelers manage shared expenses across multiple apps with no way to split and settle in one place.
Solution
Trip-based groups inside Wise where users track, split, and settle expenses in any currency.
Goal
Reduce social friction around shared costs while driving organic sign-ups through group invitations.
Impact
75% of 20 test participants said they'd use the feature. Validated core flow, identified structural IA issues.
What I Owned
In a team of 4:
Led all research: desk research, surveys, competitor benchmarking
Identified the group expense-splitting gap in Wise's product through competitor benchmarking
Contributed to UI design, user flows, and early testing
Designed and ran an independent validation study via Maze (20 participants)
The Brief
The Challenge
The D&AD New Blood 2025 brief asked designers to help Wise reach new users organically. Wise already handles international transfers across 160+ countries and 40+ currencies. The question was: how can Wise create value that brings people to the platform through word of mouth, not ads?
The Constraint
Wise uses the mid-market exchange rate with no markup, making revenue from transparent fees instead of hidden margins. This "no hidden fees" principle became a core constraint: any new feature had to maintain that same transparency, especially around currency conversion.
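To make the constraint concrete, here is a toy sketch of what "no hidden fees" implies in code: the conversion always uses the mid-market rate, and the fee appears as a separate, visible line item rather than a markup baked into the rate. The function name, fee structure, and numbers are illustrative assumptions, not Wise's actual pricing logic.

```python
def convert(amount: float, mid_market_rate: float, fee_pct: float) -> dict:
    """Convert at the mid-market rate; charge the fee separately and visibly.

    Illustrative only: the fee structure here is an assumption,
    not Wise's actual pricing.
    """
    fee = round(amount * fee_pct, 2)  # itemised fee, shown to the user
    converted = round((amount - fee) * mid_market_rate, 2)
    return {"fee": fee, "rate": mid_market_rate, "recipient_gets": converted}

# e.g. sending 100.00 at a mid-market rate of 1.17 with a 0.5% fee
quote = convert(100.0, 1.17, 0.005)
```

The key property is that `rate` is always the mid-market rate the user can verify independently; only `fee` varies.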
Who We Designed For
Choosing a Segment

What We Found

The pattern was clear: this isn't just a logistics problem. It's a social friction problem. We weren't designing a calculator. We were designing a way to keep friendships intact.
Pain Points
Across our survey, competitor analysis, and desk research, the same pattern kept showing up. Managing shared expenses during group trips is messy, and six pain points stood out:
Confusion over splits: manually keeping track leads to errors, frustration, or mistrust
Potential group tension: money misunderstandings can dampen the trip's mood and create resentment
Currency confusion: hidden fees or bad exchange rates create fairness concerns
Payment inconsistencies: different banks and different fees make cost splitting more complicated
End-of-trip stress: large outstanding balances settled last minute or after returning home
Time-consuming logistics: constant calculations, note-taking, or back-and-forth chat messages

The Opportunity
Travelers track shared expenses outside Wise, then return later to settle up. This disconnect creates friction at social moments like dinners or group bookings. Bringing expense tracking and settlement into one place reduces that friction for users, while creating natural opportunities for others in the group to join Wise. Our hypothesis: if settling requires a Wise account, every circle becomes an invitation.

This led us to our design challenge:
How might Wise make group spending easy, so travelers split costs, not connections?
What We Prioritized
Shaping the Solution
To align the team and define a clear product direction, we used Airbnb's 11-star framework to visualise the end-to-end user experience. This helped us stretch beyond the expected and identify where the product potential sat: centralise group spending, anticipate user needs, and support real-time tools.
Using MoSCoW, we cut a lot. WhatsApp integration was too complex to validate in time. Receipt scanning addressed the wrong friction. The real problem wasn't recording amounts, it was splitting them fairly. We kept the focus tight: track, split, settle.
We mapped the full user journey across three phases before building anything. Before: awareness and group setup. During: expense logging, splitting, and real-time management. After: settlement and close-out. This gave us the full picture before we touched Figma.
What We Designed
Overview
Wise Circles has six steps across three core flows: creating a group, adding and splitting expenses, and settling up. Each one required trade-offs between simplicity, transparency, and the brief's growth goal. All screens follow Wise's existing design system to ensure consistency with the platform.
One Year Later
Why I Went Back
The D&AD project ended in March 2025. Nearly a year later, in February 2026, I went back. We had classmate feedback suggesting the design worked, but I wasn't confident. The testers knew us, knew the project, and were forgiving. I wanted to know if the design actually held up with strangers.
I wanted to answer three questions:
Can users complete the core flows?
Where do they get stuck?
Would they actually use this?
On my own initiative, I designed and ran a formal usability study using Maze with 20 participants recruited from Maze's panel (ages 25–40, travelers from UK and EU). No classmates, no friends, no safety net.
What I Tested
I tested the three core flows of Wise Circles with 20 participants on desktop via Maze. The prototype was designed for mobile, but at the time I didn't know Maze supported mobile testing.
Create a Circle
Add an expense and split it
Settle expenses with others

2026 Insights
How Users Actually Split Expenses
Before testing the prototype, participants answered an open question I included in the Maze study: "How do you usually split travel expenses?"
One person pays, settles later. The most common pattern. One person covers costs, reconstructs totals post-trip, then requests bank transfers.
Manual tracking is the default. Notes apps, mental math, spreadsheets. Only a few mentioned dedicated tools like Splitwise. Most rely on memory.
Currency friction is real. One participant opened Revolut just to split costs between an American and European bank account. Cross-border settlement is a pain point Wise is uniquely positioned to solve.
Social dynamics shape the method. Friend trips require careful tracking to avoid conflict. One participant noted that poor expense management "can turn into something that would prevent future travels."
This confirmed our hypothesis: the problem is real, it's social, and existing tools aren't solving it.
Results
Overview
After completing all three tasks, participants rated the feature on ease of use and overall interest. The high-level numbers look reasonable. But they hide the real story. The screen-level data told me exactly where the design broke.
Task 1: Create a Circle
Users struggled at the start and end of the flow. On the home screen, clicks were scattered everywhere because nothing pointed them toward creating a Circle. On the invitation screen, they didn't know where to tap to move forward. Most got there eventually, but not naturally. The concept was clear, the entry point wasn't. Only 30% said it was very easy to create a Circle.

[Chart: task completion overview and ratings for "How easy was it to create a circle?"]
Task 2: Add Expense & Split
85% success rate, 58.7% overall misclick rate. This was the longest flow and contained the study's most critical usability failure:
Screen 4 had a 100% misclick rate and scored 37, but the data needs context.
The Maze task asked users to "add expenses" without specifying that the expected path involved selecting two existing expenses and adding a third as a custom expense. Most participants selected from the list and tried to continue, with no reason to look for the second option.
The 100% misclick rate reflects ambiguous task framing more than UI failure, though the screen does have two competing actions that could benefit from clearer visual separation.
What works: Once users get past expense selection, the split flow is strong. The "How to split?" prompt (score: 91) and the equal split screen (score: 94) validated that the step-by-step approach works. The custom expense form (score: 94) was also near-frictionless.
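The step-by-step split that tested well rests on a simple mechanic. As a hypothetical sketch (not the production algorithm), an equal split computed in minor units (pence/cents) can distribute the rounding remainder across the first few members so shares always reconcile to the total:

```python
def split_equally(total_minor: int, n: int) -> list[int]:
    """Split an amount (in minor units, e.g. pence) equally among n people.

    The first `total_minor % n` shares carry one extra unit, so the
    shares always sum back exactly to the total. Illustrative sketch.
    """
    base, remainder = divmod(total_minor, n)
    return [base + (1 if i < remainder else 0) for i in range(n)]

shares = split_equally(10_000, 3)  # 100.00 split three ways, no lost penny
```

Working in minor units avoids the floating-point drift that makes manually computed splits fail to add up, one of the trust issues the pain-point research surfaced.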

Task 3: Settle Expenses
80% success rate (lowest of the three tasks), 60.7% overall misclick rate.
The problem screen: Screen 4 had an 85% misclick rate and scored 39. After settling one expense, users couldn't find how to settle the next. The 14-second average time suggests the settle action isn't where users expect it. The chat-based UI makes it hard to locate specific expenses when there are multiple to settle.
What works: The Quick Settle concept itself is validated. Once users reach the settle bottom sheet (Screen 3, score: 85) or bulk settle view (Screens 6–7, score: 100), the interaction is clear. The problem is getting there, not the settlement flow itself.
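A calculation like Quick Settle can be thought of as netting out who owes whom and collapsing the result into a few transfers. The sketch below uses a greedy largest-debtor-to-largest-creditor matching; it is a plausible illustration of the concept under my own assumptions, not Wise's actual settlement logic.

```python
from collections import defaultdict

def quick_settle(expenses: list[tuple[str, dict[str, int]]]) -> list[tuple[str, str, int]]:
    """Collapse a circle's expenses into a short list of transfers.

    `expenses` is a list of (payer, {member: owed_minor_units}).
    Greedy matching of the largest debtor to the largest creditor;
    an illustrative sketch, not the product's real algorithm.
    """
    balance: dict[str, int] = defaultdict(int)
    for payer, shares in expenses:
        for member, owed in shares.items():
            if member != payer:
                balance[payer] += owed   # payer is owed this share
                balance[member] -= owed  # member owes this share
    creditors = sorted((b, m) for m, b in balance.items() if b > 0)
    debtors = sorted((-b, m) for m, b in balance.items() if b < 0)
    transfers = []
    while creditors and debtors:
        c_amt, c = creditors.pop()  # largest creditor
        d_amt, d = debtors.pop()    # largest debtor
        paid = min(c_amt, d_amt)
        transfers.append((d, c, paid))
        if c_amt > paid:
            creditors.append((c_amt - paid, c))
            creditors.sort()
        if d_amt > paid:
            debtors.append((d_amt - paid, d))
            debtors.sort()
    return transfers
```

The point of the sketch is the user-facing payoff: a week of uneven spending reduces to one or two transfers, which is why the bulk settle screens tested so cleanly once users found them.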

What Users Said
Positive:
"Quick settle auto calculates is great"
"Just Settle or Request — no IBAN, no fingerprint, nothing extra."
"Split expenses at the click of a button, no messing around"
"The chat function looked useful so you can add context"
Critical:
"I couldn't see an overview of how much I owe someone before I settle the expenses"
"I am missing a smaller overview with all the expenses, just a simple list and clickable for more details. Now it is all in some sort of chat which makes me lose a little bit of oversight."
"It wasn't necessarily clear to me where I should be looking for each task. I did like using the app though once I could figure out where to go."
"Too many options and the layout felt messy"
What the Data Means
The data tells a clear story: the core value proposition is validated, but the information architecture needs work.
75% would use this feature. The concept of track-split-settle in one place resonates. The settlement mechanics work (Screens 6–7 scored 100, Quick Settle scored 85). The split flow works once you're in it (Screens 8–9 scored 91 and 94).
But two screens failed hard (scores of 39 and 48), and they share the same root cause: users can't find what they need inside a chat-based interface. The settle action is buried in the conversation. The circle entry point isn't obvious on Wise's home screen. A third screen (score: 37) had the worst misclick rate in the study, though that was partly inflated by how I set up the Maze task, a lesson in test design as much as UI design.
This isn't just a usability problem. If users can't complete settlement smoothly, they're less likely to invite others into the loop. The IA failure weakens the growth mechanism the whole feature is built around.
The fix isn't incremental polish. It's structural. A dedicated expense overview outside the chat, where users can see all expenses, balances, and settle actions in one scannable view. The chat should supplement the experience, not be the primary interface.
I deliberately stopped at diagnosis. The goal of this study was to validate whether the concept held, not to redesign the product. Iterating on the IA without retesting would just be guessing with better wireframes. The next step would be to prototype an overview-first structure and run a second round of testing to see if the navigation problems disappear.
Reflections
What Worked
Three things stood out:
MoSCoW forced us to cut ideas we liked and kept scope realistic. That discipline meant I could test the full flow end-to-end instead of fragments when I went back.
The split flow validated a design decision from early testing. Users got confused when all options were on one screen, so we broke it into steps. Maze confirmed this: the split prompt scored 91, equal split scored 94.
Going back to run validation gave me real data, not just classmate opinions. Screen-level Maze data revealed problems that high-level metrics hid. A 90% success rate sounds good until you see a 100% misclick rate on a single screen.
What I'd Do Differently
If I ran this study again, I'd change four things:
Define success and failure criteria before testing, not after. I interpreted the data reactively instead of measuring against predefined thresholds.
Give users a dedicated expense overview instead of embedding everything in chat.
Test on mobile. Desktop-based Maze testing may have inflated some misclick rates.
Write clearer task prompts. The "Add expense" task conflated two actions on one screen, inflating the misclick rate and muddying the signal.
Key Learnings
75% "would use" is good, but the 25% matters more. The people who said no pointed to real usability issues, and the screen-level data confirmed exactly where those issues live. A product can have the right value proposition and still fail on information architecture. In future projects, I'd give users a clear overview of actions and content before embedding them inside conversational UI, and I'd pay closer attention to how users navigate between tasks, not just within them.



