The cost of UX: balancing cost, expertise, and impact

Let’s talk about the real cost of UX design, beyond the numbers and into what actually matters.

Hand pointing to chart on paper with a calculator next to it
Photo by Jakub Żerdzicki on Unsplash

After years of UX design work, I’ve learned something interesting: it’s not always about the user. Sure, desirability is crucial for end-customer products, but when you’re dealing with business-to-business tools, internal systems, or complex enterprise solutions, what’s feasible and viable often outweighs pure desirability.

I’ve had my share of battles trying to convince stakeholders that user research is valuable. That’s why understanding the business side of design and the impact we make is crucial. We need to get comfortable with discussing risk, cost, and expected outcomes in ways that make sense to business leaders.

And no, saying “good design is good business” isn’t enough. We need to prove it, or at least try to.

In this article, I’ll share a practical model I’ve developed for understanding UX ROI by balancing cost-efficiency, risk, user testing, and the impact of design expertise.

Metric for UX cost efficiency

The real cost of design work

Let’s start with the basics. Here’s a simple formula I use to calculate the direct cost of UX design work:

Cost of Work (COW) = H × T × (1 + R)

Where:

H = Hourly rate (junior or senior designer)

T = Time required to complete the task (in hours)

R = Revision factor (typically 0.2–0.5 for junior designers, 0.1–0.2 for senior designers)

I’ve added this revision factor because, in my experience, junior designers typically need more revisions, making their effective cost higher than their hourly rate suggests.
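As a quick sanity check, the COW formula drops straight into a few lines of Python; the rates and times below are the illustrative figures from the comparison that follows, not benchmarks.

```python
def cost_of_work(hourly_rate, hours, revision_factor):
    """Direct cost of a design task: COW = H x T x (1 + R)."""
    return hourly_rate * hours * (1 + revision_factor)

# Illustrative figures: junior at $50/hr, 10 hours, revision factor 0.4
# vs. senior at $100/hr, 4 hours, revision factor 0.15.
junior = round(cost_of_work(50, 10, 0.4))
senior = round(cost_of_work(100, 4, 0.15))
print(junior, senior)
```

Note how the revision factor, not the hourly rate, is what flips the comparison in the senior designer’s favor.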

| Designer Type | Hourly Rate (H) | Time per Task (T) | Revision Factor (R) | Total Cost (COW) |
| --- | --- | --- | --- | --- |
| Junior Designer | $50 | 10 hours | 0.4 | $700 |
| Senior Designer | $100 | 4 hours | 0.15 | $460 |

At first glance, junior designers might seem more cost-effective due to their lower hourly rates. But in my experience, the increased time (T) and revision factor (R) they typically need can lead to higher overall project costs. For complex or high-risk projects, I’ve found that the efficiency of a senior designer often reduces overall expenses.

Not all tasks are created equal. A simple landing page redesign is worlds apart from an enterprise dashboard revamp. Let’s adjust for this by introducing task complexity.

A Quick Note: I’m not talking about job titles here. A designer with a junior title might have more experience than a senior one in certain contexts. We’re focusing on experience gained, assuming that senior designers have more years under their belt and a proven track record.

Understanding task complexity

In my work, I’ve found that task difficulty comes down to these key factors:

– Cognitive load (how much mental effort is needed)

– Sequential dependencies (how steps relate to each other)

– Error sensitivity (what happens when things go wrong)

– Feedback loops (how users know they’re on the right track)

Take configuring Multi-Factor Authentication (MFA) in an enterprise security dashboard. It’s a complex task that requires administrators to understand security protocols, authentication methods, and user access levels. The process involves multiple settings that must be configured in a specific order, where mistakes can lock out users or create security vulnerabilities, and errors might not be immediately visible.

An abstract image of geometric shapes creating a complex pattern
Photo by Resource Database on Unsplash

The task complexity formula

After years of analysing different tasks, I’ve developed this formula to measure complexity:

C = (S × 0.3 + D × 0.2) × SD + (E × 0.3 − F × 0.2)

Where:

S = Number of Steps (weighted 30%)

D = Number of Major Decision Points (weighted 20%)

SD = Sequential Dependencies (a multiplier that adjusts for dependencies between steps)

E = Error Sensitivity (weighted 30%)

F = Feedback Clarity (weighted 20%)

This formula:

– Weights different factors based on their real-world importance

– Emphasizes steps and error sensitivity (because these matter most in practice)

– Maintains the relationship between dependencies and complexity

– Better reflects what I’ve seen in actual projects

A Real Example:

Let’s say we’re setting up MFA with:

– 8 steps (S)

– 4 major decision points (D)

– Strong sequential dependencies (SD = 2)

– High error sensitivity (E = 5)

– Moderate feedback clarity (F = 3)

Here’s how it works:

C = (8 × 0.3 + 4 × 0.2) × 2 + (5 × 0.3 − 3 × 0.2)

C = (2.4 + 0.8) × 2 + (1.5 − 0.6)

C = 3.2 × 2 + 0.9

C = 6.4 + 0.9 = 7.3
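The worked example above can be sketched in Python, using the same weights and the MFA inputs from the list:

```python
def task_complexity(steps, decisions, seq_dep, error_sens, feedback):
    """C = (S*0.3 + D*0.2) * SD + (E*0.3 - F*0.2)"""
    return (steps * 0.3 + decisions * 0.2) * seq_dep \
        + (error_sens * 0.3 - feedback * 0.2)

# MFA configuration: 8 steps, 4 decision points, strong sequential
# dependencies (SD=2), high error sensitivity (5), moderate feedback (3).
c = task_complexity(steps=8, decisions=4, seq_dep=2, error_sens=5, feedback=3)
print(round(c, 1))  # 7.3
```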

But here’s the thing: efficiency alone doesn’t tell the whole story. While a senior designer might finish a complex task faster than a junior designer, the real question is: what happens if the design is flawed?

Risk impact

In my experience, understanding the relationship between task complexity and cost efficiency is crucial. A poorly executed low-risk task (like a simple UI tweak) might only need minor revisions. But a flawed high-risk task (think checkout flow, medical device UI, or enterprise security settings) can have serious financial, legal, or usability consequences.

That’s why when we evaluate who should handle which tasks, we should look beyond just task complexity and consider risk impact (R):

The risk impact formula

R = (U × 0.4 + B × 0.4 + T × 0.2)

Where:

U = User Impact (40% weight)

B = Business Impact (40% weight)

T = Technical Complexity (20% weight)

I’ve weighted it this way because:

– User and business impact matter most (40% each)

– Technical complexity is important but often secondary (20%)

– This better reflects what I’ve seen in real projects

Each factor is rated from 1 (low impact) to 5 (high impact).

Real-World Examples

Scenario 1: Minor UI Tweak (like changing button color)

User Impact (U) = 1 (Users barely notice)

Business Impact (B) = 1 (No financial impact)

Technical Complexity (T) = 1 (Simple CSS change)

R = (1 × 0.4 + 1 × 0.4 + 1 × 0.2) = 1.0

Scenario 2: Checkout Flow (high-risk task)

User Impact (U) = 5 (Critical failure, prevents key tasks)

Business Impact (B) = 5 (High financial loss, regulatory risk)

Technical Complexity (T) = 4 (Requires deep expertise, major risks)

R = (5 × 0.4 + 5 × 0.4 + 4 × 0.2) = 4.8

In this case, the risk score for the checkout flow would be 4.8, significantly higher than the 1.0 for the minor UI tweak. This tells us that the checkout flow needs more careful handling and likely requires higher expertise.
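Both scenarios reduce to one weighted sum; this sketch reproduces the two scores above:

```python
def risk_impact(user, business, technical):
    """R = U*0.4 + B*0.4 + T*0.2; each factor rated 1 (low) to 5 (high)."""
    return user * 0.4 + business * 0.4 + technical * 0.2

tweak = risk_impact(1, 1, 1)       # minor UI tweak: everything low impact
checkout = risk_impact(5, 5, 4)    # checkout flow: high stakes throughout
print(round(tweak, 1), round(checkout, 1))
```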

A photo of a young girl jumping between two cliffs
Photo by The Chaffins on Unsplash

Risk-adjusted cost (RAC)

Once you’ve figured out the Risk Impact (R), here’s the formula I use to calculate the overall cost efficiency:

RAC = COW × R × (1 + C)

Where:

COW = the original Cost of Work

R = Risk Impact (calculated as above)

C = Task Complexity (calculated as above)

This formula:

Includes task complexity as a multiplier

Better reflects how complex tasks increase risk-adjusted costs

Gives you a more complete picture of true project costs

Real Examples:

For a minor UI tweak (assuming COW = 5 hours, C = 1):

RAC = 5 hours × 1.0 × (1 + 1) = 10 hours

For a checkout flow (assuming COW = 20 hours, C = 7.3):

RAC = 20 hours × 4.8 × (1 + 7.3) = 796.8 hours

By considering both risk impact and task complexity, we get a much clearer picture of the true cost of a task. This helps me make better decisions about when to invest in senior expertise versus using a more cost-efficient solution with a junior designer.
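The two RAC examples above, with COW expressed in hours of work as in the text, come out as:

```python
def risk_adjusted_cost(cow, risk, complexity):
    """RAC = COW x R x (1 + C); COW here is measured in hours of work."""
    return cow * risk * (1 + complexity)

minor_tweak = risk_adjusted_cost(5, 1.0, 1)            # 5h, R=1.0, C=1
checkout = risk_adjusted_cost(20, 4.8, 7.3)            # 20h, R=4.8, C=7.3
print(minor_tweak, round(checkout, 1))
```

The multiplicative structure is the point: complexity and risk compound rather than add, which is why the checkout flow’s adjusted cost explodes.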

User research as a risk mitigator

In my experience, risk in design doesn’t just come from complexity or technical challenges; it also comes from uncertainty about user behavior. A product might look great from a business or engineering perspective, but if users can’t figure it out, the risk of failure skyrockets.

That’s where user research becomes a powerful risk mitigation tool. By catching usability issues early, it helps reduce rework costs, user frustration, and potential business losses.

Here’s how I adjust the Risk Score (R) formula to account for user research:

R_adjusted = R × (1 − UR) × (1 − C × 0.1)

Where:

UR = Effectiveness of user research (scaled 0 to 1)

0 (No research): Maximum risk remains

0.5 (Moderate research): Risk is reduced by 50%

1.0 (Comprehensive research): Risk is fully mitigated (in theory)

C = Task Complexity (as calculated above)

0.1 = Complexity factor (more complex tasks benefit more from research)

This formula:

– Shows how research effectiveness varies with task complexity

– Demonstrates that complex tasks benefit more from research

– Gives a more realistic view of risk reduction

Real Example: Checkout Flow Redesign

Initial Risk Score (R) = 4.8 (Critical feature, complex integration, business impact)

Task Complexity (C) = 7.3

No user research (UR = 0):

R_adjusted = 4.8 × (1 − 0) × (1 − 7.3 × 0.1) = 4.8 × 1 × 0.27 ≈ 1.3

High risk remains → Needs senior expertise & extensive testing

Moderate research (UR = 0.5):

R_adjusted = 4.8 × (1 − 0.5) × (1 − 7.3 × 0.1) = 4.8 × 0.5 × 0.27 ≈ 0.65

Risk is reduced → Still needs validation but is safer

Comprehensive research (UR = 0.8):

R_adjusted = 4.8 × (1 − 0.8) × (1 − 7.3 × 0.1) = 4.8 × 0.2 × 0.27 ≈ 0.26

Low risk → Junior designers can implement with minimal oversight
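The three research scenarios above are the same formula evaluated at different UR values:

```python
def adjusted_risk(risk, research, complexity):
    """R_adjusted = R x (1 - UR) x (1 - C x 0.1)"""
    return risk * (1 - research) * (1 - complexity * 0.1)

# Checkout flow redesign: R = 4.8, C = 7.3, at three research levels.
for ur in (0.0, 0.5, 0.8):
    print(ur, round(adjusted_risk(4.8, ur, 7.3), 2))
```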

A microscope photo of bubbles in liquid in rainbow colors
Photo by Who’s Denilo ? on Unsplash

Research validity

One of the most important things I’ve learned in UX design is knowing when you’ve done enough testing. This is where Nielsen’s Law of Diminishing Returns for Usability Testing comes in (Nielsen, 2000). It states that the first few usability test participants uncover most usability problems, while additional users reveal fewer new issues. In practice, testing with 5–8 users usually finds most major problems, making usability testing highly cost-effective.

Here’s the formula I use for calculating the Percent of Usability Issues Found (P):

P = 1 − (1 − λ)^(n × (1 + C × 0.05))

Where:

λ (lambda) is the problem discovery rate per participant (typically 0.31, based on Nielsen’s research)

n is the number of participants

C is the task complexity

0.05 is the complexity adjustment factor

This formula:

– Accounts for how task complexity affects issue discovery

– Shows that complex tasks need more testing

– Maintains the core relationship from Nielsen’s Law

Real Examples:

For a simple task (C = 1):

– At 5 participants: ~85% of issues found

– At 8 participants: ~95% of issues found

For a complex task (C = 7.3):

– At 5 participants: ~92% of issues found

– At 8 participants: ~98% of issues found

Beyond 8 participants, the ROI in terms of discovering new issues drops sharply, making additional users less cost-effective.
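A small sketch of the discovery curve, with the complexity adjustment applied to the exponent so that the percentages match the examples above (values come out within a point or two of the rounded figures in the text):

```python
def issues_found(n, complexity, discovery_rate=0.31):
    """P = 1 - (1 - lambda)^(n x (1 + C x 0.05)): Nielsen-style discovery
    curve with a complexity adjustment on the effective participant count."""
    return 1 - (1 - discovery_rate) ** (n * (1 + complexity * 0.05))

for c in (1, 7.3):
    for n in (5, 8):
        print(f"C={c}, n={n}: {issues_found(n, c):.0%} of issues found")
```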

In my experience, cost-efficiency in UX design is crucial because the more users you test, the higher the cost. However, most usability issues surface early in testing, making it more effective to test 5 users, fix the issues, and then retest rather than testing 20 users all at once.

This iterative testing approach lets you improve the design faster and more cost-effectively. However, there are exceptions where more users are necessary:

– For diverse user groups, you might need separate tests for different personas

– Quantitative metrics (like A/B testing) need larger sample sizes (50–100 users)

– Edge cases or accessibility testing might need specialized users (like screen reader users)

Cost of user testing

Let me break down the real costs of user testing based on my experience. These costs vary depending on multiple factors like test complexity, participant numbers, required resources, and the tools or services used. It’s crucial to factor in all these elements when budgeting for usability testing.

Here’s the formula I use to calculate the Cost of User Testing (CUT):

CUT = (H × T × n × (1 + C × 0.1)) + Recruitment + (n × Incentive)

Where:

H = Hourly rate of facilitator/moderator

T = Time per test session

n = Number of participants

C = Task complexity

Recruitment = Base recruitment costs

Incentive = Per-participant incentive

This formula:

– Accounts for how complexity affects testing time

– Includes both fixed and variable costs

– Better reflects real-world testing expenses

Real Examples:

For a simple task (C = 1):

– 5 participants

– $100/hr facilitator

– 2 hours per session

– $50 recruitment

– $25 per participant incentive

CUT = ($100 × 2 × 5 × 1.1) + $50 + (5 × $25)

CUT = $1,100 + $50 + $125 = $1,275

For a complex task (C = 7.3):

– 8 participants

– $150/hr facilitator

– 3 hours per session

– $100 recruitment

– $50 per participant incentive

CUT = ($150 × 3 × 8 × 1.73) + $100 + (8 × $50)

CUT = $6,228 + $100 + $400 = $6,728
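Both budget examples follow mechanically from the CUT formula:

```python
def cost_of_user_testing(rate, hours, n, complexity, recruitment, incentive):
    """CUT = (H x T x n x (1 + C x 0.1)) + Recruitment + (n x Incentive)"""
    return rate * hours * n * (1 + complexity * 0.1) + recruitment + n * incentive

simple = cost_of_user_testing(100, 2, 5, 1, 50, 25)      # simple task, C = 1
complex_task = cost_of_user_testing(150, 3, 8, 7.3, 100, 50)  # C = 7.3
print(round(simple), round(complex_task))
```

The session costs scale with complexity, while recruitment and incentives stay fixed or per-head, which is why the complex study is over five times the price.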

While user testing might seem expensive upfront, I’ve found it’s highly cost-effective in the long run. Early identification of issues lets teams make quick corrections, often at a much lower cost than fixing problems after launch.

Post-launch fixes can be significantly more expensive, involving costly design changes, development time, and potentially brand damage or lost users.

By conducting iterative testing (small batches of tests with users), you can catch and fix problems early in the design process, reducing the need for expensive fixes after the product is live.

A photo of two people's feet standing in front of the text “Passion led us here” written on the ground
Photo by Ian Schneider on Unsplash

Expert vs. user testing in UX design

In my experience, there are two main approaches to usability testing: Expert Testing and User Testing. Both are valuable but serve different purposes depending on the design stage, product type, and test goals.

Expert testing

Expert testing involves having UX designers, usability experts, or subject matter specialists evaluate the product based on their knowledge and experience. This type of testing is usually done without actual users interacting with the product.

What I’ve Learned About Expert Testing:

– Quick & Cost-Effective: No need to recruit participants or wait for test sessions

– Professional Insight: Experts can spot high-level usability issues quickly

– Early in Development: Great for early stages when you’re still prototyping

– High-level feedback: Focuses on general usability issues

The Downsides I’ve Seen:

– Lack of Real-World Feedback: Experts might miss specific user needs

– Subjectivity: Personal biases can influence evaluations

– Limited in Depth: Can’t fully capture user-specific challenges

The Expert testing risk formula

Risk = R / [1 + P × (1 + C × 0.1)]

Where:

R = Expert risk factor (usually 0.2 to 0.8)

P = Problem discovery rate (usually 0.3 to 0.5)

C = Task complexity (e.g., 1–10)

This formula:

– Shows how complexity affects expert effectiveness

– Demonstrates that experts are more valuable for complex tasks

– Provides a more nuanced view of expert testing risk

User testing

User testing involves having real users, who match your target audience, interact with the product. These users are observed while completing tasks, and their behaviors and reactions are recorded for analysis.

What I’ve learned about user testing:

– Real User Insights: Reveals actual usability issues users face

– Real-World Data: Involves actual people who will use the product

– Unbiased Feedback: Users offer authentic feedback about struggles

– Exploration of Edge Cases: Users might interact in unexpected ways

The Challenges I’ve Faced:

– Cost and Time-Consuming: Requires significant resources

– Logistical Challenges: Can be difficult to organize

– Limited Scope: Small user groups only uncover some issues

The user testing risk formula

Risk = R / [1 + P × (1 + C × 0.2)]

Where:

R = User risk factor (usually between 0.1 and 0.3)

P = Problem discovery rate (typically between 0.6 and 0.8)

C = Task complexity (e.g., 1–10)

This formula:

– Shows that users are more effective at finding issues in complex tasks

– Reflects the higher discovery rate of user testing

– Provides a more accurate risk assessment

Real Example:

R = 0.2

P = 0.75

C = 7.3

Result: Risk_User = 0.2 / (1 + 0.75 × (1 + 7.3 × 0.2)) ≈ 0.07

This indicates very low risk because real users are more likely to uncover usability problems than experts, especially for complex tasks.

Comparing expert design, user research and junior design

In my experience, not all design tasks need the same level of expertise, and not all projects require extensive user research. Understanding where each approach fits best within a cost-effective framework has helped me make smarter design decisions.

Expert design vs. junior design: finding the right fit

A Real Scenario:

Let’s say we’re creating a new UI for a security system with a moderately complex user flow (like an admin panel for managing user roles and permissions).

What I’ve Learned:

– Expert Designer: Can complete tasks faster with fewer revisions

– Junior Designer: Needs more time, guidance, and iterations

Real Costs:

| Role | Hourly Rate | Estimated Time (Hours) | Revision Factor | Total Cost |
| --- | --- | --- | --- | --- |
| Expert Designer | $150/hr | 12 hours | 0.15 | $2,070 |
| Junior Designer | $75/hr | 24 hours | 0.4 | $2,520 |

My Analysis:

In this case, the expert designer ($2,070) actually costs less than the junior designer ($2,520) because:

– Lower revision factor (0.15 vs 0.4)

– Faster completion time (12 vs 24 hours)

Conclusion

I’ve spent years developing these formulas as a way to understand the complex relationships in UX design. While I never actually calculate these numbers in my daily work, they’ve become a valuable mental framework that helps me navigate design decisions and communicate with stakeholders.

What makes these formulas powerful isn’t their precision, but how they help me understand the trade-offs between cost, expertise, and impact. They’re like a compass that helps orient my thinking, not a rigid map that tells us exactly where to go.

UX design is both art and science. While these formulas help us understand the relationships between different factors, they can’t capture the full complexity of real-world design decisions. Every project brings unique challenges that require experience, intuition, and good judgment.

The real value of this framework lies in how it helps me think about and discuss complex design decisions. It’s a tool for understanding, not a replacement for experience. The best UX decisions come from balancing analytical thinking with creative intuition — something no formula can fully capture.

References

Nielsen, J. (2000). Why You Only Need to Test with 5 Users. Nielsen Norman Group. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

Nielsen, J. (1993). Usability Engineering. Academic Press.

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, 206–213.

Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379–383.


The cost of UX: balancing cost, expertise, and impact was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.

