Can we assess the morality of our UX?

From pixels to principles, from code to conscience

Image of half cat / half machine
Image: Midjourney

Morality is a combination of personal and social standards. It allows us to behave acceptably in society. What morality really means is something philosophers and the like have been reflecting on for ages.

Social psychologist Jonathan Haidt is among them. I discussed his Moral Foundations Theory in my article “A moral divide: why progressives and conservatives don’t get each other”.

I have a few problems with Haidt’s model, but I won’t explore them in this article. Instead, I will explore how we can use his model to identify the moral choices we make when designing digital products.

I will do this by going through the five foundations Haidt uses to identify morality:

  • Care / Harm
    How much do we take the suffering of others into account?
  • Fairness / Cheating
    How important are proportionality, merit, and being ‘just’ to us?
  • Loyalty / Betrayal
    How strong are our obligations to our group, family, or country?
  • Authority / Subversion
    How much do we need to obey and respect hierarchy, parents, police, etc.?
  • Sanctity / Degradation
    How important is it to control our desires and have virtues of chastity?

The purpose of this article is not to identify the balance between capitalistic and moralistic values. I will use Haidt’s moral model to explore if we can use his standards to assess the morality of our products.

1. Care / Harm

Our moral compass is primarily guided by whether our actions cause harm to others. Almost all of us score high on this care foundation. As a result, as a society, we find it immoral to harm someone.

But despite this perceived importance of not hurting people, we design apps that cause financial, emotional, or even physical harm. How much we are willing to accept this depends on our — let’s call it — digital morality.

We can lay out a few examples that illustrate the principle of care.

Uber / Bolt / Grab / Etc.

How acceptable is it that a food delivery app designs screens that put so much stress on delivery people that they are willing to cycle through red lights to make sure the customer gets their pizza on time? Is it OK to penalise a delivery person whose pizza arrives a few minutes late because traffic lights caused a delay?

Safe Work Australia analysed the state of this industry and noticed:

  • Unsafe systems of work, for example unrealistic delivery times leading to unsafe riding.
  • Economic pressures that may encourage workers to take unnecessary risks.
  • Violence, aggression and harassment including from customers and others.

Beyond food delivery, ride-hailing apps like Bolt and Uber have additional moral choices to make.

Should we allow design patterns that push drivers to continue their shifts for longer than physically or legally advised?

In our effort to serve the primary customer, we might forget that the secondary customer can be put at mental and physical risk.

Social Media

There’s enough said about the mental stress that social media platforms put on teenagers. Scrolling patterns, notifications, like systems, etc., are deliberate design choices that lead to addiction, anxiety, and digital social exclusion.

Thematic analysis suggested that adolescents perceived social media as a threat to mental wellbeing and three themes were identified: (1) it was believed to cause mood and anxiety disorders for some adolescents, (2) it was viewed as a platform for cyberbullying and (3) the use of social media itself was often framed as a kind of ‘addiction’.
 —Is social media bad for mental health and wellbeing?

Social media might be partly responsible for the recent deterioration in mental health among teenagers and young adults. It is up to social media platforms, regulators, and future research to determine whether and how these effects can be alleviated.
 —Social Media and Mental Health

Facebook and other social media apps have neglected UX research to chase user growth. This has been a deliberate moral choice.

Strava

The cycling and workout app Strava allows users to compete in segments. The app tracks your time on a certain part of a road, usually a climb. You can then compare your results with those of the community, including professional cyclists. This gamification works really well. It makes people push their physical limits.

A nice additional feature is that you can follow your efforts in real-time. Your phone, or bike computer, will show during your workout how many seconds you are ahead or behind your goal. For instance, your PR, or time of a friend.

Strava decided to remove this live tracking for downhill segments: you can imagine how chasing a live PR on a descent encouraged athletes to take unnecessary risks on the public road. The downhill segments themselves still exist, just without live tracking. It’s thus still possible to brag about your average and maximum speed.

Moral products don’t deliberately cause financial, emotional, or physical harm.


2. Fairness / Cheating

We want to live in a fair world. For some, fairness implies equality, while for others, it means proportionality: people should be rewarded for their contributions.


Accessibility

Whether we grant or deny access to certain groups of users plays a vital role in tech, and it is very much a question of fairness. Accessibility is about allowing users with visual, auditory, physical, or cognitive difficulties to use our products.

Users should be able to use our products with screen readers, UIs should contain enough colour contrast, and language should be understandable to all.

Physical products should also be designed so that as many people as possible can use them. It may seem a minor issue, but as a left-handed person, it’s hard to believe how many problems I already encounter.

The ease with which designers pick colours, without looking at contrast, is a good example of how little awareness our industry sometimes has. With slight colour adjustments, UIs can be made much easier to digest for the colourblind.
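To make the contrast point concrete, the WCAG 2.x contrast ratio can be computed directly from two sRGB colours. This is a minimal sketch of the published formula (relative luminance, then the ratio), not a substitute for a proper accessibility audit.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour (0-255 per channel)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white gives the maximum possible ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

For reference, WCAG AA asks for at least 4.5:1 for normal body text and 3:1 for large text; a quick check like this catches many colour choices before they ship.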

People with darker skin struggle with facial recognition software because contrast calibration is optimised for light skin.

A moral product has its accessibility in order. Product teams should be mindful of the challenges they can create for the non-standard user.

Honesty and justice

Justice and honesty go closely together. Is it acceptable to lie to the user? For instance, can a hotel booking platform give you the illusion that a particular room is likely to be fully booked soon, whereas, in reality, there’s enough availability?

What about dating apps that add fake accounts to make (usually male) users believe that plenty of potential dates are available? What if this dating platform creates an algorithm that puts attractive people on top of the stack? This brings us back to the preference for equality or proportionality.


Hidden costs

Is it OK for companies to hide certain costs in the checkout process? Amazon has been scrutinised for subconsciously nudging people into signing up for Amazon Prime.

The Federal Trade Commission is taking action against, Inc. for its years-long effort to enroll consumers into its Prime program without their consent while knowingly making it difficult for consumers to cancel their subscriptions to Prime.

Many growth teams struggle with the debate about how easy it should be for users to unsubscribe. Another discussion is whether adding hidden costs to the checkout process is okay.

Surveillance and data collection

As users, we sometimes deliberately, but mostly unknowingly, hand over a ton of data to the companies behind our apps. These companies harvest everything they can find about you and sell it to data brokers. Those data brokers then build profiles about you and sell them on to companies that use them to make decisions about you.

This article isn’t about the mechanisms of data as a commodity but about morality. To what degree should apps inform you about the data they collect, and what they afterwards do with it? Most people don’t read the privacy statements and would be shocked to know what’s in them. Should companies be more open about their intentions, or is it OK to hide them in long legal documents written in incomprehensible language?

Moral products show information that is aligned with reality. They also don’t create illusory engagement, for instance with fake users.

3. Loyalty / Betrayal

Haidt and I have different ideas about what loyalty means. For him, loyalty is strongly related to family values and patriotism. This is obviously one aspect of loyalty, but universalism and globalism are other shades of loyalty—something he doesn’t include.

Patriotic Branding

Some companies have strongly patriotic communication. The Spanish car brand Seat names its models after Spanish places (Leon, Ibiza, etc.). Apple’s macOS releases are named after Californian landmarks or big cats (Yosemite, Snow Leopard, etc.). Swiss watch, chocolate, and knife brands often have the Swiss cross in their logo. Sports teams, news websites, and governmental apps usually have a communication style representing their origins.

Does this make these companies more moral than those with neutral colours and branding? I don’t think so. They just use a certain marketing strategy that works for them.

Prioritising domestic products

What about webshops that give foreign products as much visibility as domestic ones? Should they prioritise local brands to aid the national economy? Should they favour their commercial objectives and sell as much as possible, or should they forgo sales and push domestic products?

Encouraging new or short-term connections

Plenty of apps allow you to expand, or even substitute, your social circle. Solo travelling has never been easier because you can find people to meet on Couchsurfing or other apps. Business networking has moved from the local Rotary club to LinkedIn. Social media offers a chance for people with unique desires to find like-minded people.

This contradicts the basic principle of loyalty. You can find new connections if you are unhappy with your current ones. Does this make these products immoral?

It’s hard to define what a moral product means regarding loyalty.

This is up to the company and user base to decide.

4. Authority / Subversion

The government, the police, and parents are the most important authorities, according to Haidt. Respect and obedience are a big part of this moral foundation.

How much we have to show respect is related to the cultural dimension of power distance.

Tone of Voice

This cultural divide of authority can be seen in how companies address users. I worked in multicultural environments where this was often a heated debate.

Many Germanic and Romance languages distinguish between a formal address (vous, Sie) and an informal one (tu, du). In countries with a high power distance, it’s not done to tutoyer (use the informal form) in a digital product.

Choosing between “vous” or “tu” can be compared with having to choose between writing “Dear Mr. Wallet”, or “Hi Bas”.

Your language's formality depends on your target audience's social and cultural conventions.
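The vous/tu choice can even be encoded in the product itself. Below is a minimal sketch of formality-aware UX copy; the locales, keys, defaults, and strings are hypothetical examples, not any real product’s API.

```python
# Hypothetical copy table keyed by (locale, register). In a real product,
# this would live in your localisation files, not in code.
COPY = {
    ("fr", "formal"):   "Bonjour, comment pouvons-nous vous aider ?",
    ("fr", "informal"): "Salut, comment peut-on t'aider ?",
    ("de", "formal"):   "Guten Tag, wie können wir Ihnen helfen?",
    ("de", "informal"): "Hi, wie können wir dir helfen?",
}

# Assumed defaults, for illustration only: French users get the formal
# register by default, German users the informal one.
DEFAULT_REGISTER = {"fr": "formal", "de": "informal"}

def greeting(locale, register=None):
    """Return the greeting for a locale, falling back to its default register."""
    register = register or DEFAULT_REGISTER.get(locale, "formal")
    return COPY[(locale, register)]

print(greeting("fr"))  # the formal "vous" form by default
```

The design choice worth noting is that the register is a first-class parameter rather than a hard-coded string, so a team can change its tone-of-voice decision per market without rewriting screens.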

Who to trust?

Another aspect of authority is how countries look at trustworthiness. Countries that value authority tend to rely more on certificates and labels that prove competence: the badges you find at the bottom of websites.

They also like endorsements of politicians or other important people to establish their reliability. Egalitarian countries can use humour and celebrities to increase brand reputation.

A lot of UX is about communication.

Aligning your UX copy with the target user and their cultural norms is important and is related to this moral foundation.

There is not necessarily a wrong and right here.

Breaking or avoiding the rules

There is certainly a group of products that help you to be “creative” with the rules.

  • Some apps inform you where speed checks on the motorway are. This allows you to drive faster than is allowed.
  • VPN apps allow you to bypass local internet restrictions, preventing the government from monitoring your online behaviour.
  • VPNs bring us directly to illegal download platforms like Pirate Bay. Such platforms help you break copyright laws.

A case can definitely be made that apps that help you break the law are less moral.


5. Sanctity / Degradation

The last foundation Haidt covers is sanctity. Here he opens the culture-war can of worms. Sanctity means living a pure life according to religious values. Chastity and acting in accordance with god’s expectations are valued in this foundation.

Just a reminder: my critique of Haidt’s moral model is not part of this article.

Gender and sexuality

When designing digital products, we need to make a fair few choices that can provoke emotional responses. We do this in our sign-up forms.

What options do we offer the user to select a gender? Only male and female, or do we allow more options?
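As a sketch of what a more inclusive sign-up form could offer, the option list and validation below are hypothetical examples for illustration, not a recommendation that fits every product or user base.

```python
# Hypothetical gender field for a sign-up form. A self-describe option
# needs an accompanying free-text field; "prefer not to say" lets users
# opt out entirely.
GENDER_OPTIONS = [
    ("female", "Female"),
    ("male", "Male"),
    ("non_binary", "Non-binary"),
    ("self_describe", "Prefer to self-describe"),
    ("not_disclosed", "Prefer not to say"),
]

def validate_gender(value, free_text=""):
    """Accept any listed option; 'self_describe' requires the free-text field."""
    keys = {key for key, _ in GENDER_OPTIONS}
    if value not in keys:
        return False
    if value == "self_describe" and not free_text.strip():
        return False
    return True

print(validate_gender("not_disclosed"))  # True
```

Whether a product offers two options or five is exactly the kind of moral choice this foundation surfaces; the point of the sketch is that the choice is made explicitly, in one place, rather than hard-coded across screens.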

What about dating apps? Are they binary? Or even more degrading, according to Haidt’s model… should dating apps offer options to indicate that you mostly use the app for a one-night stand? There are even dating apps specifically designed for married couples looking for “fun out of wedlock”.

We could also mention apps like OnlyFans and other ‘intimate’ platforms, but you can put this in perspective yourself.


Gambling

I think we all know how gambling works, so we don’t have to cover all the details.

What’s important to mention is that dating, porn, and gambling apps are open about their intentions. Can we consider these apps immoral if they don’t use any deceptive design patterns?

Dietary choices

For many people, nutrition is related to purity. Jews, Muslims, and Hindus have their religious diets. Vegetarians and vegans have their idealistic choices. People with allergies have certain ingredients that affect the purity of their body.

How far should apps go in adding filters to find meals that suit your diet? Should those food ordering apps force restaurants to add all this dietary information? This is another aspect where business costs, value, and morality come together.

Can we define the sanctity of digital products? I don’t think so. What’s pure and what’s not differs between user groups.

Can we assess the morality of our digital products?

Probably not. It’s perhaps even difficult to assess the morality of individuals or societies. Several scientists came up with alternatives to Haidt’s model, which illustrates that there isn’t a universal idea regarding what defines good or bad.

We design our products to help our investors, company, or users. Or, hopefully, all of them. The more we become aware of our moral responsibilities, the more we can put our products in the sweet spot of all these actors.

Making it almost impossible to unsubscribe from a product might sound strategically smart, but it can cause reputational damage.

Not making your product accessible might seem logical to reduce development costs, but it can eventually lead to lawsuits and expensive redevelopment projects.

As designers, we often get demands from our head of product, product owner, or anyone with decision-making power. Do these demands always make sense? How do these demands affect the user's emotional, financial, and physical safety?

We should be conscious of how new features can influence the position of our products within Haidt’s moral foundations.

Where you want to position your product within the moral matrix is something you need to decide as a company. At least, you should make this choice deliberately.

Thank you for reaching the end

All my articles are open to everyone, so no “members only” wall
Please consider following me as a token of gratitude

Can we assess the morality of our UX? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.





